author		2023-02-21 18:24:12 -0800
committer	2023-02-21 18:24:12 -0800
commit		5b7c4cabbb65f5c469464da6c5f614cbd7f730f2 (patch)
tree		cc5c2d0a898769fd59549594fedb3ee6f84e59a0 /tools/perf/util/stat.c
Merge tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"Core:
- Add dedicated kmem_cache for typical/small skb->head, avoid having
to access struct page at kfree time, and improve memory use.
- Introduce sysctl to set default RPS configuration for new netdevs.
- Define a Netlink protocol specification format which can be used to
describe messages used by each family and auto-generate parsers.
Add tools for generating kernel data structures and uAPI headers.
- Expose all net/core sysctls inside netns.
- Remove 4s sleep in netpoll if carrier is instantly detected on
boot.
- Add a configurable limit of MDB entries per port, and per port-vlan.
- Continue populating drop reasons throughout the stack.
- Retire a handful of legacy Qdiscs and classifiers.
Protocols:
- Support IPv4 big TCP (TSO frames larger than 64kB).
- Add IP_LOCAL_PORT_RANGE socket option, to control the local port
range on a socket-by-socket basis (see the usage sketch after this
quoted message).
- Track and report in procfs the number of MPTCP sockets used.
- Support mixing IPv4 and IPv6 flows in the in-kernel MPTCP path
manager.
- IPv6: don't check net.ipv6.route.max_size and rely on garbage
collection to free memory (similarly to IPv4).
- Support Penultimate Segment Pop (PSP) flavor in SRv6 (RFC8986).
- ICMP: add per-rate limit counters.
- Add support for user scanning requests in ieee802154.
- Remove static WEP support.
- Support minimal Wi-Fi 7 Extremely High Throughput (EHT) rate
reporting.
- WiFi 7 EHT channel puncturing support (client & AP).
BPF:
- Add a rbtree data structure following the "next-gen data structure"
precedent set by the recently added linked list, that is, by using
kfunc + kptr instead of adding a new BPF map type.
- Expose XDP hints via kfuncs with initial support for RX hash and
timestamp metadata.
- Add BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key to
better support decap on GRE tunnel devices not operating in collect
metadata mode.
- Improve x86 JIT's codegen for PROBE_MEM runtime error checks.
- Remove the need for trace_printk_lock for bpf_trace_printk and
bpf_trace_vprintk helpers.
- Extend libbpf's bpf_tracing.h support for tracing arguments of
kprobes/uprobes and syscall as a special case.
- Significantly reduce the search time for module symbols by
livepatch and BPF.
- Enable cpumasks to be used as kptrs, which is useful for tracing
programs tracking which tasks end up running on which CPUs in
different time intervals.
- Add support for BPF trampoline on s390x and riscv64.
- Add capability to export the XDP features supported by the NIC.
- Add __bpf_kfunc tag for marking kernel functions as kfuncs.
- Add cgroup.memory=nobpf kernel parameter option to disable BPF
memory accounting for container environments.
Netfilter:
- Remove the CLUSTERIP target. It has been marked as obsolete for
years, and we still have WARN splats wrt races of the out-of-band
/proc interface installed by this target.
- Add 'destroy' commands to nf_tables. They are identical to the
existing 'delete' commands, but do not return an error if the
referenced object (set, chain, rule...) did not exist.
Driver API:
- Improve cpumask_local_spread() locality to help NICs set the right
IRQ affinity on AMD platforms.
- Separate C22 and C45 MDIO bus transactions more clearly.
- Introduce new DCB table to control DSCP rewrite on egress.
- Support configuration of the Physical Layer Collision Avoidance
(PLCA) Reconciliation Sublayer (RS) (802.3cg-2019), a modern version
of shared-medium Ethernet.
- Support for the MAC Merge layer (IEEE 802.3-2018 clause 99),
allowing preemption of low-priority frames by high-priority frames.
- Add support for controlling MACsec offload using netlink SET.
- Rework devlink instance refcounts to allow registration and
de-registration under the instance lock. Split the code into
multiple files, drop some of the unnecessarily granular locks and
factor out common parts of netlink operation handling.
- Add TX frame aggregation parameters (for USB drivers).
- Add a new attr TCA_EXT_WARN_MSG to report TC (offload) warning
messages in notifications, for debugging.
- Allow offloading of UDP NEW connections via act_ct.
- Add support for per action HW stats in TC.
- Support hardware miss to TC action (continue processing in SW from
a specific point in the action chain).
- Warn if the old Wireless Extensions user space interface is used
with modern cfg80211/mac80211 drivers. Do not support Wireless
Extensions for Wi-Fi 7 devices at all. Everyone should switch to
using the nl80211 interface instead.
- Improve the CAN bit timing configuration: use extack to return
error messages directly to user space and update the SJW handling,
including a new default value that increases oscillator tolerance
for CAN-FD controllers.
New hardware / drivers:
- Ethernet:
- nVidia BlueField-3 support (control traffic driver)
- Ethernet support for imx93 SoCs
- Motorcomm yt8531 gigabit Ethernet PHY
- onsemi NCN26000 10BASE-T1S PHY (with support for PLCA)
- Microchip LAN8841 PHY (incl. cable diagnostics and PTP)
- Amlogic gxl MDIO mux
- WiFi:
- RealTek RTL8188EU (rtl8xxxu)
- Qualcomm Wi-Fi 7 devices (ath12k)
- CAN:
- Renesas R-Car V4H
Drivers:
- Bluetooth:
- Set Per Platform Antenna Gain (PPAG) for Intel controllers.
- Ethernet NICs:
- Intel (1G, igc):
- support TSN / Qbv / packet scheduling features of i226 model
- Intel (100G, ice):
- use GNSS subsystem instead of TTY
- multi-buffer XDP support
- extend support for GPIO pins to E823 devices
- nVidia/Mellanox:
- update the shared buffer configuration on PFC commands
- implement PTP adjphase function for HW offset control
- TC support for Geneve and GRE with VF tunnel offload
- more efficient crypto key management method
- multi-port eswitch support
- Netronome/Corigine:
- add DCB IEEE support
- support IPsec offloading for NFP3800
- Freescale/NXP (enetc):
- support XDP_REDIRECT for XDP non-linear buffers
- improve reconfig, avoid link flap and waiting for idle
- support MAC Merge layer
- Other NICs:
- sfc/ef100: add basic devlink support for ef100
- ionic: rx_push mode operation (writing descriptors via MMIO)
- bnxt: use the auxiliary bus abstraction for RDMA
- r8169: disable ASPM and reset bus in case of tx timeout
- cpsw: support QSGMII mode for J721e CPSW9G
- cpts: support pulse-per-second output
- ngbe: add an mdio bus driver
- usbnet: optimize usbnet_bh() by avoiding unnecessary queuing
- r8152: handle devices with FW with NCM support
- amd-xgbe: support 10Mbps, 2.5GbE speeds and rx-adaptation
- virtio-net: support multi buffer XDP
- virtio/vsock: replace virtio_vsock_pkt with sk_buff
- tsnep: XDP support
- Ethernet high-speed switches:
- nVidia/Mellanox (mlxsw):
- add support for latency TLV (in FW control messages)
- Microchip (sparx5):
- separate explicit and implicit traffic forwarding rules, make
the implicit rules always active
- add support for egress DSCP rewrite
- IS0 VCAP support (Ingress Classification)
- IS2 VCAP filters (protos, L3 addrs, L4 ports, flags, ToS
etc.)
- ES2 VCAP support (Egress Access Control)
- support for Per-Stream Filtering and Policing (802.1Q,
8.6.5.1)
- Ethernet embedded switches:
- Marvell (mv88e6xxx):
- add MAB (port auth) offload support
- enable PTP receive for mv88e6390
- NXP (ocelot):
- support MAC Merge layer
- support for the vsc7512 internal copper PHYs
- Microchip:
- lan9303: convert to PHYLINK
- lan966x: support TC flower filter statistics
- lan937x: PTP support for KSZ9563/KSZ8563 and LAN937x
- lan937x: support Credit Based Shaper configuration
- ksz9477: support Energy Efficient Ethernet
- other:
- qca8k: convert to regmap read/write API, use bulk operations
- rswitch: Improve TX timestamp accuracy
- Intel WiFi (iwlwifi):
- EHT (Wi-Fi 7) rate reporting
- STEP equalizer support: transfer some STEP (connection to radio
on platforms with integrated wifi) related parameters from the
BIOS to the firmware.
- Qualcomm 802.11ax WiFi (ath11k):
- IPQ5018 support
- Fine Timing Measurement (FTM) responder role support
- channel 177 support
- MediaTek WiFi (mt76):
- per-PHY LED support
- mt7996: EHT (Wi-Fi 7) support
- Wireless Ethernet Dispatch (WED) reset support
- switch to using page pool allocator
- RealTek WiFi (rtw89):
- support a new version of Bluetooth co-existence
- Mobile:
- rmnet: support TX aggregation"
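For the IP_LOCAL_PORT_RANGE socket option called out above, here is a
minimal usage sketch in C. Assumptions are hedged in the comments: the
option value is taken to be a single u32 packing the lower port bound
into the low 16 bits and the upper bound into the high 16 bits, and the
option constant is taken to be 51 where libc headers do not yet define
it; check the 6.3 uapi headers for the authoritative definition.

	#include <netinet/in.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <sys/socket.h>
	#include <unistd.h>

	#ifndef IP_LOCAL_PORT_RANGE
	#define IP_LOCAL_PORT_RANGE 51	/* assumed value from the 6.3 uapi */
	#endif

	int main(void)
	{
		int fd = socket(AF_INET, SOCK_STREAM, 0);
		/* restrict this socket to local ports 60000..60100;
		 * assumed packing: high bound in bits 31..16, low bound in bits 15..0 */
		uint32_t range = (60100u << 16) | 60000;

		if (fd < 0)
			return 1;
		if (setsockopt(fd, SOL_IP, IP_LOCAL_PORT_RANGE, &range, sizeof(range)) < 0)
			perror("setsockopt(IP_LOCAL_PORT_RANGE)");
		close(fd);
		return 0;
	}

Per the patch description, a zero value clears the per-socket range and
falls back to the net.ipv4.ip_local_port_range sysctl bounds.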
* tag 'net-next-6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1872 commits)
page_pool: add a comment explaining the fragment counter usage
net: ethtool: fix __ethtool_dev_mm_supported() implementation
ethtool: pse-pd: Fix double word in comments
xsk: add linux/vmalloc.h to xsk.c
selftests: netdevsim: wait for devlink instance after netns removal
selftest: fib_tests: Always cleanup before exit
net/mlx5e: Align IPsec ASO result memory to be as required by hardware
net/mlx5e: TC, Set CT miss to the specific ct action instance
net/mlx5e: Rename CHAIN_TO_REG to MAPPED_OBJ_TO_REG
net/mlx5: Refactor tc miss handling to a single function
net/mlx5: Kconfig: Make tc offload depend on tc skb extension
net/sched: flower: Support hardware miss to tc action
net/sched: flower: Move filter handle initialization earlier
net/sched: cls_api: Support hardware miss to tc action
net/sched: Rename user cookie and act cookie
sfc: fix builds without CONFIG_RTC_LIB
sfc: clean up some inconsistent indentings
net/mlx4_en: Introduce flexible array to silence overflow warning
net: lan966x: Fix possible deadlock inside PTP
net/ulp: Remove redundant ->clone() test in inet_clone_ulp().
...
Diffstat (limited to 'tools/perf/util/stat.c')
-rw-r--r--	tools/perf/util/stat.c	857
1 file changed, 857 insertions(+), 0 deletions(-)
diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
new file mode 100644
index 000000000..534d36d26
--- /dev/null
+++ b/tools/perf/util/stat.c
@@ -0,0 +1,857 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <errno.h>
+#include <linux/err.h>
+#include <inttypes.h>
+#include <math.h>
+#include <string.h>
+#include "counts.h"
+#include "cpumap.h"
+#include "debug.h"
+#include "header.h"
+#include "stat.h"
+#include "session.h"
+#include "target.h"
+#include "evlist.h"
+#include "evsel.h"
+#include "thread_map.h"
+#include "util/hashmap.h"
+#include <linux/zalloc.h>
+
+void update_stats(struct stats *stats, u64 val)
+{
+	double delta;
+
+	stats->n++;
+	delta = val - stats->mean;
+	stats->mean += delta / stats->n;
+	stats->M2 += delta*(val - stats->mean);
+
+	if (val > stats->max)
+		stats->max = val;
+
+	if (val < stats->min)
+		stats->min = val;
+}
+
+double avg_stats(struct stats *stats)
+{
+	return stats->mean;
+}
+
+/*
+ * http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
+ *
+ *       (\Sum n_i^2) - ((\Sum n_i)^2)/n
+ * s^2 = -------------------------------
+ *                  n - 1
+ *
+ * http://en.wikipedia.org/wiki/Stddev
+ *
+ * The std dev of the mean is related to the std dev by:
+ *
+ *             s
+ * s_mean = -------
+ *          sqrt(n)
+ *
+ */
+double stddev_stats(struct stats *stats)
+{
+	double variance, variance_mean;
+
+	if (stats->n < 2)
+		return 0.0;
+
+	variance = stats->M2 / (stats->n - 1);
+	variance_mean = variance / stats->n;
+
+	return sqrt(variance_mean);
+}
+
+double rel_stddev_stats(double stddev, double avg)
+{
+	double pct = 0.0;
+
+	if (avg)
+		pct = 100.0 * stddev/avg;
+
+	return pct;
+}
+
+bool __perf_stat_evsel__is(struct evsel *evsel, enum perf_stat_evsel_id id)
+{
+	struct perf_stat_evsel *ps = evsel->stats;
+
+	return ps->id == id;
+}
+
+#define ID(id, name) [PERF_STAT_EVSEL_ID__##id] = #name
+static const char *id_str[PERF_STAT_EVSEL_ID__MAX] = {
+	ID(NONE,		x),
+	ID(CYCLES_IN_TX,	cpu/cycles-t/),
+	ID(TRANSACTION_START,	cpu/tx-start/),
+	ID(ELISION_START,	cpu/el-start/),
+	ID(CYCLES_IN_TX_CP,	cpu/cycles-ct/),
+	ID(TOPDOWN_TOTAL_SLOTS, topdown-total-slots),
+	ID(TOPDOWN_SLOTS_ISSUED, topdown-slots-issued),
+	ID(TOPDOWN_SLOTS_RETIRED, topdown-slots-retired),
+	ID(TOPDOWN_FETCH_BUBBLES, topdown-fetch-bubbles),
+	ID(TOPDOWN_RECOVERY_BUBBLES, topdown-recovery-bubbles),
+	ID(TOPDOWN_RETIRING, topdown-retiring),
+	ID(TOPDOWN_BAD_SPEC, topdown-bad-spec),
+	ID(TOPDOWN_FE_BOUND, topdown-fe-bound),
+	ID(TOPDOWN_BE_BOUND, topdown-be-bound),
+	ID(TOPDOWN_HEAVY_OPS, topdown-heavy-ops),
+	ID(TOPDOWN_BR_MISPREDICT, topdown-br-mispredict),
+	ID(TOPDOWN_FETCH_LAT, topdown-fetch-lat),
+	ID(TOPDOWN_MEM_BOUND, topdown-mem-bound),
+	ID(SMI_NUM, msr/smi/),
+	ID(APERF, msr/aperf/),
+};
+#undef ID
+
+static void perf_stat_evsel_id_init(struct evsel *evsel)
+{
+	struct perf_stat_evsel *ps = evsel->stats;
+	int i;
+
+	/* ps->id is 0 hence PERF_STAT_EVSEL_ID__NONE by default */
+
+	for (i = 0; i < PERF_STAT_EVSEL_ID__MAX; i++) {
+		if (!strcmp(evsel__name(evsel), id_str[i]) ||
+		    (strstr(evsel__name(evsel), id_str[i]) && evsel->pmu_name
+		     && strstr(evsel__name(evsel), evsel->pmu_name))) {
+			ps->id = i;
+			break;
+		}
+	}
+}
+
+static void evsel__reset_aggr_stats(struct evsel *evsel)
+{
+	struct perf_stat_evsel *ps = evsel->stats;
+	struct perf_stat_aggr *aggr = ps->aggr;
+
+	if (aggr)
+		memset(aggr, 0, sizeof(*aggr) * ps->nr_aggr);
+}
+
+static void evsel__reset_stat_priv(struct evsel *evsel)
+{
+	struct perf_stat_evsel *ps = evsel->stats;
+
+	init_stats(&ps->res_stats);
+	evsel__reset_aggr_stats(evsel);
+}
+
+static int evsel__alloc_aggr_stats(struct evsel *evsel, int nr_aggr)
+{
+	struct perf_stat_evsel *ps = evsel->stats;
+
+	if (ps == NULL)
+		return 0;
+
+	ps->nr_aggr = nr_aggr;
+	ps->aggr = calloc(nr_aggr, sizeof(*ps->aggr));
+	if (ps->aggr == NULL)
+		return -ENOMEM;
+
+	return 0;
+}
+
+int evlist__alloc_aggr_stats(struct evlist *evlist, int nr_aggr)
+{
+	struct evsel *evsel;
+
+	evlist__for_each_entry(evlist, evsel) {
+		if (evsel__alloc_aggr_stats(evsel, nr_aggr) < 0)
+			return -1;
+	}
+	return 0;
+}
+
+static int evsel__alloc_stat_priv(struct evsel *evsel, int nr_aggr)
+{
+	struct perf_stat_evsel *ps;
+
+	ps = zalloc(sizeof(*ps));
+	if (ps == NULL)
+		return -ENOMEM;
+
+	evsel->stats = ps;
+
+	if (nr_aggr && evsel__alloc_aggr_stats(evsel, nr_aggr) < 0) {
+		evsel->stats = NULL;
+		free(ps);
+		return -ENOMEM;
+	}
+
+	perf_stat_evsel_id_init(evsel);
+	evsel__reset_stat_priv(evsel);
+	return 0;
+}
+
+static void evsel__free_stat_priv(struct evsel *evsel)
+{
+	struct perf_stat_evsel *ps = evsel->stats;
+
+	if (ps) {
+		zfree(&ps->aggr);
+		zfree(&ps->group_data);
+	}
+	zfree(&evsel->stats);
+}
+
+static int evsel__alloc_prev_raw_counts(struct evsel *evsel)
+{
+	int cpu_map_nr = evsel__nr_cpus(evsel);
+	int nthreads = perf_thread_map__nr(evsel->core.threads);
+	struct perf_counts *counts;
+
+	counts = perf_counts__new(cpu_map_nr, nthreads);
+	if (counts)
+		evsel->prev_raw_counts = counts;
+
+	return counts ? 0 : -ENOMEM;
+}
+
+static void evsel__free_prev_raw_counts(struct evsel *evsel)
+{
+	perf_counts__delete(evsel->prev_raw_counts);
+	evsel->prev_raw_counts = NULL;
+}
+
+static void evsel__reset_prev_raw_counts(struct evsel *evsel)
+{
+	if (evsel->prev_raw_counts)
+		perf_counts__reset(evsel->prev_raw_counts);
+}
+
+static int evsel__alloc_stats(struct evsel *evsel, int nr_aggr, bool alloc_raw)
+{
+	if (evsel__alloc_stat_priv(evsel, nr_aggr) < 0 ||
+	    evsel__alloc_counts(evsel) < 0 ||
+	    (alloc_raw && evsel__alloc_prev_raw_counts(evsel) < 0))
+		return -ENOMEM;
+
+	return 0;
+}
+
+int evlist__alloc_stats(struct perf_stat_config *config,
+			struct evlist *evlist, bool alloc_raw)
+{
+	struct evsel *evsel;
+	int nr_aggr = 0;
+
+	if (config && config->aggr_map)
+		nr_aggr = config->aggr_map->nr;
+
+	evlist__for_each_entry(evlist, evsel) {
+		if (evsel__alloc_stats(evsel, nr_aggr, alloc_raw))
+			goto out_free;
+	}
+
+	return 0;
+
+out_free:
+	evlist__free_stats(evlist);
+	return -1;
+}
+
+void evlist__free_stats(struct evlist *evlist)
+{
+	struct evsel *evsel;
+
+	evlist__for_each_entry(evlist, evsel) {
+		evsel__free_stat_priv(evsel);
+		evsel__free_counts(evsel);
+		evsel__free_prev_raw_counts(evsel);
+	}
+}
+
+void evlist__reset_stats(struct evlist *evlist)
+{
+	struct evsel *evsel;
+
+	evlist__for_each_entry(evlist, evsel) {
+		evsel__reset_stat_priv(evsel);
+		evsel__reset_counts(evsel);
+	}
+}
+
+void evlist__reset_aggr_stats(struct evlist *evlist)
+{
+	struct evsel *evsel;
+
+	evlist__for_each_entry(evlist, evsel)
+		evsel__reset_aggr_stats(evsel);
+}
+
+void evlist__reset_prev_raw_counts(struct evlist *evlist)
+{
+	struct evsel *evsel;
+
+	evlist__for_each_entry(evlist, evsel)
+		evsel__reset_prev_raw_counts(evsel);
+}
+
+static void evsel__copy_prev_raw_counts(struct evsel *evsel)
+{
+	int idx, nthreads = perf_thread_map__nr(evsel->core.threads);
+
+	for (int thread = 0; thread < nthreads; thread++) {
+		perf_cpu_map__for_each_idx(idx, evsel__cpus(evsel)) {
+			*perf_counts(evsel->counts, idx, thread) =
+				*perf_counts(evsel->prev_raw_counts, idx, thread);
+		}
+	}
+}
+
+void evlist__copy_prev_raw_counts(struct evlist *evlist)
+{
+	struct evsel *evsel;
+
+	evlist__for_each_entry(evlist, evsel)
+		evsel__copy_prev_raw_counts(evsel);
+}
+
+static size_t pkg_id_hash(long __key, void *ctx __maybe_unused)
+{
+	uint64_t *key = (uint64_t *) __key;
+
+	return *key & 0xffffffff;
+}
+
+static bool pkg_id_equal(long __key1, long __key2, void *ctx __maybe_unused)
+{
+	uint64_t *key1 = (uint64_t *) __key1;
+	uint64_t *key2 = (uint64_t *) __key2;
+
+	return *key1 == *key2;
+}
+
+static int check_per_pkg(struct evsel *counter, struct perf_counts_values *vals,
+			 int cpu_map_idx, bool *skip)
+{
+	struct hashmap *mask = counter->per_pkg_mask;
+	struct perf_cpu_map *cpus = evsel__cpus(counter);
+	struct perf_cpu cpu = perf_cpu_map__cpu(cpus, cpu_map_idx);
+	int s, d, ret = 0;
+	uint64_t *key;
+
+	*skip = false;
+
+	if (!counter->per_pkg)
+		return 0;
+
+	if (perf_cpu_map__empty(cpus))
+		return 0;
+
+	if (!mask) {
+		mask = hashmap__new(pkg_id_hash, pkg_id_equal, NULL);
+		if (IS_ERR(mask))
+			return -ENOMEM;
+
+		counter->per_pkg_mask = mask;
+	}
+
+	/*
+	 * we do not consider an event that has not run as a good
+	 * instance to mark a package as used (skip=1). Otherwise
+	 * we may run into a situation where the first CPU in a package
+	 * is not running anything, yet the second is, and this function
+	 * would mark the package as used after the first CPU and would
+	 * not read the values from the second CPU.
+	 */
+	if (!(vals->run && vals->ena))
+		return 0;
+
+	s = cpu__get_socket_id(cpu);
+	if (s < 0)
+		return -1;
+
+	/*
+	 * On multi-die system, die_id > 0. On no-die system, die_id = 0.
+	 * We use hashmap(socket, die) to check the used socket+die pair.
+	 */
+	d = cpu__get_die_id(cpu);
+	if (d < 0)
+		return -1;
+
+	key = malloc(sizeof(*key));
+	if (!key)
+		return -ENOMEM;
+
+	*key = (uint64_t)d << 32 | s;
+	if (hashmap__find(mask, key, NULL)) {
+		*skip = true;
+		free(key);
+	} else
+		ret = hashmap__add(mask, key, 1);
+
+	return ret;
+}
+
+static bool evsel__count_has_error(struct evsel *evsel,
+				   struct perf_counts_values *count,
+				   struct perf_stat_config *config)
+{
+	/* the evsel was failed already */
+	if (evsel->err || evsel->counts->scaled == -1)
+		return true;
+
+	/* this is meaningful for CPU aggregation modes only */
+	if (config->aggr_mode == AGGR_GLOBAL)
+		return false;
+
+	/* it's considered ok when it actually ran */
+	if (count->ena != 0 && count->run != 0)
+		return false;
+
+	return true;
+}
+
+static int
+process_counter_values(struct perf_stat_config *config, struct evsel *evsel,
+		       int cpu_map_idx, int thread,
+		       struct perf_counts_values *count)
+{
+	struct perf_stat_evsel *ps = evsel->stats;
+	static struct perf_counts_values zero;
+	bool skip = false;
+
+	if (check_per_pkg(evsel, count, cpu_map_idx, &skip)) {
+		pr_err("failed to read per-pkg counter\n");
+		return -1;
+	}
+
+	if (skip)
+		count = &zero;
+
+	if (!evsel->snapshot)
+		evsel__compute_deltas(evsel, cpu_map_idx, thread, count);
+	perf_counts_values__scale(count, config->scale, NULL);
+
+	if (config->aggr_mode == AGGR_THREAD) {
+		struct perf_counts_values *aggr_counts = &ps->aggr[thread].counts;
+
+		/*
+		 * Skip value 0 when enabling --per-thread globally,
+		 * otherwise too many 0 output.
+		 */
+		if (count->val == 0 && config->system_wide)
+			return 0;
+
+		ps->aggr[thread].nr++;
+
+		aggr_counts->val += count->val;
+		aggr_counts->ena += count->ena;
+		aggr_counts->run += count->run;
+		return 0;
+	}
+
+	if (ps->aggr) {
+		struct perf_cpu cpu = perf_cpu_map__cpu(evsel->core.cpus, cpu_map_idx);
+		struct aggr_cpu_id aggr_id = config->aggr_get_id(config, cpu);
+		struct perf_stat_aggr *ps_aggr;
+		int i;
+
+		for (i = 0; i < ps->nr_aggr; i++) {
+			if (!aggr_cpu_id__equal(&aggr_id, &config->aggr_map->map[i]))
+				continue;
+
+			ps_aggr = &ps->aggr[i];
+			ps_aggr->nr++;
+
+			/*
+			 * When any result is bad, make them all to give consistent output
+			 * in interval mode. But per-task counters can have 0 enabled time
+			 * when some tasks are idle.
+			 */
+			if (evsel__count_has_error(evsel, count, config) && !ps_aggr->failed) {
+				ps_aggr->counts.val = 0;
+				ps_aggr->counts.ena = 0;
+				ps_aggr->counts.run = 0;
+				ps_aggr->failed = true;
+			}
+
+			if (!ps_aggr->failed) {
+				ps_aggr->counts.val += count->val;
+				ps_aggr->counts.ena += count->ena;
+				ps_aggr->counts.run += count->run;
+			}
+			break;
+		}
+	}
+
+	return 0;
+}
+
+static int process_counter_maps(struct perf_stat_config *config,
+				struct evsel *counter)
+{
+	int nthreads = perf_thread_map__nr(counter->core.threads);
+	int ncpus = evsel__nr_cpus(counter);
+	int idx, thread;
+
+	for (thread = 0; thread < nthreads; thread++) {
+		for (idx = 0; idx < ncpus; idx++) {
+			if (process_counter_values(config, counter, idx, thread,
+						   perf_counts(counter->counts, idx, thread)))
+				return -1;
+		}
+	}
+
+	return 0;
+}
+
+int perf_stat_process_counter(struct perf_stat_config *config,
+			      struct evsel *counter)
+{
+	struct perf_stat_evsel *ps = counter->stats;
+	u64 *count;
+	int ret;
+
+	if (counter->per_pkg)
+		evsel__zero_per_pkg(counter);
+
+	ret = process_counter_maps(config, counter);
+	if (ret)
+		return ret;
+
+	if (config->aggr_mode != AGGR_GLOBAL)
+		return 0;
+
+	/*
+	 * GLOBAL aggregation mode only has a single aggr counts,
+	 * so we can use ps->aggr[0] as the actual output.
+	 */
+	count = ps->aggr[0].counts.values;
+	update_stats(&ps->res_stats, *count);
+
+	if (verbose > 0) {
+		fprintf(config->output, "%s: %" PRIu64 " %" PRIu64 " %" PRIu64 "\n",
+			evsel__name(counter), count[0], count[1], count[2]);
+	}
+
+	return 0;
+}
+
+static int evsel__merge_aggr_counters(struct evsel *evsel, struct evsel *alias)
+{
+	struct perf_stat_evsel *ps_a = evsel->stats;
+	struct perf_stat_evsel *ps_b = alias->stats;
+	int i;
+
+	if (ps_a->aggr == NULL && ps_b->aggr == NULL)
+		return 0;
+
+	if (ps_a->nr_aggr != ps_b->nr_aggr) {
+		pr_err("Unmatched aggregation mode between aliases\n");
+		return -1;
+	}
+
+	for (i = 0; i < ps_a->nr_aggr; i++) {
+		struct perf_counts_values *aggr_counts_a = &ps_a->aggr[i].counts;
+		struct perf_counts_values *aggr_counts_b = &ps_b->aggr[i].counts;
+
+		/* NB: don't increase aggr.nr for aliases */
+
+		aggr_counts_a->val += aggr_counts_b->val;
+		aggr_counts_a->ena += aggr_counts_b->ena;
+		aggr_counts_a->run += aggr_counts_b->run;
+	}
+
+	return 0;
+}
+/* events should have the same name, scale, unit, cgroup but on different PMUs */
+static bool evsel__is_alias(struct evsel *evsel_a, struct evsel *evsel_b)
+{
+	if (strcmp(evsel__name(evsel_a), evsel__name(evsel_b)))
+		return false;
+
+	if (evsel_a->scale != evsel_b->scale)
+		return false;
+
+	if (evsel_a->cgrp != evsel_b->cgrp)
+		return false;
+
+	if (strcmp(evsel_a->unit, evsel_b->unit))
+		return false;
+
+	if (evsel__is_clock(evsel_a) != evsel__is_clock(evsel_b))
+		return false;
+
+	return !!strcmp(evsel_a->pmu_name, evsel_b->pmu_name);
+}
+
+static void evsel__merge_aliases(struct evsel *evsel)
+{
+	struct evlist *evlist = evsel->evlist;
+	struct evsel *alias;
+
+	alias = list_prepare_entry(evsel, &(evlist->core.entries), core.node);
+	list_for_each_entry_continue(alias, &evlist->core.entries, core.node) {
+		/* Merge the same events on different PMUs. */
+		if (evsel__is_alias(evsel, alias)) {
+			evsel__merge_aggr_counters(evsel, alias);
+			alias->merged_stat = true;
+		}
+	}
+}
+
+static bool evsel__should_merge_hybrid(const struct evsel *evsel,
+				       const struct perf_stat_config *config)
+{
+	return config->hybrid_merge && evsel__is_hybrid(evsel);
+}
+
+static void evsel__merge_stats(struct evsel *evsel, struct perf_stat_config *config)
+{
+	/* this evsel is already merged */
+	if (evsel->merged_stat)
+		return;
+
+	if (evsel->auto_merge_stats || evsel__should_merge_hybrid(evsel, config))
+		evsel__merge_aliases(evsel);
+}
+
+/* merge the same uncore and hybrid events if requested */
+void perf_stat_merge_counters(struct perf_stat_config *config, struct evlist *evlist)
+{
+	struct evsel *evsel;
+
+	if (config->no_merge)
+		return;
+
+	evlist__for_each_entry(evlist, evsel)
+		evsel__merge_stats(evsel, config);
+}
+
+static void evsel__update_percore_stats(struct evsel *evsel, struct aggr_cpu_id *core_id)
+{
+	struct perf_stat_evsel *ps = evsel->stats;
+	struct perf_counts_values counts = { 0, };
+	struct aggr_cpu_id id;
+	struct perf_cpu cpu;
+	int idx;
+
+	/* collect per-core counts */
+	perf_cpu_map__for_each_cpu(cpu, idx, evsel->core.cpus) {
+		struct perf_stat_aggr *aggr = &ps->aggr[idx];
+
+		id = aggr_cpu_id__core(cpu, NULL);
+		if (!aggr_cpu_id__equal(core_id, &id))
+			continue;
+
+		counts.val += aggr->counts.val;
+		counts.ena += aggr->counts.ena;
+		counts.run += aggr->counts.run;
+	}
+
+	/* update aggregated per-core counts for each CPU */
+	perf_cpu_map__for_each_cpu(cpu, idx, evsel->core.cpus) {
+		struct perf_stat_aggr *aggr = &ps->aggr[idx];
+
+		id = aggr_cpu_id__core(cpu, NULL);
+		if (!aggr_cpu_id__equal(core_id, &id))
+			continue;
+
+		aggr->counts.val = counts.val;
+		aggr->counts.ena = counts.ena;
+		aggr->counts.run = counts.run;
+
+		aggr->used = true;
+	}
+}
+
+/* we have an aggr_map for cpu, but want to aggregate the counters per-core */
+static void evsel__process_percore(struct evsel *evsel)
+{
+	struct perf_stat_evsel *ps = evsel->stats;
+	struct aggr_cpu_id core_id;
+	struct perf_cpu cpu;
+	int idx;
+
+	if (!evsel->percore)
+		return;
+
+	perf_cpu_map__for_each_cpu(cpu, idx, evsel->core.cpus) {
+		struct perf_stat_aggr *aggr = &ps->aggr[idx];
+
+		if (aggr->used)
+			continue;
+
+		core_id = aggr_cpu_id__core(cpu, NULL);
+		evsel__update_percore_stats(evsel, &core_id);
+	}
+}
+
+/* process cpu stats on per-core events */
+void perf_stat_process_percore(struct perf_stat_config *config, struct evlist *evlist)
+{
+	struct evsel *evsel;
+
+	if (config->aggr_mode != AGGR_NONE)
+		return;
+
+	evlist__for_each_entry(evlist, evsel)
+		evsel__process_percore(evsel);
+}
+
+static void evsel__update_shadow_stats(struct evsel *evsel)
+{
+	struct perf_stat_evsel *ps = evsel->stats;
+	int i;
+
+	if (ps->aggr == NULL)
+		return;
+
+	for (i = 0; i < ps->nr_aggr; i++) {
+		struct perf_counts_values *aggr_counts = &ps->aggr[i].counts;
+
+		perf_stat__update_shadow_stats(evsel, aggr_counts->val, i, &rt_stat);
+	}
+}
+
+void perf_stat_process_shadow_stats(struct perf_stat_config *config __maybe_unused,
+				    struct evlist *evlist)
+{
+	struct evsel *evsel;
+
+	evlist__for_each_entry(evlist, evsel)
+		evsel__update_shadow_stats(evsel);
+}
+
+int perf_event__process_stat_event(struct perf_session *session,
+				   union perf_event *event)
+{
+	struct perf_counts_values count, *ptr;
+	struct perf_record_stat *st = &event->stat;
+	struct evsel *counter;
+	int cpu_map_idx;
+
+	count.val = st->val;
+	count.ena = st->ena;
+	count.run = st->run;
+
+	counter = evlist__id2evsel(session->evlist, st->id);
+	if (!counter) {
+		pr_err("Failed to resolve counter for stat event.\n");
+		return -EINVAL;
+	}
+	cpu_map_idx = perf_cpu_map__idx(evsel__cpus(counter), (struct perf_cpu){.cpu = st->cpu});
+	if (cpu_map_idx == -1) {
+		pr_err("Invalid CPU %d for event %s.\n", st->cpu, evsel__name(counter));
+		return -EINVAL;
+	}
+	ptr = perf_counts(counter->counts, cpu_map_idx, st->thread);
+	if (ptr == NULL) {
+		pr_err("Failed to find perf count for CPU %d thread %d on event %s.\n",
+		       st->cpu, st->thread, evsel__name(counter));
+		return -EINVAL;
+	}
+	*ptr = count;
+	counter->supported = true;
+	return 0;
+}
+
+size_t perf_event__fprintf_stat(union perf_event *event, FILE *fp)
+{
+	struct perf_record_stat *st = (struct perf_record_stat *)event;
+	size_t ret;
+
+	ret = fprintf(fp, "\n... id %" PRI_lu64 ", cpu %d, thread %d\n",
+		      st->id, st->cpu, st->thread);
+	ret += fprintf(fp, "... value %" PRI_lu64 ", enabled %" PRI_lu64 ", running %" PRI_lu64 "\n",
+		       st->val, st->ena, st->run);
+
+	return ret;
+}
+
+size_t perf_event__fprintf_stat_round(union perf_event *event, FILE *fp)
+{
+	struct perf_record_stat_round *rd = (struct perf_record_stat_round *)event;
+	size_t ret;
+
+	ret = fprintf(fp, "\n... time %" PRI_lu64 ", type %s\n", rd->time,
+		      rd->type == PERF_STAT_ROUND_TYPE__FINAL ? "FINAL" : "INTERVAL");
+
+	return ret;
+}
+
+size_t perf_event__fprintf_stat_config(union perf_event *event, FILE *fp)
+{
+	struct perf_stat_config sc;
+	size_t ret;
+
+	perf_event__read_stat_config(&sc, &event->stat_config);
+
+	ret = fprintf(fp, "\n");
+	ret += fprintf(fp, "... aggr_mode %d\n", sc.aggr_mode);
+	ret += fprintf(fp, "... scale %d\n", sc.scale);
+	ret += fprintf(fp, "... interval %u\n", sc.interval);
+
+	return ret;
+}
+
+int create_perf_stat_counter(struct evsel *evsel,
+			     struct perf_stat_config *config,
+			     struct target *target,
+			     int cpu_map_idx)
+{
+	struct perf_event_attr *attr = &evsel->core.attr;
+	struct evsel *leader = evsel__leader(evsel);
+
+	attr->read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
+			    PERF_FORMAT_TOTAL_TIME_RUNNING;
+
+	/*
+	 * The event is part of non trivial group, let's enable
+	 * the group read (for leader) and ID retrieval for all
+	 * members.
+	 */
+	if (leader->core.nr_members > 1)
+		attr->read_format |= PERF_FORMAT_ID|PERF_FORMAT_GROUP;
+
+	attr->inherit = !config->no_inherit && list_empty(&evsel->bpf_counter_list);
+
+	/*
+	 * Some events get initialized with sample_(period/type) set,
+	 * like tracepoints. Clear it up for counting.
+	 */
+	attr->sample_period = 0;
+
+	if (config->identifier)
+		attr->sample_type = PERF_SAMPLE_IDENTIFIER;
+
+	if (config->all_user) {
+		attr->exclude_kernel = 1;
+		attr->exclude_user = 0;
+	}
+
+	if (config->all_kernel) {
+		attr->exclude_kernel = 0;
+		attr->exclude_user = 1;
+	}
+
+	/*
+	 * Disabling all counters initially, they will be enabled
+	 * either manually by us or by kernel via enable_on_exec
+	 * set later.
+	 */
+	if (evsel__is_group_leader(evsel)) {
+		attr->disabled = 1;
+
+		/*
+		 * In case of initial_delay we enable tracee
+		 * events manually.
+		 */
+		if (target__none(target) && !config->initial_delay)
+			attr->enable_on_exec = 1;
+	}
+
+	if (target__has_cpu(target) && !target__has_per_thread(target))
+		return evsel__open_per_cpu(evsel, evsel__cpus(evsel), cpu_map_idx);
+
+	return evsel__open_per_thread(evsel, evsel->core.threads);
+}
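A note on the statistics helpers at the top of the file: update_stats()
is Welford's online algorithm for mean and variance, and stddev_stats()
returns the standard deviation of the mean, s/sqrt(n), exactly as the
comment block above it describes. The following is a standalone sketch
of the same recurrence for illustration only; struct stats here is a
local stand-in, not the perf definition, and update()/stddev_mean() are
hypothetical names, not the perf API.

	#include <math.h>
	#include <stdint.h>
	#include <stdio.h>

	struct stats {
		double mean;	/* running mean */
		double M2;	/* running sum of squared deviations */
		uint64_t n;	/* number of samples seen */
	};

	/* Welford update: one pass, numerically stable */
	static void update(struct stats *s, uint64_t val)
	{
		double delta;

		s->n++;
		delta = val - s->mean;
		s->mean += delta / s->n;
		s->M2 += delta * (val - s->mean);
	}

	/* std dev of the mean: sqrt(M2 / (n - 1) / n) */
	static double stddev_mean(const struct stats *s)
	{
		if (s->n < 2)
			return 0.0;
		return sqrt(s->M2 / (s->n - 1) / s->n);
	}

	int main(void)
	{
		struct stats s = { 0 };
		uint64_t samples[] = { 100, 102, 98, 101, 99 };

		for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
			update(&s, samples[i]);

		/* perf stat reports this shape as "avg +- stddev" across runs */
		printf("mean %.2f +- %.2f\n", s.mean, stddev_mean(&s));
		return 0;
	}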
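Similarly, check_per_pkg() above counts a per-package event only once
per (socket, die) pair by packing both ids into a single u64 hashmap
key, die id in the high 32 bits and socket id in the low 32 bits, with
the hash callback using only the low 32 bits. A small standalone sketch
of that packing; pkg_key() is a hypothetical helper for illustration,
not a function from stat.c.

	#include <inttypes.h>
	#include <stdint.h>
	#include <stdio.h>

	/* mirror of "*key = (uint64_t)d << 32 | s" in check_per_pkg() */
	static uint64_t pkg_key(int die, int socket)
	{
		return (uint64_t)die << 32 | (uint32_t)socket;
	}

	int main(void)
	{
		uint64_t key = pkg_key(1, 0);	/* die 1, socket 0 */

		/* pkg_id_hash() in stat.c keeps only the low 32 bits */
		printf("key %#" PRIx64 ", hash input %#" PRIx32 "\n",
		       key, (uint32_t)(key & 0xffffffff));
		return 0;
	}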