The cpc provider makes available probes associated with CPU performance counter events. A probe fires when a specified number of events of a given type have occurred in a chosen processor mode. When a probe fires, aspects of system state can be sampled and inferences drawn about system behavior. Accurate inferences require a sufficiently high sampling rate, a sufficiently long sampling period, or both.
Probes made available by the cpc provider take the form cpc:::<event name>-<mode>-<optional mask>-<count>. The components of the probe name are defined in Table 1.
| Component | Description |
| --- | --- |
| event name | The platform-specific or generic event name. A full list of events can be obtained using the -h option to cpustat(1M). |
| mode | The privilege mode in which to count events. Valid modes are "user" for user-mode events, "kernel" for kernel-mode events, and "all" for both user-mode and kernel-mode events. |
| optional mask | On some platforms, a mask (commonly referred to as a unit mask or an event mask) can be specified to further refine a platform-specific event specification. This field is optional, may only be specified for platform-specific events, and is given as a hexadecimal value. |
| count | The number of events that must occur on a CPU for a probe to fire on that CPU. |
The following introductory example fires a probe on a CPU for every 10000 user-mode Level 1 instruction cache misses on a SPARC platform. When the probe fires, we record the name of the executable that was on the CPU at that time (see the Examples section for further examples):
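A minimal sketch of such a script, using the cpc:::IC_miss-user-10000 probe discussed below; the output shown is illustrative, with the firefox-bin count taken from the discussion that follows:

```d
#!/usr/sbin/dtrace -s

/* Count probe firings per executable; each firing represents
   10000 user-mode L1 instruction cache misses on a CPU. */
cpc:::IC_miss-user-10000
{
        @[execname] = count();
}
```

Running the script for a period and then interrupting it prints an aggregation such as:

```
# ./icmiss.d
dtrace: script './icmiss.d' matched 1 probe
^C
  ...
  firefox-bin                                                    2060
```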
When working with the cpc provider it is important to remember that the state available when a probe fires is valid for the performance counter event that caused the probe to fire and not for all events counted with that probe. In the above output we see that the firefox-bin application caused the cpc:::IC_miss-user-10000 probe to fire 2060 times. As this probe fires once for every 10000 level 1 instruction cache misses on a CPU, the firefox-bin application could have contributed anywhere from 2060 to 20600000 of these misses.
The arguments to cpc probes are listed in Table 2.
| Argument | Description |
| --- | --- |
| arg0 | The program counter (PC) in the kernel at the time the probe fired, or 0 if the current process was not executing in the kernel at that time. |
| arg1 | The PC in the user-level process at the time the probe fired, or 0 if the current process was executing in the kernel at that time. |
As the descriptions imply, if arg0 is non-zero then arg1 is zero; if arg0 is zero then arg1 is non-zero.
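As a sketch of how these arguments are typically used, the following script attributes kernel-mode counter overflows to kernel function names via func(arg0). The generic event name PAPI_tot_cyc is an assumption here; the events available on a given platform should be verified with cpustat -h.

```d
#!/usr/sbin/dtrace -s

/* For kernel-mode probes, arg0 holds the kernel PC at overflow;
   func() converts it to a kernel function name. PAPI_tot_cyc is
   an assumed generic event name -- verify with cpustat -h. */
cpc:::PAPI_tot_cyc-kernel-50000
{
        @[func(arg0)] = count();
}
```

A user-mode enabling would instead use ufunc(arg1), since arg1 carries the user-level PC.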
CPU performance counters are a finite resource and the number of probes that can be enabled depends upon hardware capabilities. Processors that cannot determine which counter has overflowed when multiple counters are programmed (e.g. AMD, UltraSPARC) are only allowed to have a single enabling at any one time. On such platforms, consumers attempting to enable more than 1 probe will fail as will consumers attempting to enable a probe when a disparate enabling already exists. Processors that can detect which counter has overflowed (e.g. Niagara2, Intel P4) are allowed to have as many probes enabled as the hardware will allow. This will be, at most, the number of counters available on a processor. On such configurations, multiple probes can be enabled at any one time.
Probes are enabled by consumers on a first-come, first-served basis. When hardware resources are fully utilised subsequent enablings will fail until resources become available.
Like the profile provider, the cpc provider creates probes dynamically on an as-needed basis. Thus, the desired cpc probe might not appear in a listing of all probes (for example, by using dtrace -l -P cpc) but the probe will be created when it is explicitly enabled.
Specifying a small event overflow count for frequently occurring events (e.g. cycle count, instructions executed) would quickly render the system unusable as a processor would be continuously servicing performance counter overflow interrupts. To prevent this situation, the smallest overflow count that can be specified for any probe is set, by default, at 5000. This can be altered by adjusting the dcpc-min-overflow variable in the /kernel/drv/dcpc.conf configuration file and then unloading and reloading the dcpc driver.
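For example, lowering the floor to 2500 (an illustrative value, not a recommendation) would use a dcpc.conf entry like the following, after which the dcpc driver must be unloaded and reloaded for the change to take effect:

```
# /kernel/drv/dcpc.conf
# Illustrative value; the default minimum overflow count is 5000.
dcpc-min-overflow=2500;
```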
Care should be taken when specifying high-frequency events such as instructions executed or cycle count. For example, measuring busy cycles on a fully utilized 3GHz processor with a count of 50000 would generate approximately 60000 interrupts/sec (3,000,000,000 cycles/sec divided by 50,000). This rate of interrupt delivery could degrade system performance to some degree.
The provider has priority over per-LWP libcpc usage (i.e. cputrack) for access to counters. In the same manner as cpustat, enabling probes causes all existing per-LWP counter contexts to be invalidated. As long as these enablings remain active, the counters will remain unavailable to cputrack-type consumers.
Only one of cpustat and DTrace may use the counter hardware at any one time. Ownership of the counters is given on a first-come, first-served basis.
Some simple examples of cpc provider usage follow.
The following simple script displays instructions executed by applications on an AMD platform.
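A sketch of such a script; the AMD event name FR_retired_x86_instructions is an assumption and should be verified against the output of cpustat -h on the target platform:

```d
#!/usr/sbin/dtrace -s

/* Fires once per 10000 retired instructions (user and kernel mode)
   on a CPU; counts firings per executable. Event name assumed. */
cpc:::FR_retired_x86_instructions-all-10000
{
        @[execname] = count();
}
```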
The following example shows a kernel profiled by cycle usage on an AMD platform.
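A sketch of the profiling script; the AMD event name BU_cpu_clk_unhalted is an assumption (check cpustat -h). Because this is a kernel-mode enabling, arg0 holds the kernel PC, which func() maps to a kernel function name:

```d
#!/usr/sbin/dtrace -s

/* Attribute each 10000-cycle overflow to the kernel function
   executing at the time. Event name assumed. */
cpc:::BU_cpu_clk_unhalted-kernel-10000
{
        @[func(arg0)] = count();
}
```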
In this example we are looking at user-mode L2 cache misses and the functions that generated them on an AMD platform. The predicate ensures that we only sample function names when the probe was fired by the 'brendan' executable.
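A sketch of this script, assuming the AMD platform event BU_fill_req_missed_L2 with an illustrative unit mask of 0x7; the user-mode PC in arg1 is mapped to a function name with ufunc():

```d
#!/usr/sbin/dtrace -s

/* Sample user-mode L2 miss overflows, but only when the probe
   was fired by the 'brendan' executable. Event name and unit
   mask are assumptions -- verify with cpustat -h. */
cpc:::BU_fill_req_missed_L2-user-0x7-10000
/execname == "brendan"/
{
        @[ufunc(arg1)] = count();
}
```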
Here we use the same example as above, but with the much simpler generic event PAPI_l2_dcm to indicate our interest in L2 data cache misses instead of the platform-specific event.
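The generic-event variant can be sketched as follows; note that generic events take no unit mask, so the mask component is simply omitted from the probe name:

```d
#!/usr/sbin/dtrace -s

/* Same sampling as the platform-specific example, expressed with
   the generic PAPI L2 data cache miss event. */
cpc:::PAPI_l2_dcm-user-10000
/execname == "brendan"/
{
        @[ufunc(arg1)] = count();
}
```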
The cpc provider uses DTrace's stability mechanism to describe its stabilities as shown in the following table. For more information about the stability mechanism, see Chapter 39, Stability.
| Element | Name stability | Data stability | Dependency class |
| --- | --- | --- | --- |