| author | Peter Zijlstra <a.p.zijlstra@chello.nl> | 2010-01-26 12:50:16 -0500 |
|---|---|---|
| committer | Ingo Molnar <mingo@elte.hu> | 2010-01-27 02:39:33 -0500 |
| commit | abd50713944c8ea9e0af5b7bffa0aacae21cc91a (patch) | |
| tree | c75a352aa13821a41791877f25d2f048568827b0 /include/linux | |
| parent | ef12a141306c90336a3a10d40213ecd98624d274 (diff) | |
perf: Reimplement frequency driven sampling
There was a bug in the old period code that caused intel_pmu_enable_all()
or native_write_msr_safe() to show up quite high in the profiles.
Staring at that code made my head hurt, so I rewrote it in a
hopefully simpler fashion. It's now fully symmetric between tick-
and overflow-driven adjustments, and uses less data to boot.
The only complication is that it basically wants to do a u128 division.
The code approximates that in a rather simple truncate-until-it-fits
fashion, taking care to balance the terms while truncating.
This version does not generate that sampling artefact.
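The balanced-truncation idea described above can be sketched in userspace C. This is an illustrative simplification, not the kernel's actual implementation: `approx_period` and `fls64_sw` are hypothetical names, and the sketch assumes the denominator product `nsec * freq` already fits in 64 bits (the kernel handles overflow on either side).

```c
#include <stdint.h>

/* Highest set bit position, 1-based; 0 for x == 0 (a software fls64). */
static int fls64_sw(uint64_t x)
{
	int n = 0;
	while (x) {
		n++;
		x >>= 1;
	}
	return n;
}

/*
 * Approximate period = (count * 10^9) / (nsec * freq) without 128-bit
 * arithmetic.  While the numerator product would overflow 64 bits,
 * halve its larger term -- and halve the denominator as well, so the
 * ratio stays balanced while precision is truncated.
 *
 * Simplification: assumes nsec * freq itself fits in a u64.
 */
static uint64_t approx_period(uint64_t count, uint64_t nsec, uint64_t freq)
{
	uint64_t sec = 1000000000ULL;	/* nanoseconds per second */
	uint64_t divisor = nsec * freq;

	/* fls(a) + fls(b) <= 64 guarantees a * b fits in a u64 */
	while (fls64_sw(count) + fls64_sw(sec) > 64) {
		if (count >= sec)
			count >>= 1;
		else
			sec >>= 1;
		divisor >>= 1;
	}

	if (!divisor)
		return count * sec;	/* fully truncated; degenerate case */

	return count * sec / divisor;
}
```

For example, 2^40 events counted over one second with a 1000 Hz target gives an exact period of 2^40 / 1000 ≈ 1099511627.776; seven balanced right-shifts make the product fit in 64 bits and the truncated result is 1099511627.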
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Cc: <stable@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'include/linux')
| -rw-r--r-- | include/linux/perf_event.h | 5 |
|---|---|---|

1 file changed, 2 insertions(+), 3 deletions(-)
```diff
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index c6f812e4d058..72b2615600d8 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -498,9 +498,8 @@ struct hw_perf_event {
 	atomic64_t		period_left;
 	u64			interrupts;
 
-	u64			freq_count;
-	u64			freq_interrupts;
-	u64			freq_stamp;
+	u64			freq_time_stamp;
+	u64			freq_count_stamp;
 #endif
 };
 
```