author	Dave Jones <davej@redhat.com>	2005-09-20 15:39:35 -0400
committer	Dave Jones <davej@redhat.com>	2005-09-20 15:39:35 -0400
commit	df8b59be0976c56820453730078bef99a8d1dbda (patch)
tree	a836bfc1588a20ca997b878bac82188240a471af /drivers
parent	c90eef84b8f5c7416cb99ab6907896f2dd392b4d (diff)
[CPUFREQ] Avoid the ondemand cpufreq governor to use a too high frequency for stats.
The problem is in the ondemand governor: it makes a periodic measurement
of the CPU usage. This CPU usage is updated by the scheduler after every
tick (basically, by adding 1 either to "idle" or to "user" or to
"system"). So if the sampling frequency of the governor is too high, the
stats will be meaningless (as mostly no numbers have changed).
So this patch checks that the measurements are separated by at least 10
ticks. It means that by default, stats will have about 5% error (20
ticks). Of course those numbers can be argued about but, IMHO, they look
sane.
The patch also includes a small clean-up to check more explicitly
whether the result of the conversion from ns to µs is zero.
Let's note that (on x86) this has never really been needed before 2.6.13
because HZ was always 1000. Now that HZ can be 100, some CPUs might be
affected by this problem. For instance, when HZ=100 the Centrino, which
has a 10µs transition latency, would lead the governor to read stats
every tick (10ms)!
Signed-off-by: Eric Piel <eric.piel@tremplin-utc.net>
Signed-off-by: Dave Jones <davej@redhat.com>
Diffstat (limited to 'drivers')
-rw-r--r--	drivers/cpufreq/cpufreq_ondemand.c	18
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index c1fc9c62bb51..17741111246b 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -48,7 +48,10 @@
  * All times here are in uS.
  */
 static unsigned int def_sampling_rate;
-#define MIN_SAMPLING_RATE (def_sampling_rate / 2)
+#define MIN_SAMPLING_RATE_RATIO (2)
+/* for correct statistics, we need at least 10 ticks between each measure */
+#define MIN_STAT_SAMPLING_RATE (MIN_SAMPLING_RATE_RATIO * jiffies_to_usecs(10))
+#define MIN_SAMPLING_RATE (def_sampling_rate / MIN_SAMPLING_RATE_RATIO)
 #define MAX_SAMPLING_RATE (500 * def_sampling_rate)
 #define DEF_SAMPLING_RATE_LATENCY_MULTIPLIER (1000)
 #define DEF_SAMPLING_DOWN_FACTOR (1)
@@ -416,13 +419,16 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
 	if (dbs_enable == 1) {
 		unsigned int latency;
 		/* policy latency is in nS. Convert it to uS first */
+		latency = policy->cpuinfo.transition_latency / 1000;
+		if (latency == 0)
+			latency = 1;
 
-		latency = policy->cpuinfo.transition_latency;
-		if (latency < 1000)
-			latency = 1000;
-
-		def_sampling_rate = (latency / 1000) *
+		def_sampling_rate = latency *
 			DEF_SAMPLING_RATE_LATENCY_MULTIPLIER;
+
+		if (def_sampling_rate < MIN_STAT_SAMPLING_RATE)
+			def_sampling_rate = MIN_STAT_SAMPLING_RATE;
+
 		dbs_tuners_ins.sampling_rate = def_sampling_rate;
 		dbs_tuners_ins.ignore_nice = 0;
 