author    Leon Yu <leoyu@nvidia.com>    2019-08-21 09:02:12 -0400
committer mobile promotions <svcmobile_promotions@nvidia.com>    2019-08-30 04:25:01 -0400
commit    d601ff515902b4840109742f5f2ac8881058405c
tree      c90ceedd7b131d774d5acde6f1872c6536c46423 /drivers/gpu/nvgpu/common/pmu/pmu_perfmon.c
parent    84f48df530a6ce0423920ebd8fd4e1dde52dd09f
nvgpu: don't report max load when counters overflow

This prevents the GPU (and thus EMC) frequency from being boosted
from time to time while the system is completely idle. The boost is
caused by perfmon incorrectly reporting maximum GPU load: when the
issue occurs, max load is reported even though the busy_cycles value
read from the PMU is actually zero.

Even though the busy and total cycle counts returned by the PMU may
not be completely accurate when a counter overflows, the counters
accumulated so far still carry information we shouldn't ignore. On
the other hand, returning max load can be the least accurate
approximation in such cases. So just clear the interrupt status and
let the rest of the code handle the exception cases.
Bug 200545546
Change-Id: I6882ae265029e881f5417fb2b82005b0112b0fda
Signed-off-by: Leon Yu <leoyu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2180771
Reviewed-by: Peng Liu <pengliu@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Mubushir Rahman <mubushirr@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Diffstat (limited to 'drivers/gpu/nvgpu/common/pmu/pmu_perfmon.c')
 drivers/gpu/nvgpu/common/pmu/pmu_perfmon.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/nvgpu/common/pmu/pmu_perfmon.c b/drivers/gpu/nvgpu/common/pmu/pmu_perfmon.c
index bf07bd79..175482a3 100644
--- a/drivers/gpu/nvgpu/common/pmu/pmu_perfmon.c
+++ b/drivers/gpu/nvgpu/common/pmu/pmu_perfmon.c
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2017-2018, NVIDIA CORPORATION. All rights reserved.
+ * Copyright (c) 2017-2019, NVIDIA CORPORATION. All rights reserved.
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the "Software"),
@@ -263,9 +263,10 @@ int nvgpu_pmu_busy_cycles_norm(struct gk20a *g, u32 *norm)
 	g->ops.pmu.pmu_reset_idle_counter(g, 0);
 
 	if (intr_status != 0UL) {
-		*norm = PMU_BUSY_CYCLES_NORM_MAX;
 		g->ops.pmu.pmu_clear_idle_intr_status(g);
-	} else if (total_cycles == 0ULL || busy_cycles > total_cycles) {
+	}
+
+	if (total_cycles == 0ULL || busy_cycles > total_cycles) {
 		*norm = PMU_BUSY_CYCLES_NORM_MAX;
 	} else {
 		*norm = (u32)(busy_cycles * PMU_BUSY_CYCLES_NORM_MAX