author	Daniel Vetter <daniel.vetter@ffwll.ch>	2011-09-08 08:00:21 -0400
committer	Keith Packard <keithp@keithp.com>	2011-10-20 17:11:17 -0400
commit	a9e2641dee52cae2db7688a749344365642a5e79 (patch)
tree	55db91e129c293ab51f55702118abff2105edee0 /drivers/gpu
parent	4fb066ab9ef3111c86d9fb8f13f1178885cf7f1c (diff)
drm/i915: close PM interrupt masking races in the rps work func
This patch closes the following race:
We get a PM interrupt A, mask it, set dev_priv->pm_iir = PM_A and kick off
the work item. The scheduler isn't grumpy, so the work queue takes rps_lock,
grabs pm_iir = dev_priv->pm_iir and pm_imr = READ(PMIMR). Note that
pm_imr == pm_iir because we've just masked the interrupt we've got.
Now hw sends out PM interrupt B (not masked), we process it and mask
it. Later on the irq handler also clears PMIIR.
Then the work item proceeds and at the end clears PMIMR. Because
(local) pm_imr == pm_iir we have
pm_imr & ~pm_iir == 0
so all interrupts are enabled.
Hardware is still interrupt-happy, and sends out a new PM interrupt B.
PMIMR doesn't mask B (it does not mask anything), PMIIR is cleared, so
we get it and hit the WARN in the interrupt handler (because
dev_priv->pm_iir == PM_B).
That's why I've moved the
WRITE(PMIMR, 0)
up under the protection of the rps_lock, and write an unconditional 0
to PMIMR, because that's what we'll do anyway.
This race looks much more likely because we can arbitrarily extend
the window by grabbing dev->struct_mutex right after the irq handler
has processed the first PM_B interrupt.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Keith Packard <keithp@keithp.com>
Diffstat (limited to 'drivers/gpu')
-rw-r--r--	drivers/gpu/drm/i915/i915_irq.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 98eedddc7440..9ee2729fe5c6 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -383,6 +383,7 @@ static void gen6_pm_rps_work(struct work_struct *work)
 	pm_iir = dev_priv->pm_iir;
 	dev_priv->pm_iir = 0;
 	pm_imr = I915_READ(GEN6_PMIMR);
+	I915_WRITE(GEN6_PMIMR, 0);
 	spin_unlock_irq(&dev_priv->rps_lock);
 
 	if (!pm_iir)
@@ -420,7 +421,6 @@ static void gen6_pm_rps_work(struct work_struct *work)
 	 * an *extremely* unlikely race with gen6_rps_enable() that is prevented
 	 * by holding struct_mutex for the duration of the write.
 	 */
-	I915_WRITE(GEN6_PMIMR, pm_imr & ~pm_iir);
 	mutex_unlock(&dev_priv->dev->struct_mutex);
 }
 