author	Nick Piggin <nickpiggin@yahoo.com.au>	2008-09-30 06:50:27 -0400
committer	Ingo Molnar <mingo@elte.hu>	2008-09-30 06:56:25 -0400
commit	7317d7b87edb41a9135e30be1ec3f7ef817c53dd (patch)
tree	e8375c49cbeb8012e55967f325be99b01ab66648 /kernel
parent	6918bc5c830e890681eabb3c6cb6b8d114a52d14 (diff)
sched: improve preempt debugging
This patch helped me out with a problem I recently had.

Basically, while the kernel lock is held, a preempt_count underflow does not
get detected until the lock is released, which may be a long time later (and
at a fairly arbitrary point, e.g. wherever the task next happens to be
rescheduled). If the BKL is released inside schedule(), the resulting output
is fairly cryptic.

With any other lock that elevates preempt_count, it is illegal to schedule
while holding it (so an underflow would be found pretty quickly). The BKL
allows scheduling with preempt_count elevated, which makes underflows hard
to debug.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
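
For illustration, here is a minimal user-space sketch of the idea (a hedged
model, not kernel code: bkl_depth, old_check, new_check and the simplified
counters are stand-ins invented for this demo). It shows how subtracting the
BKL's contribution from preempt_count makes the underflow warning fire at the
buggy sub_preempt_count() call rather than only later, at BKL release:

/* Build: cc -o preempt_demo preempt_demo.c && ./preempt_demo */
#include <stdio.h>

/* Simulated per-task state (stand-ins, not the kernel's real counters). */
static int preempt_count;   /* models the per-thread preempt count */
static int bkl_depth;       /* nonzero while the simulated BKL is held */

static int kernel_locked(void)         { return bkl_depth != 0; }
static void add_preempt_count(int val) { preempt_count += val; }

/* Old check: an underflow under the BKL stays hidden, because the BKL's
 * own contribution to preempt_count masks it until unlock_kernel(). */
static int old_check(int val) { return val > preempt_count; }

/* New check: subtract the BKL's contribution first, so an unbalanced
 * sub_preempt_count() is flagged at the offending call site. */
static int new_check(int val) { return val > preempt_count - (!!kernel_locked()); }

static void sub_preempt_count(int val)
{
	printf("sub_preempt_count(%d): old check %s, new check %s\n",
	       val,
	       old_check(val) ? "WARNS" : "silent",
	       new_check(val) ? "WARNS" : "silent");
	preempt_count -= val;
}

int main(void)
{
	/* Taking the BKL bumps preempt_count by one in this model. */
	bkl_depth = 1;
	add_preempt_count(1);

	/* Buggy code path: drops a preempt reference it never took.
	 * The old check stays silent here and only trips much later,
	 * at BKL release; the new check warns immediately. */
	sub_preempt_count(1);

	return 0;
}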
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/sched.c	2
1 files changed, 1 insertions, 1 deletions
diff --git a/kernel/sched.c b/kernel/sched.c
index 98890807375b..ec3bd1f398b3 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4305,7 +4305,7 @@ void __kprobes sub_preempt_count(int val)
 	/*
 	 * Underflow?
 	 */
-	if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
+	if (DEBUG_LOCKS_WARN_ON(val > preempt_count() - (!!kernel_locked())))
 		return;
 	/*
 	 * Is the spinlock portion underflowing?
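
In the patched check, kernel_locked() is nonzero while the current task holds
the BKL, so the (!!kernel_locked()) term subtracts exactly the BKL's own
contribution to preempt_count before the underflow comparison is made.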