author    Linus Torvalds <torvalds@linux-foundation.org>    2008-04-04 17:38:17 -0400
committer Linus Torvalds <torvalds@linux-foundation.org>    2008-04-04 17:38:17 -0400
commit    1be62dc190ebaca331038962c873e7967de6cc4b
tree      a8eb0a106bf362819d2bb0fc602b0e52df6a4198 /fs
parent    4ed919014eb2b591eb8fdd4dd00226a65faddef4
Be more careful about marking buffers dirty
Mikulas Patocka noted that the optimization where we check if a buffer
was already dirty (and we avoid re-dirtying it) was not really SMP-safe.

Since the read of the old status was not synchronized with anything, an
aggressive CPU re-ordering of memory accesses might have moved that read
up to before the data was even written to the buffer; another CPU could
then have cleaned the buffer in that window, causing the newly dirty
state to never actually hit the disk.
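
To make the race concrete, one bad interleaving looks like this (an
illustrative sketch of the failure mode, not kernel code; the CPU
labels and exact steps are assumptions for the example):

    CPU A (modifies buffer, then             CPU B (writeback)
    calls mark_buffer_dirty)
    --------------------------------         --------------------------------
    read of the dirty bit is hoisted
    by the CPU above the data write
    below; it sees "dirty"
                                             writes the old buffer contents out
                                             clears the dirty bit
    stores the new data into the buffer
    acts on the stale "dirty" result
    and returns without re-dirtying

    The buffer now holds new data but is marked clean, so the new data
    never reaches the disk.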
Admittedly this would probably never trigger in practice, but it's still
wrong.
Mikulas sent a patch that fixed the problem, but I dislike the subtlety
of the whole optimization, so this is an alternate fix that is more
explicit about the particular SMP ordering the optimization relies on,
and that separates the speculative read of the buffer state out into its
own conditional (so the memory barrier only happens if we are likely to
actually hit the optimized case in the first place).
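
The shape of the fix, as a rough userspace sketch (C11 atomics standing
in for the kernel primitives: a seq_cst fence models smp_mb(), an atomic
fetch-or models test_set_buffer_dirty(), and all names here are made up
for the example):

    #include <stdatomic.h>
    #include <string.h>

    struct buf {
            _Atomic unsigned long flags;   /* bit 0 plays the role of BH_Dirty */
            char data[512];
    };

    #define DIRTY 1UL

    static void mark_dirty(struct buf *b)
    {
            /*
             * Speculative fast path: only a buffer that already looks
             * dirty pays for the full barrier.  The fence keeps the
             * re-read from being ordered before the caller's earlier
             * data writes, so a confirmed "dirty" really does cover
             * our modification.
             */
            if (atomic_load_explicit(&b->flags, memory_order_relaxed) & DIRTY) {
                    atomic_thread_fence(memory_order_seq_cst);  /* ~ smp_mb() */
                    if (atomic_load_explicit(&b->flags, memory_order_relaxed) & DIRTY)
                            return;
            }

            /* Slow path: atomic test-and-set, like test_set_buffer_dirty(). */
            if (!(atomic_fetch_or(&b->flags, DIRTY) & DIRTY)) {
                    /* clean -> dirty transition; the kernel would also
                     * mark the owning page dirty here. */
            }
    }

    int main(void)
    {
            struct buf b = { .flags = 0, .data = {0} };

            memcpy(b.data, "new data", 8);  /* modify the buffer... */
            mark_dirty(&b);                 /* ...then publish the dirty state */
            return 0;
    }

Note that the barrier sits inside the first conditional, so buffers that
are not already dirty never pay for it: they go straight to the atomic
test-and-set, which in the kernel already implies the needed ordering.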
I considered removing the optimization entirely, but Andrew argued for
its continued existence.  I'm a push-over.
Cc: Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'fs')
-rw-r--r--   fs/buffer.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/fs/buffer.c b/fs/buffer.c
index 98196327ddf0..39ff14403d13 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1181,7 +1181,20 @@ __getblk_slow(struct block_device *bdev, sector_t block, int size)
 void mark_buffer_dirty(struct buffer_head *bh)
 {
 	WARN_ON_ONCE(!buffer_uptodate(bh));
-	if (!buffer_dirty(bh) && !test_set_buffer_dirty(bh))
+
+	/*
+	 * Very *carefully* optimize the it-is-already-dirty case.
+	 *
+	 * Don't let the final "is it dirty" escape to before we
+	 * perhaps modified the buffer.
+	 */
+	if (buffer_dirty(bh)) {
+		smp_mb();
+		if (buffer_dirty(bh))
+			return;
+	}
+
+	if (!test_set_buffer_dirty(bh))
 		__set_page_dirty(bh->b_page, page_mapping(bh->b_page), 0);
 }
 