author    Nick Piggin <npiggin@suse.de>           2010-01-27 06:27:40 -0500
committer Pekka Enberg <penberg@cs.helsinki.fi>   2010-01-30 08:02:39 -0500
commit    44b57f1cc72a4a30b31f11b07a927d1534f1b93d (patch)
tree      78e38b52bedf86446161e9df0cfbb2b7b4bc6dd8 /mm/slab.c
parent    7284ce6c9f6153d1777df5f310c959724d1bd446 (diff)
slab: fix regression in touched logic
When factoring common code into transfer_objects() in commit 3ded175 ("slab:
add transfer_objects() function"), the 'touched' logic got a bit broken. When
refilling from the shared array (taking objects from the shared array), we are
making use of the shared array, so it should be marked as touched.
Subsequently pulling an element from the cpu array and allocating it should
also touch the cpu array, but that is taken care of after the alloc_done
label. (So yes, the cpu array was getting touched = 1 twice.)

So revert this logic to how it worked in earlier kernels.

This also affects the behaviour in __drain_alien_cache(), which would
previously 'touch' the shared array and now does not. I think it is more
logical not to touch there, because we are pushing objects into the shared
array rather than pulling them off. So there is no good reason to postpone
reaping them -- if the shared array is getting utilized, then it will get
'touched' in the alloc path (where this patch now restores the touch).

Acked-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
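For readers less familiar with the slab reaper, the following is a minimal,
compilable sketch of the 'touched' handshake this patch restores. The struct
here is a simplified stand-in for the kernel's struct array_cache, and
refill_from_shared() is a hypothetical condensation of the relevant lines of
cache_alloc_refill(); neither is the actual kernel code. Only the placement of
the shared->touched = 1 assignment mirrors the diff below.

#include <string.h>

/*
 * Simplified stand-in for the kernel's struct array_cache (2.6-era
 * layout). 'touched' tells the periodic reaper that this cache has
 * been used recently, so its objects should not be reclaimed yet.
 */
struct array_cache {
	unsigned int avail;	/* objects currently cached */
	unsigned int limit;	/* capacity of entry[] */
	int touched;		/* used since the last reap? */
	void *entry[64];	/* cached object pointers */
};

/*
 * After this patch, transfer_objects() only moves objects. It no
 * longer marks the destination touched, because in the drain path
 * (__drain_alien_cache) we push objects *into* the shared array and
 * want it to remain eligible for reaping.
 */
static unsigned int transfer_objects(struct array_cache *to,
				     struct array_cache *from,
				     unsigned int max)
{
	unsigned int nr = from->avail < max ? from->avail : max;

	if (nr > to->limit - to->avail)
		nr = to->limit - to->avail;
	if (!nr)
		return 0;

	memcpy(to->entry + to->avail, from->entry + from->avail - nr,
	       sizeof(void *) * nr);
	from->avail -= nr;
	to->avail += nr;
	return nr;
}

/*
 * Refill path: only the consumer that pulls objects *out of* the
 * shared array marks it touched, which is what the second hunk of
 * the patch restores. The per-cpu array itself is touched once,
 * later, after the alloc_done label.
 */
static void refill_from_shared(struct array_cache *cpu_cache,
			       struct array_cache *shared,
			       unsigned int batchcount)
{
	if (shared && transfer_objects(cpu_cache, shared, batchcount))
		shared->touched = 1;	/* the touch this patch restores */

	/* ... otherwise fall back to the slab lists ... */

	cpu_cache->touched = 1;		/* done after alloc_done */
}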
Diffstat (limited to 'mm/slab.c')
-rw-r--r--  mm/slab.c  5

1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index 7451bdacaf18..f9626d51a4b1 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -935,7 +935,6 @@ static int transfer_objects(struct array_cache *to,
 
 	from->avail -= nr;
 	to->avail += nr;
-	to->touched = 1;
 	return nr;
 }
 
@@ -2963,8 +2962,10 @@ retry:
 	spin_lock(&l3->list_lock);
 
 	/* See if we can refill from the shared array */
-	if (l3->shared && transfer_objects(ac, l3->shared, batchcount))
+	if (l3->shared && transfer_objects(ac, l3->shared, batchcount)) {
+		l3->shared->touched = 1;
 		goto alloc_done;
+	}
 
 	while (batchcount > 0) {
 		struct list_head *entry;