author    Michel Lespinasse <walken@google.com>    2012-10-08 19:31:17 -0400
committer Linus Torvalds <torvalds@linux-foundation.org>    2012-10-09 03:22:37 -0400
commit    14b94af0b251a2c80885b60538166fb7d04a642e (patch)
tree      ef447d340435c441f8c3e54eb8f26f747aa73108 /Documentation
parent    dadf93534f125b9eda486b471446a8456a603d27 (diff)
rbtree: faster augmented rbtree manipulation
Introduce new augmented rbtree APIs that allow minimal recalculation of
augmented node information.

A new callback is added to the rbtree insertion and erase rebalancing
functions, to be called on each tree rotation. Such rotations preserve
the subtree's root augmented value, but require recalculation of the one
child that was previously located at the subtree root.

In the insertion case, the handcoded search phase must be updated to
maintain the augmented information on insertion, and then the rbtree
coloring/rebalancing algorithms keep it up to date.

In the erase case, things are more complicated since it is library code
that manipulates the rbtree in order to remove internal nodes. This
requires a couple of additional callbacks to copy a subtree's augmented
value when a new root is stitched in, and to recompute augmented values
down the ancestry path when a node is removed from the tree.

In order to preserve maximum speed for the non-augmented case, we provide
two versions of each tree manipulation function. rb_insert_augmented() is
the augmented equivalent of rb_insert_color(), and rb_erase_augmented()
is the augmented equivalent of rb_erase().

Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
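The rotation property the commit message relies on, that a rotation preserves the subtree root's augmented value and only the demoted former root needs recomputation, can be illustrated with a small self-contained sketch. This is plain userspace C with hypothetical names (struct node, subtree_max, rotate_left), not the kernel rbtree API:

```c
#include <assert.h>
#include <stddef.h>

/* Toy BST node augmented with the maximum value in its subtree. */
struct node {
	int val;		/* node's own value */
	int subtree_max;	/* max of val over the subtree rooted here */
	struct node *left, *right;
};

static int max3(int a, int b, int c)
{
	int m = a > b ? a : b;
	return m > c ? m : c;
}

/* Recompute a node's augmented value from its own value and children. */
static void recompute(struct node *n)
{
	n->subtree_max = max3(n->val,
			      n->left  ? n->left->subtree_max  : n->val,
			      n->right ? n->right->subtree_max : n->val);
}

/*
 * Left-rotate the subtree at *root. The promoted node now roots the
 * same set of values, so it simply inherits the old root's augmented
 * value; only the demoted node must recompute -- this is the invariant
 * the commit's per-rotation callback exploits.
 */
static void rotate_left(struct node **root)
{
	struct node *lower = *root, *upper = lower->right;

	lower->right = upper->left;
	upper->left = lower;
	*root = upper;

	upper->subtree_max = lower->subtree_max;	/* same subtree contents */
	recompute(lower);				/* only the demoted node */
}
```

A rotation thus costs one O(1) recomputation instead of re-walking the whole subtree.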
Diffstat (limited to 'Documentation')
-rw-r--r--  Documentation/rbtree.txt | 190
1 file changed, 157 insertions(+), 33 deletions(-)
diff --git a/Documentation/rbtree.txt b/Documentation/rbtree.txt
index 8d32d85a5234..0a0b6dce3e08 100644
--- a/Documentation/rbtree.txt
+++ b/Documentation/rbtree.txt
@@ -193,24 +193,42 @@ Example:
 Support for Augmented rbtrees
 -----------------------------
 
-Augmented rbtree is an rbtree with "some" additional data stored in each node.
-This data can be used to augment some new functionality to rbtree.
-Augmented rbtree is an optional feature built on top of basic rbtree
-infrastructure. An rbtree user who wants this feature will have to call the
-augmentation functions with the user provided augmentation callback
-when inserting and erasing nodes.
+Augmented rbtree is an rbtree with "some" additional data stored in
+each node, where the additional data for node N must be a function of
+the contents of all nodes in the subtree rooted at N. This data can
+be used to augment some new functionality to rbtree. Augmented rbtree
+is an optional feature built on top of basic rbtree infrastructure.
+An rbtree user who wants this feature will have to call the augmentation
+functions with the user provided augmentation callback when inserting
+and erasing nodes.
 
-On insertion, the user must call rb_augment_insert() once the new node is in
-place. This will cause the augmentation function callback to be called for
-each node between the new node and the root which has been affected by the
-insertion.
+On insertion, the user must update the augmented information on the path
+leading to the inserted node, then call rb_link_node() as usual and
+rb_insert_augmented() instead of the usual rb_insert_color() call.
+If rb_insert_augmented() rebalances the rbtree, it will call back into
+a user provided function to update the augmented information on the
+affected subtrees.
 
-When erasing a node, the user must call rb_augment_erase_begin() first to
-retrieve the deepest node on the rebalance path. Then, after erasing the
-original node, the user must call rb_augment_erase_end() with the deepest
-node found earlier. This will cause the augmentation function to be called
-for each affected node between the deepest node and the root.
+When erasing a node, the user must call rb_erase_augmented() instead of
+rb_erase(). rb_erase_augmented() calls back into user provided functions
+to update the augmented information on affected subtrees.
 
+In both cases, the callbacks are provided through struct rb_augment_callbacks.
+3 callbacks must be defined:
+
+- A propagation callback, which updates the augmented value for a given
+  node and its ancestors, up to a given stop point (or NULL to update
+  all the way to the root).
+
+- A copy callback, which copies the augmented value for a given subtree
+  to a newly assigned subtree root.
+
+- A tree rotation callback, which copies the augmented value for a given
+  subtree to a newly assigned subtree root AND recomputes the augmented
+  information for the former subtree root.
+
+
+Sample usage:
 
 Interval tree is an example of augmented rb tree. Reference -
 "Introduction to Algorithms" by Cormen, Leiserson, Rivest and Stein.
@@ -230,26 +248,132 @@ and its immediate children. And this will be used in O(log n) lookup
 for lowest match (lowest start address among all possible matches)
 with something like:
 
-find_lowest_match(lo, hi, node)
+struct interval_tree_node *
+interval_tree_first_match(struct rb_root *root,
+			  unsigned long start, unsigned long last)
 {
-	lowest_match = NULL;
-	while (node) {
-		if (max_hi(node->left) > lo) {
-			// Lowest overlap if any must be on left side
-			node = node->left;
-		} else if (overlap(lo, hi, node)) {
-			lowest_match = node;
-			break;
-		} else if (lo > node->lo) {
-			// Lowest overlap if any must be on right side
-			node = node->right;
-		} else {
-			break;
+	struct interval_tree_node *node;
+
+	if (!root->rb_node)
+		return NULL;
+	node = rb_entry(root->rb_node, struct interval_tree_node, rb);
+
+	while (true) {
+		if (node->rb.rb_left) {
+			struct interval_tree_node *left =
+				rb_entry(node->rb.rb_left,
+					 struct interval_tree_node, rb);
+			if (left->__subtree_last >= start) {
+				/*
+				 * Some nodes in left subtree satisfy Cond2.
+				 * Iterate to find the leftmost such node N.
+				 * If it also satisfies Cond1, that's the match
+				 * we are looking for. Otherwise, there is no
+				 * matching interval as nodes to the right of N
+				 * can't satisfy Cond1 either.
+				 */
+				node = left;
+				continue;
+			}
 		}
+		if (node->start <= last) {		/* Cond1 */
+			if (node->last >= start)	/* Cond2 */
+				return node;	/* node is leftmost match */
+			if (node->rb.rb_right) {
+				node = rb_entry(node->rb.rb_right,
+					struct interval_tree_node, rb);
+				if (node->__subtree_last >= start)
+					continue;
+			}
+		}
+		return NULL;	/* No match */
 	}
-	return lowest_match;
+}
+
+Insertion/removal are defined using the following augmented callbacks:
+
+static inline unsigned long
+compute_subtree_last(struct interval_tree_node *node)
+{
+	unsigned long max = node->last, subtree_last;
+	if (node->rb.rb_left) {
+		subtree_last = rb_entry(node->rb.rb_left,
+			struct interval_tree_node, rb)->__subtree_last;
+		if (max < subtree_last)
+			max = subtree_last;
+	}
+	if (node->rb.rb_right) {
+		subtree_last = rb_entry(node->rb.rb_right,
+			struct interval_tree_node, rb)->__subtree_last;
+		if (max < subtree_last)
+			max = subtree_last;
+	}
+	return max;
+}
+
+static void augment_propagate(struct rb_node *rb, struct rb_node *stop)
+{
+	while (rb != stop) {
+		struct interval_tree_node *node =
+			rb_entry(rb, struct interval_tree_node, rb);
+		unsigned long subtree_last = compute_subtree_last(node);
+		if (node->__subtree_last == subtree_last)
+			break;
+		node->__subtree_last = subtree_last;
+		rb = rb_parent(&node->rb);
+	}
+}
+
+static void augment_copy(struct rb_node *rb_old, struct rb_node *rb_new)
+{
+	struct interval_tree_node *old =
+		rb_entry(rb_old, struct interval_tree_node, rb);
+	struct interval_tree_node *new =
+		rb_entry(rb_new, struct interval_tree_node, rb);
+
+	new->__subtree_last = old->__subtree_last;
+}
+
+static void augment_rotate(struct rb_node *rb_old, struct rb_node *rb_new)
+{
+	struct interval_tree_node *old =
+		rb_entry(rb_old, struct interval_tree_node, rb);
+	struct interval_tree_node *new =
+		rb_entry(rb_new, struct interval_tree_node, rb);
+
+	new->__subtree_last = old->__subtree_last;
+	old->__subtree_last = compute_subtree_last(old);
+}
+
+static const struct rb_augment_callbacks augment_callbacks = {
+	augment_propagate, augment_copy, augment_rotate
+};
+
+void interval_tree_insert(struct interval_tree_node *node,
+			  struct rb_root *root)
+{
+	struct rb_node **link = &root->rb_node, *rb_parent = NULL;
+	unsigned long start = node->start, last = node->last;
+	struct interval_tree_node *parent;
+
+	while (*link) {
+		rb_parent = *link;
+		parent = rb_entry(rb_parent, struct interval_tree_node, rb);
+		if (parent->__subtree_last < last)
+			parent->__subtree_last = last;
+		if (start < parent->start)
+			link = &parent->rb.rb_left;
+		else
+			link = &parent->rb.rb_right;
 	}
+
+	node->__subtree_last = last;
+	rb_link_node(&node->rb, rb_parent, link);
+	rb_insert_augmented(&node->rb, root, &augment_callbacks);
 }
 
-Finding exact match will be to first find lowest match and then to follow
-successor nodes looking for exact match, until the start of a node is beyond
-the hi value we are looking for.
+void interval_tree_remove(struct interval_tree_node *node,
+			  struct rb_root *root)
+{
+	rb_erase_augmented(&node->rb, root, &augment_callbacks);
+}
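
The Cond1/Cond2 overlap semantics used by the lookup in this diff can be checked against a brute-force userspace model. This is a hypothetical sketch (struct interval and first_match are illustration names, not kernel code), useful as a reference oracle when testing an interval tree:

```c
#include <assert.h>
#include <stddef.h>

/* Closed interval [start, last], mirroring the node fields in the sample. */
struct interval {
	unsigned long start, last;
};

/*
 * Reference model of the lookup: among intervals overlapping the query
 * [start, last], return the one with the lowest start, or NULL if none.
 * Overlap is exactly Cond1 && Cond2 from the tree walk above:
 *   Cond1: node->start <= last   (node begins no later than the query ends)
 *   Cond2: node->last >= start   (node ends no earlier than the query begins)
 */
static const struct interval *
first_match(const struct interval *v, size_t n,
	    unsigned long start, unsigned long last)
{
	const struct interval *best = NULL;
	size_t i;

	for (i = 0; i < n; i++) {
		if (v[i].start <= last && v[i].last >= start &&
		    (!best || v[i].start < best->start))
			best = &v[i];
	}
	return best;
}
```

The tree walk reaches the same answer in O(log n) by pruning on __subtree_last, which this O(n) scan does not need.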