path: root/kernel/rcu
...
| * | rcu: Move rcu_report_exp_rnp() to allow consolidation (Paul E. McKenney, 2015-09-21)
      This is a nearly pure code-movement commit, moving rcu_report_exp_rnp(),
      sync_rcu_preempt_exp_done(), and rcu_preempted_readers_exp() so that later
      commits can make synchronize_sched_expedited() use them. The non-code-movement
      portion of this commit tags rcu_report_exp_rnp() as __maybe_unused to avoid
      build errors when CONFIG_PREEMPT=n.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Use rsp->expedited_wq instead of sync_rcu_preempt_exp_wq (Paul E. McKenney, 2015-09-21)
      Now that there is an ->expedited_wq waitqueue in each rcu_state structure,
      there is no need for the sync_rcu_preempt_exp_wq global variable. This commit
      therefore substitutes ->expedited_wq for sync_rcu_preempt_exp_wq. It also
      initializes ->expedited_wq only once at boot instead of at the start of each
      expedited grace period.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* | rcu: Correct comment for values of ->gp_state field (Paul E. McKenney, 2015-10-06)
      This commit corrects the comment for the values of the ->gp_state field, which
      previously incorrectly said that these were for the ->gp_flags field.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
* | rcu: Finish folding ->fqs_state into ->gp_state (Petr Mladek, 2015-10-06)
      Commit 4cdfc175c25c89ee ("rcu: Move quiescent-state forcing into kthread")
      started the process of folding the old ->fqs_state into ->gp_state, but did
      not complete it. This situation does not cause any malfunction, but can result
      in extremely confusing trace output. This commit completes the task of
      eliminating ->fqs_state in favor of ->gp_state. The old ->fqs_state was also
      used to decide when to collect dyntick-idle snapshots. For this purpose, we
      add a boolean variable into the kthread, which is set on the first call to
      rcu_gp_fqs() for a given grace period and cleared otherwise.
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
* | rcu: Move preemption disabling out of __srcu_read_lock() (Paul E. McKenney, 2015-10-06)
      Currently, __srcu_read_lock() cannot be invoked from restricted environments
      because it contains calls to preempt_disable() and preempt_enable(), both of
      which can invoke lockdep, which is a bad idea in some restricted execution
      modes. This commit therefore moves the preempt_disable() and preempt_enable()
      from __srcu_read_lock() to srcu_read_lock(). It also inserts the
      preempt_disable() and preempt_enable() around the call to __srcu_read_lock()
      in do_exit().
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
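      A minimal sketch of the resulting wrapper (simplified; the real
      srcu_read_lock() also performs a lockdep annotation):

          static inline int srcu_read_lock(struct srcu_struct *sp)
          {
                  int retval;

                  preempt_disable();              /* stabilize __srcu_read_lock()'s per-CPU counters */
                  retval = __srcu_read_lock(sp);
                  preempt_enable();
                  return retval;
          }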
* | rcu: Add online/offline info to stall warning message (Paul E. McKenney, 2015-10-06)
      This commit makes the RCU CPU stall warning message print online/offline
      indications immediately after a hyphen following the CPU number. An "O"
      indicates that the global CPU-hotplug system believes that the CPU is online,
      an "o" that RCU perceived the CPU to be online at the beginning of the current
      expedited grace period, and an "N" that RCU currently believes that it will
      perceive the CPU as being online at the beginning of the next expedited grace
      period, with "." otherwise for all three indications. So for CPU 10, you would
      normally see "10-OoN:" indicating that everything believes that the CPU is
      online.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
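      A sketch of how the three indications could be emitted; the rcu_data fields
      used here are illustrative assumptions, not the actual layout:

          static void print_cpu_stall_onoff(int cpu, struct rcu_data *rdp)
          {
                  pr_cont(" %d-%c%c%c:", cpu,
                          cpu_online(cpu) ? 'O' : '.',       /* CPU-hotplug's view */
                          rdp->exp_online ? 'o' : '.',       /* RCU's view at start of current expedited GP (assumed field) */
                          rdp->exp_online_next ? 'N' : '.'); /* expected view at start of next expedited GP (assumed field) */
          }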
* | rcu: Eliminate panic when silly boot-time fanout specified (Paul E. McKenney, 2015-10-06)
      This commit loosens rcutree.rcu_fanout_leaf range checks and replaces a
      panic() with a fallback to compile-time values. This fallback is accompanied
      by a WARN_ON(), and both occur when the rcutree.rcu_fanout_leaf value is too
      small to accommodate the number of CPUs. For example, given the current
      four-level limit for the rcu_node tree, a system with more than 16 CPUs built
      with CONFIG_RCU_FANOUT=2 must have rcutree.rcu_fanout_leaf larger than 2.
      Reported-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
* | rcu: Don't disable preemption for Tiny and Tree RCU readers (Boqun Feng, 2015-10-06)
      Because preempt_disable() maps to barrier() for non-debug builds, it forces
      the compiler to spill and reload registers. Because Tree RCU and Tiny RCU now
      only appear in CONFIG_PREEMPT=n builds, these barrier() instances generate
      needless extra code for each instance of rcu_read_lock() and
      rcu_read_unlock(). This extra code slows down Tree RCU and bloats Tiny RCU.
      This commit therefore removes the preempt_disable() and preempt_enable() from
      the non-preemptible implementations of __rcu_read_lock() and
      __rcu_read_unlock(), respectively. However, for debug purposes,
      preempt_disable() and preempt_enable() are still invoked if
      CONFIG_PREEMPT_COUNT=y, because this allows detection of sleeping inside
      atomic sections in non-preemptible kernels. However, Tiny and Tree RCU operate
      by coalescing all RCU read-side critical sections on a given CPU that lie
      between successive quiescent states. It is therefore necessary to compensate
      for removing barriers from __rcu_read_lock() and __rcu_read_unlock() by adding
      them to a couple of the RCU functions invoked during quiescent states, namely
      to rcu_all_qs() and rcu_note_context_switch(). However, note that the latter
      is more paranoia than necessity, at least until link-time optimizations become
      more aggressive. This is based on an earlier patch by Paul E. McKenney, fixing
      a bug encountered in kernels built with CONFIG_PREEMPT=n and
      CONFIG_PREEMPT_COUNT=y.
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
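      In sketch form, the non-preemptible read-side markers then reduce to:

          static inline void __rcu_read_lock(void)
          {
                  if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
                          preempt_disable();      /* debug builds only: catch sleeping in atomic */
          }

          static inline void __rcu_read_unlock(void)
          {
                  if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
                          preempt_enable();
          }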
* | rcu: Use call_rcu_func_t to replace explicit type equivalents (Boqun Feng, 2015-10-06)
      We have had the call_rcu_func_t typedef for quite a while, but we still use
      explicit function pointer types in some places. These types can confuse cscope
      and can be hard to read. This patch therefore replaces these types with the
      call_rcu_func_t typedef.
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
* | rcu: Use rcu_callback_t in call_rcu*() and friends (Boqun Feng, 2015-10-06)
      Now that we have the rcu_callback_t typedef as the type of RCU callbacks, we
      should use it in call_rcu*() and friends as the type of parameters. This saves
      a few lines of code and makes it clear which functions require an RCU callback
      rather than some other callback as their argument. Besides, this can also help
      cscope generate a better database for code reading.
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
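      For reference, the typedef and its use in the call_rcu() prototype:

          typedef void (*rcu_callback_t)(struct rcu_head *head);

          void call_rcu(struct rcu_head *head, rcu_callback_t func);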
* rcu: Suppress lockdep false positive for rcp->exp_funnel_mutex (Paul E. McKenney, 2015-09-21)
      In kernels built with CONFIG_PREEMPT=y, synchronize_rcu_expedited() invokes
      synchronize_sched_expedited() while holding RCU-preempt's root rcu_node
      structure's ->exp_funnel_mutex, which is acquired after the rcu_data
      structure's ->exp_funnel_mutex. The first thing that
      synchronize_sched_expedited() will do is acquire RCU-sched's rcu_data
      structure's ->exp_funnel_mutex. There is no danger of an actual deadlock
      because the locking order is always from RCU-preempt's expedited mutexes to
      those of RCU-sched. Unfortunately, lockdep considers both rcu_data structures'
      ->exp_funnel_mutex to be in the same lock class and therefore reports a
      deadlock cycle. This commit silences this false positive by placing RCU-sched's
      rcu_data structures' ->exp_funnel_mutex locks into their own lock class.
      Reported-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
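      A sketch of the technique: a static lock_class_key gives the RCU-sched
      instances their own lockdep class (initialization context simplified; the
      key's name here is an assumption):

          static struct lock_class_key rcu_exp_sched_rdp_class;

          /* At rcu_data init time, for the RCU-sched flavor only: */
          mutex_init(&rdp->exp_funnel_mutex);
          lockdep_set_class(&rdp->exp_funnel_mutex, &rcu_exp_sched_rdp_class);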
* rcu,locking: Privatize smp_mb__after_unlock_lock() (Paul E. McKenney, 2015-08-04)
      RCU is the only thing that uses smp_mb__after_unlock_lock(), and is likely the
      only thing that ever will use it, so this commit makes this macro private to
      RCU.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: "linux-arch@vger.kernel.org" <linux-arch@vger.kernel.org>
*-. Merge branches 'doc.2015.07.15a' and 'torture.2015.07.15a' into HEAD (Paul E. McKenney, 2015-08-04)
      doc.2015.07.15a: Documentation updates.
      torture.2015.07.15a: Torture-test updates.
| | * rcutorture: Add RCU-tasks qualifier to dereference (Paul E. McKenney, 2015-07-15)
      Although RCU-tasks isn't really designed to support rcu_dereference() and list
      manipulation, that is how rcutorture tests it. Which means that lockdep-RCU
      complains about the rcu_dereference_check() invocations because RCU-tasks
      doesn't have read-side markers. This commit therefore creates a
      torturing_tasks() function to silence the lockdep-RCU complaints from
      rcu_dereference_check() when RCU-tasks is being tortured.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| | * rcutorture: Fix rcu_torture_cbflood() for callback-free RCU (Paul E. McKenney, 2015-07-15)
      The rcu_torture_cbflood() function correctly checks for flavors of RCU that
      lack analogs to call_rcu() and rcu_barrier(), but in that case it fails to
      terminate correctly. In fact, it terminates so incorrectly that segfaults can
      result. This commit therefore causes rcu_torture_cbflood() to do the proper
      wait-for-stop procedure.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| | * rcutorture: Bounds-check rcutorture.shuffle_interval (Paul E. McKenney, 2015-07-15)
      Specifying a negative rcutorture.shuffle_interval value will cause a negative
      value to be used as a sleep time. This commit therefore refuses to start
      shuffling unless the rcutorture.shuffle_interval value is greater than zero.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| | * rcutorture: Check nfakewriters parameter (Paul E. McKenney, 2015-07-15)
      Currently, a negative value for rcutorture.nfakewriters= can cause rcutorture
      to pass a negative size to the memory allocator, which is not really a
      particularly good thing to do. This commit therefore adds bounds checking to
      this parameter, so that values that are less than or equal to zero disable
      fake writing.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| | * rcutorture: Better bounds checking for n_barrier_cbs (Paul E. McKenney, 2015-07-15)
      A negative value for rcutorture.n_barrier_cbs can pass a negative value to the
      memory allocator, so this commit instead causes rcu_barrier() testing to be
      disabled in this case.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
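      A sketch of the style of bounds check these commits add (parameter names from
      the commit messages; the surrounding allocation is illustrative):

          if (n_barrier_cbs <= 0)
                  return 0;       /* rcu_barrier() testing disabled */

          barrier_cbs_tasks = kcalloc(n_barrier_cbs,
                                      sizeof(barrier_cbs_tasks[0]), GFP_KERNEL);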
* | Merge branches 'fixes.2015.07.22a' and 'initexp.2015.08.04a' into HEAD (Paul E. McKenney, 2015-08-04)
      fixes.2015.07.22a: Miscellaneous fixes.
      initexp.2015.08.04a: Initialization and expedited updates. (Single branch due
      to conflicts.)
| * | rcu: Silence lockdep false positive for expedited grace periods (Paul E. McKenney, 2015-08-04)
      In a CONFIG_PREEMPT=y kernel, synchronize_rcu_expedited() acquires the
      ->exp_funnel_mutex in rcu_preempt_state, then invokes
      synchronize_sched_expedited(), which acquires the ->exp_funnel_mutex in
      rcu_sched_state. There can be no deadlock because rcu_preempt_state
      ->exp_funnel_mutex acquisition always precedes that of rcu_sched_state. But
      lockdep does not know that, so it gives false-positive splats. This commit
      therefore associates a separate lock_class_key structure with the
      rcu_sched_state structure's ->exp_funnel_mutex, allowing lockdep to see the
      lock ordering and avoiding the false positives.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Add fastpath bypassing funnel locking (Paul E. McKenney, 2015-07-17)
      In the common case, there will be only one expedited grace period in the
      system at a given time, in which case it is not helpful to use funnel locking.
      This commit therefore adds a fastpath that bypasses funnel locking when the
      root ->exp_funnel_mutex is not held.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
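      A sketch of the fastpath check (structure and helper usage simplified from
      the commit description):

          struct rcu_node *rnp0 = rcu_get_root(rsp);

          if (!mutex_is_locked(&rnp0->exp_funnel_mutex) &&
              mutex_trylock(&rnp0->exp_funnel_mutex)) {
                  /* Uncontended: root mutex acquired, skip the funnel walk. */
                  return rnp0;
          }
          /* Otherwise fall back to funnel locking up the rcu_node tree. */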
| * | rcu: Rename RCU_GP_DONE_FQS to RCU_GP_DOING_FQS (Paul E. McKenney, 2015-07-17)
      The grace-period kthread sleeps waiting to do a force-quiescent-state scan,
      and when awakened sets rsp->gp_state to RCU_GP_DONE_FQS. However, this is
      confusing because the kthread has not yet done the force-quiescent-state scan,
      but is instead just starting it. This commit therefore renames RCU_GP_DONE_FQS
      to RCU_GP_DOING_FQS in order to make things a bit easier on reviewers.
      Reported-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Pull out wait_event*() condition into helper function (Paul E. McKenney, 2015-07-17)
      The condition for the wait_event_interruptible_timeout() that waits to do the
      next force-quiescent-state scan is a bit ornate:

          ((gf = READ_ONCE(rsp->gp_flags)) & RCU_GP_FLAG_FQS) ||
          (!READ_ONCE(rnp->qsmask) && !rcu_preempt_blocked_readers_cgp(rnp))

      This commit therefore pulls this condition out into a helper function and
      comments its component conditions.
      Reported-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
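      The helper might look like this sketch (the function name is an assumption):

          /* Return true if the GP kthread should stop waiting and do an FQS scan. */
          static bool rcu_gp_fqs_check_wake(struct rcu_state *rsp, int *gfp)
          {
                  struct rcu_node *rnp = rcu_get_root(rsp);

                  /* Someone explicitly requested a force-quiescent-state scan. */
                  *gfp = READ_ONCE(rsp->gp_flags);
                  if (*gfp & RCU_GP_FLAG_FQS)
                          return true;

                  /* The current grace period has completed. */
                  if (!READ_ONCE(rnp->qsmask) && !rcu_preempt_blocked_readers_cgp(rnp))
                          return true;

                  return false;
          }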
| * | rcu: Add stall warnings to synchronize_sched_expedited() (Paul E. McKenney, 2015-07-17)
      Although synchronize_sched_expedited() has historically had no RCU CPU stall
      warnings, the availability of the rcupdate.rcu_expedited boot parameter
      invalidates the old assumption that synchronize_sched()'s stall warnings would
      suffice. This commit therefore adds RCU CPU stall warnings to
      synchronize_sched_expedited().
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Extend expedited funnel locking to rcu_data structure (Paul E. McKenney, 2015-07-17)
      The strictly rcu_node based funnel-locking scheme works well in many cases,
      but systems with CONFIG_RCU_FANOUT_LEAF=64 won't necessarily get all that much
      concurrency. This commit therefore extends the funnel locking into the per-CPU
      rcu_data structure, providing concurrency equal to the number of CPUs.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Consolidate last open-coded expedited memory barrier (Paul E. McKenney, 2015-07-17)
      One of the requirements on RCU grace periods is that if there is a causal
      chain of operations that starts after one grace period and ends before another
      grace period, then the two grace periods must be serialized. There has been
      (and might still be) code that relies on this, for example, certain types of
      reference-counting code that does a call_rcu() within an RCU callback
      function. This requirement is why there is an smp_mb() at the end of both
      synchronize_sched_expedited() and synchronize_rcu_expedited(). However, this
      is the only smp_mb() in these functions, so it would be nicer to consolidate
      it into rcu_exp_gp_seq_end(). This commit does just that.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
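      After consolidation, the end-of-grace-period helper carries the ordering
      guarantee, roughly:

          static void rcu_exp_gp_seq_end(struct rcu_state *rsp)
          {
                  rcu_seq_end(&rsp->expedited_sequence);
                  smp_mb(); /* Ensure that consecutive grace periods serialize. */
          }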
| * | rcu: Apply rcu_seq operations to _rcu_barrier() (Paul E. McKenney, 2015-07-17)
      The rcu_seq operations were open-coded in _rcu_barrier(), so this commit
      replaces the open-coding with the shiny new rcu_seq operations.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Use funnel locking for synchronize_rcu_expedited()'s polling loop (Paul E. McKenney, 2015-07-17)
      This commit gets rid of synchronize_rcu_expedited()'s mutex_trylock() polling
      loop in favor of the funnel-locking scheme that was abstracted from
      synchronize_sched_expedited().
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Fix synchronize_sched_expedited() type error for "s" (Paul E. McKenney, 2015-07-17)
      The type of "s" has been "long" rather than the correct "unsigned long" for
      quite some time. This commit fixes this type error.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Abstract funnel locking from synchronize_sched_expedited() (Paul E. McKenney, 2015-07-17)
      This commit abstracts funnel locking from synchronize_sched_expedited() so
      that it may be used by synchronize_rcu_expedited().
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Make synchronize_rcu_expedited() use sequence-counter scheme (Paul E. McKenney, 2015-07-17)
      Although synchronize_rcu_expedited() uses a sequence-counter scheme, it is
      based on a single increment per grace period, which means that tasks
      piggybacking off of concurrent grace periods may be forced to wait longer than
      necessary. This commit therefore applies the new sequence-count functions
      developed for synchronize_sched_expedited() to speed things up a bit and to
      consolidate the sequence-counter implementation.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Abstract sequence counting from synchronize_sched_expedited() (Paul E. McKenney, 2015-07-17)
      This commit creates rcu_exp_gp_seq_start() and rcu_exp_gp_seq_end() to bracket
      an expedited grace period, rcu_exp_gp_seq_snap() to snapshot the sequence
      counter, and rcu_exp_gp_seq_done() to check whether a full expedited grace
      period has elapsed since the snapshot. These will be applied to
      synchronize_rcu_expedited(). They are defined in terms of the underlying
      rcu_seq_start(), rcu_seq_end(), rcu_seq_snap(), and rcu_seq_done(), which will
      be applied to _rcu_barrier(). One reason that this commit doesn't use the
      seqcount primitives themselves is that the smp_wmb() in those primitives is
      insufficient, given that expedited grace periods do reads as well as writes.
      In addition, the read-side seqcount primitives detect a potentially partial
      change, where the expedited primitives instead need a guaranteed full change.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
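      A sketch of the underlying rcu_seq primitives as described, using full memory
      barriers rather than the seqcount smp_wmb():

          static void rcu_seq_start(unsigned long *sp)
          {
                  WRITE_ONCE(*sp, *sp + 1);       /* counter now odd: GP in progress */
                  smp_mb();                       /* order GP work after the increment */
          }

          static void rcu_seq_end(unsigned long *sp)
          {
                  smp_mb();                       /* order GP work before the increment */
                  WRITE_ONCE(*sp, *sp + 1);       /* counter now even: GP idle */
          }

          static unsigned long rcu_seq_snap(unsigned long *sp)
          {
                  return (READ_ONCE(*sp) + 3) & ~0x1UL;   /* full GP after this point */
          }

          static bool rcu_seq_done(unsigned long *sp, unsigned long s)
          {
                  return ULONG_CMP_GE(READ_ONCE(*sp), s); /* wrap-safe comparison */
          }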
| * | rcu: Make expedited GP CPU stoppage asynchronous (Peter Zijlstra, 2015-07-17)
      Sequentially stopping the CPUs slows down expedited grace periods by at least
      a factor of two, based on rcutorture's grace-period-per-second rate. This is a
      conservative measure because rcutorture uses unusually long RCU read-side
      critical sections and because rcutorture periodically quiesces the system in
      order to test RCU's ability to ramp down to and up from the idle state. This
      commit therefore replaces the stop_one_cpu() with stop_one_cpu_nowait(), using
      an atomic-counter scheme to determine when all CPUs have passed through the
      stopped state.
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
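      A sketch of the atomic-counter scheme (field and handler names assumed):

          atomic_set(&rsp->expedited_need_qs, 1); /* extra count avoids premature wakeup */
          for_each_online_cpu(cpu) {
                  atomic_inc(&rsp->expedited_need_qs);
                  stop_one_cpu_nowait(cpu, synchronize_sched_expedited_cpu_stop,
                                      rsp, &per_cpu(exp_stop_work, cpu));
          }
          /* Drop the extra count and, if necessary, wait for all CPUs to stop. */
          if (!atomic_dec_and_test(&rsp->expedited_need_qs))
                  wait_event(rsp->expedited_wq,
                             !atomic_read(&rsp->expedited_need_qs));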
| * | rcu: Get rid of synchronize_sched_expedited()'s polling loop (Paul E. McKenney, 2015-07-17)
      This commit gets rid of synchronize_sched_expedited()'s mutex_trylock()
      polling loop in favor of a funnel-locking scheme based on the rcu_node tree.
      The work-done check is done at each level of the tree, allowing
      high-contention situations to be resolved quickly with reasonable levels of
      mutex contention.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
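      In sketch form, the funnel walk acquires each level's mutex on the way to the
      root, releasing the level below, and bails out whenever a grace period since
      snapshot s is detected (using the rcu_exp_gp_seq_done() machinery sketched
      above; control flow simplified):

          struct rcu_node *rnp0, *rnp1 = NULL;

          for (rnp0 = rdp->mynode; rnp0 != NULL; rnp0 = rnp0->parent) {
                  if (rcu_exp_gp_seq_done(rsp, s)) {
                          if (rnp1)
                                  mutex_unlock(&rnp1->exp_funnel_mutex);
                          return NULL;            /* someone else did our work */
                  }
                  mutex_lock(&rnp0->exp_funnel_mutex);
                  if (rnp1)
                          mutex_unlock(&rnp1->exp_funnel_mutex);
                  rnp1 = rnp0;
          }
          /* Root ->exp_funnel_mutex held here: do the expedited grace period. */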
| * | rcu: Rework synchronize_sched_expedited() counter handling (Paul E. McKenney, 2015-07-17)
      Now that synchronize_sched_expedited() has a mutex, it can use a simpler
      work-already-done detection scheme. This commit simplifies the scheme by using
      something similar to the sequence-locking counter scheme. A counter is
      incremented before and after each grace period, so that the counter is odd in
      the midst of the grace period and even otherwise. So if the counter has
      advanced to the second even number that is greater than or equal to the
      snapshot, the required grace period has already happened.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
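      A worked example of the even/odd counter rule:

          unsigned long seq = 3;                  /* odd: a grace period is in flight */
          unsigned long s = (seq + 3) & ~0x1UL;   /* snapshot target: s == 6 */

          /*
           * The in-flight grace period ends at seq == 4, but it may have begun
           * before the snapshot and so cannot be trusted.  The next full grace
           * period spans 5..6, so only seq >= 6 (the second even number at or
           * after the snapshot) guarantees a full grace period has elapsed.
           */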
| * | rcu: Switch synchronize_sched_expedited() to stop_one_cpu() (Peter Zijlstra, 2015-07-17)
      The synchronize_sched_expedited() currently invokes try_stop_cpus(), which
      schedules the stopper kthreads on each online non-idle CPU, and waits until
      all those kthreads are running before letting any of them stop. This is
      disastrous for real-time workloads, which get hit with a preemption that is as
      long as the longest scheduling latency on any CPU, including any non-realtime
      housekeeping CPUs. This commit therefore switches to using stop_one_cpu() on
      each CPU in turn. This avoids inflicting the worst-case scheduling latency on
      the worst-case CPU onto all other CPUs, and also simplifies the code a little
      bit. Follow-up commits will simplify the counter-snapshotting algorithm and
      convert a number of the counters that are now protected by the new
      ->expedited_mutex to non-atomic.
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      [ paulmck: Kept stop_one_cpu(), dropped disabling of "guardrails". ]
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Remove CONFIG_RCU_CPU_STALL_INFO (Paul E. McKenney, 2015-07-17)
      The CONFIG_RCU_CPU_STALL_INFO option has been default-y for a couple of
      releases with no complaints, so it is time to eliminate this Kconfig option
      entirely, so that the long-form RCU CPU stall warnings cannot be disabled.
      This commit does just that.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Stop disabling CPU hotplug in synchronize_rcu_expedited() (Paul E. McKenney, 2015-07-17)
      The fact that tasks could be migrated from leaf to root rcu_node structures
      meant that synchronize_rcu_expedited() had to disable CPU hotplug. However,
      tasks now stay put, so this commit removes the CPU-hotplug disabling from
      synchronize_rcu_expedited().
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Reset rcu_fanout_leaf if out of bounds (Paul E. McKenney, 2015-07-17)
      Currently, if the rcu_fanout_leaf boot parameter is out of bounds (that is,
      less than RCU_FANOUT_LEAF or greater than the number of bits in an unsigned
      long), a warning is issued and execution continues with the out-of-bounds
      value. This can result in all manner of failures, so this patch resets
      rcu_fanout_leaf to RCU_FANOUT_LEAF when an out-of-bounds condition is
      detected.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
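      A sketch of the reset described (bounds check, warn, fall back):

          if (rcu_fanout_leaf < RCU_FANOUT_LEAF ||
              rcu_fanout_leaf > sizeof(unsigned long) * 8) {
                  rcu_fanout_leaf = RCU_FANOUT_LEAF;
                  WARN_ON(1);
                  return;
          }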
| * | rcu: Shut up bogus gcc array bounds warning (Alexander Gordeev, 2015-07-17)
      Because gcc does not realize a loop would never be entered (i.e. in the case
      of rcu_num_lvls == 1):

          for (i = 1; i < rcu_num_lvls; i++)
                  rsp->level[i] = rsp->level[i - 1] + levelcnt[i - 1];

      some compiler versions (pre-5.x?) give a bogus warning:

          kernel/rcu/tree.c: In function ‘rcu_init_one.isra.55’:
          kernel/rcu/tree.c:4108:13: warning: array subscript is above array bounds [-Warray-bounds]
             rsp->level[i] = rsp->level[i - 1] + rsp->levelcnt[i - 1];
                         ^

      Fix that warning by adding an extra item to the rcu_state::level[] array. Once
      the bogus warning is fixed in gcc and the kernel drops support of older
      versions, the dummy item may be removed from the array.
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Suggested-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Simplify arithmetic to calculate number of RCU nodes (Alexander Gordeev, 2015-07-15)
      This update makes the arithmetic that calculates the number of RCU nodes more
      straightforward and easier to read.
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Limit count of static data to the number of RCU levels (Alexander Gordeev, 2015-07-15)
      Although the number of RCU levels may be less than the current maximum of
      four, some static data associated with each level are allocated for all four
      levels. As a result, the extra data never get accessed and just waste memory.
      This update limits the count of allocated items to the number of used RCU
      levels.
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Remove unnecessary fields from rcu_state structure (Alexander Gordeev, 2015-07-15)
      Members rcu_state::levelcnt[] and rcu_state::levelspread[] are only used at
      init. There is no reason to keep them afterwards.
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Limit rcu_capacity[] size to RCU_NUM_LVLS items (Alexander Gordeev, 2015-07-15)
      The number of items in the rcu_capacity[] array is defined by the macro
      MAX_RCU_LVLS. However, that array is never accessed beyond the RCU_NUM_LVLS
      index. Therefore, we can limit the array to RCU_NUM_LVLS items and eliminate
      MAX_RCU_LVLS. As a result, in most cases memory is conserved.
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Limit rcu_state::levelcnt[] to RCU_NUM_LVLS items (Alexander Gordeev, 2015-07-15)
      The variable rcu_num_lvls is limited by the RCU_NUM_LVLS macro. In turn, the
      rcu_state::levelcnt[] array is never accessed beyond rcu_num_lvls. Thus, it is
      safe to limit rcu_state::levelcnt[] to RCU_NUM_LVLS items. Since rcu_num_lvls
      can be changed during boot (as a result of a rcutree.rcu_fanout_leaf kernel
      parameter update), one might assume a new value could overflow RCU_NUM_LVLS.
      However, that is not the case, since leaf-level fanout is only permitted to
      increase, which means rcu_num_lvls can only decrease.
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Simplify rcu_init_geometry() capacity arithmetics (Alexander Gordeev, 2015-07-15)
      Current code suggests that introducing the extra level to the rcu_capacity[]
      array makes some of the arithmetic easier. Well, in fact it appears rather
      confusing and unnecessary.
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Cleanup rcu_init_geometry() code and arithmetics (Alexander Gordeev, 2015-07-15)
      This update simplifies rcu_init_geometry() code flow and makes the calculation
      of the total number of rcu_node structures easier to read. The update relies
      on the fact that num_rcu_lvl[] is never accessed beyond the rcu_num_lvls index
      by the rest of the code. Therefore, there is no need to initialize the whole
      num_rcu_lvl[].
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Remove superfluous local variable in rcu_init_geometry() (Alexander Gordeev, 2015-07-15)
      The local variable 'n' mimics 'nr_cpu_ids' while both are used within one
      function. There is no reason for 'n' to exist whatsoever.
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Panic if RCU tree can not accommodate all CPUs (Alexander Gordeev, 2015-07-15)
      Currently, a condition in which the RCU tree is unable to accommodate the
      configured number of CPUs is not permitted and causes a fallback to
      compile-time values. However, the code has no means to exceed the RCU tree
      capacity either at compile time or at run time. Therefore, if the condition is
      met at run time, it indicates a serious problem elsewhere and should be
      handled with a panic.
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Provide more diagnostics for stalled GP kthread (Paul E. McKenney, 2015-07-15)
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>