* sched: add RT-balance cpu-weight (Gregory Haskins, 2008-01-25)

  Some RT tasks (particularly kthreads) are bound to one specific CPU. It is
  fairly common for two or more bound tasks to get queued up at the same time.
  Consider, for instance, softirq_timer and softirq_sched. A timer goes off in
  an ISR which schedules softirq_thread to run at RT50. Then the timer handler
  determines that it's time to smp-rebalance the system so it schedules
  softirq_sched to run. So we are in a situation where we have two RT50 tasks
  queued, and the system will go into rt-overload condition to request other
  CPUs for help.

  This causes two problems in the current code:

  1) If a high-priority bound task and a low-priority unbounded task queue up
     behind the running task, we will fail to ever relocate the unbounded task
     because we terminate the search on the first unmovable task.

  2) We spend precious futile cycles in the fast-path trying to pull
     overloaded tasks over. It is therefore optimal to strive to avoid the
     overhead altogether if we can cheaply detect the condition before
     overload even occurs.

  This patch tries to achieve this optimization by utilizing the hamming
  weight of the task->cpus_allowed mask. A weight of 1 indicates that the
  task cannot be migrated. We will then utilize this information to skip
  non-migratable tasks and to eliminate unnecessary rebalance attempts.

  We introduce a per-rq variable to count the number of migratable tasks that
  are currently running. We only go into overload if we have more than one RT
  task, AND at least one of them is migratable.

  In addition, we introduce a per-task variable to cache the cpus_allowed
  weight, since the hamming calculation is probably relatively expensive. We
  only update the cached value when the mask is updated, which should be
  relatively infrequent, especially compared to scheduling frequency in the
  fast path.

  Signed-off-by: Gregory Haskins <ghaskins@novell.com>
  Signed-off-by: Steven Rostedt <srostedt@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>

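  A minimal sketch of the idea in 2008-era kernel style; the struct, field and
  helper names below are illustrative stand-ins, not the patch's actual code:

	/* Hypothetical mirrors of the state the patch adds. */
	struct rt_task_info {
		cpumask_t cpus_allowed;
		int nr_cpus_allowed;	/* cached hamming weight of cpus_allowed */
	};

	struct rt_rq_info {
		unsigned long rt_nr_running;	/* queued RT tasks */
		unsigned long rt_nr_migratory;	/* queued RT tasks with weight > 1 */
	};

	/* Recompute the cached weight only when the affinity mask changes. */
	static void rt_set_cpus_allowed(struct rt_task_info *p, cpumask_t newmask)
	{
		p->cpus_allowed = newmask;
		p->nr_cpus_allowed = cpus_weight(newmask);	/* hamming weight */
	}

	/* Overload needs more than one queued RT task AND one that can move. */
	static int rt_rq_overloaded(struct rt_rq_info *rq)
	{
		return rq->rt_nr_running > 1 && rq->rt_nr_migratory > 0;
	}
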
* sched: disable standard balancer for RT tasks (Steven Rostedt, 2008-01-25)

  Since we now take an active approach to load balancing, we don't need to
  balance RT tasks via the normal task balancer. In fact, this code was found
  to pull RT tasks away from CPUs that the active balancer had just moved them
  to, resulting in large latencies.

  Signed-off-by: Steven Rostedt <srostedt@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: push RT tasks from overloaded CPUs (Steven Rostedt, 2008-01-25)

  This patch adds pushing of overloaded RT tasks from a runqueue that is
  having tasks (most likely RT tasks) added to it.

  TODO: We don't cover the case of waking of new RT tasks (yet).

  Signed-off-by: Steven Rostedt <srostedt@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: pull RT tasks from overloaded runqueues (Steven Rostedt, 2008-01-25)

  This patch adds the algorithm to pull tasks from RT overloaded runqueues.

  When a pull RT is initiated, all overloaded runqueues are examined for an
  RT task that is higher in prio than the highest-prio task queued on the
  target runqueue. If such a task is found on another runqueue, it is pulled
  to the target runqueue.

  Signed-off-by: Steven Rostedt <srostedt@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>

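  A rough sketch of that decision; pick_next_highest_rt() and
  migrate_rt_task() are hypothetical stand-ins for the patch's real machinery,
  and rt_overload_mask is the assumed mask of overloaded CPUs:

	static void pull_rt_tasks(struct rq *this_rq)
	{
		int cpu;

		for_each_cpu_mask(cpu, rt_overload_mask) {
			struct rq *src_rq = cpu_rq(cpu);
			struct task_struct *p;

			if (src_rq == this_rq)
				continue;

			/* highest-prio task queued (but not running) on the source */
			p = pick_next_highest_rt(src_rq);

			/* numerically lower prio value == higher priority */
			if (p && p->prio < this_rq->rt.highest_prio)
				migrate_rt_task(p, this_rq);
		}
	}
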
* sched: add rt-overload tracking (Steven Rostedt, 2008-01-25)

  This patch adds an RT overload accounting system. When a runqueue has more
  than one RT task queued, it is marked as overloaded; that is, it is a
  candidate to have RT tasks pulled from it.

  Signed-off-by: Steven Rostedt <srostedt@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>

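  The accounting can be as small as a cpumask plus a counter for a cheap
  fast-path test; a sketch under assumed names, using the 2008-era cpumask
  API:

	static cpumask_t rt_overload_mask;	/* CPUs with >1 queued RT task */
	static atomic_t rto_count = ATOMIC_INIT(0);

	static inline void rt_set_overload(int cpu)
	{
		cpu_set(cpu, rt_overload_mask);
		atomic_inc(&rto_count);
	}

	static inline void rt_clear_overload(int cpu)
	{
		atomic_dec(&rto_count);
		cpu_clear(cpu, rt_overload_mask);
	}

	/* The fast path only needs "is anyone overloaded at all?" */
	static inline int rt_overloaded(void)
	{
		return atomic_read(&rto_count);
	}
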
* sched: add RT task pushing (Steven Rostedt, 2008-01-25)

  This patch adds an algorithm to push extra RT tasks off a run queue to
  other CPU runqueues.

  When more than one RT task is added to a run queue, this algorithm takes an
  assertive approach to push the RT tasks that are not running onto other run
  queues that have lower priority. The way this works is that the highest RT
  task that is not running is looked at and we examine the runqueues on the
  CPUs in that task's affinity mask. We find the runqueue with the lowest
  prio in the CPU affinity of the picked task, and if it is lower in prio
  than the picked task, we push the task onto that CPU runqueue.

  We continue pushing RT tasks off the current runqueue until we don't push
  any more. The algorithm stops when the next highest RT task can't preempt
  any other processes on other CPUs. A sketch of the target selection follows
  below.

  TODO: The algorithm may stop when there are still RT tasks that can be
  migrated. Specifically, if the highest non-running RT task's CPU affinity
  is restricted to CPUs that are running higher-priority tasks, there may be
  a lower-priority task queued that has an affinity with a CPU that is
  running a lower-priority task that it could be migrated to. This patch set
  does not address this issue.

  Note: checkpatch reveals two over-80-character instances. I'm not sure that
  breaking them up will help visually, so I left them as is.

  Signed-off-by: Steven Rostedt <srostedt@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>

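  The core of the push decision is picking the lowest-priority runqueue
  within the task's affinity; a sketch with assumed names (rq->rt.highest_prio
  refers to the per-rq tracking introduced further down this series):

	static struct rq *find_lowest_rq(struct task_struct *task)
	{
		struct rq *lowest = NULL;
		int cpu;

		for_each_cpu_mask(cpu, task->cpus_allowed) {
			struct rq *rq = cpu_rq(cpu);

			/* larger highest_prio value == lower-priority runqueue */
			if (!lowest || rq->rt.highest_prio > lowest->rt.highest_prio)
				lowest = rq;
		}

		/* only push if the task would preempt that runqueue */
		if (lowest && task->prio < lowest->rt.highest_prio)
			return lowest;

		return NULL;
	}
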
* sched: track highest prio task queued (Steven Rostedt, 2008-01-25)

  This patch adds accounting to each runqueue to keep track of the highest
  prio task queued on the run queue. We only care about RT tasks, so if the
  run queue does not contain any active RT tasks, its priority will be
  considered MAX_RT_PRIO.

  This information will be used for later patches.

  Signed-off-by: Steven Rostedt <srostedt@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>

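  A sketch of the enqueue/dequeue bookkeeping this implies; the field and
  helper names are assumptions (next_queued_rt_prio() stands in for whatever
  rescan the real code does on dequeue):

	static inline void rt_prio_enqueued(struct rq *rq, int prio)
	{
		/* numerically lower prio value == higher priority */
		if (prio < rq->rt.highest_prio)
			rq->rt.highest_prio = prio;
	}

	static inline void rt_prio_dequeued(struct rq *rq, int prio)
	{
		if (rq->rt.rt_nr_running == 0)
			rq->rt.highest_prio = MAX_RT_PRIO;	/* no RT tasks left */
		else if (prio == rq->rt.highest_prio)
			rq->rt.highest_prio = next_queued_rt_prio(rq);
	}
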
* sched: count # of queued RT tasks (Steven Rostedt, 2008-01-25)

  This patch adds accounting to keep track of the number of RT tasks running
  on a runqueue. This information will be used in later patches.

  Signed-off-by: Steven Rostedt <srostedt@redhat.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>

* softlockup: automatically detect hung TASK_UNINTERRUPTIBLE tasks (Ingo Molnar, 2008-01-25)

  this patch extends the soft-lockup detector to automatically detect hung
  TASK_UNINTERRUPTIBLE tasks. Such hung tasks are printed the following way:

	INFO: task prctl:3042 blocked for more than 120 seconds.
	"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message
	prctl         D fd5e3793     0  3042   2997
	       f6050f38 00000046 00000001 fd5e3793 00000009 c06d8264 c06dae80 00000286
	       f6050f40 f6050f00 f7d34d90 f7d34fc8 c1e1be80 00000001 f6050000 00000000
	       f7e92d00 00000286 f6050f18 c0489d1a f6050f40 00006605 00000000 c0133a5b
	Call Trace:
	 [<c04883a5>] schedule_timeout+0x6d/0x8b
	 [<c04883d8>] schedule_timeout_uninterruptible+0x15/0x17
	 [<c0133a76>] msleep+0x10/0x16
	 [<c0138974>] sys_prctl+0x30/0x1e2
	 [<c0104c52>] sysenter_past_esp+0x5f/0xa5
	=======================
	2 locks held by prctl/3042:
	#0: (&sb->s_type->i_mutex_key#5){--..}, at: [<c0197d11>] do_fsync+0x38/0x7a
	#1: (jbd_handle){--..}, at: [<c01ca3d2>] journal_start+0xc7/0xe9

  the current default timeout is 120 seconds. Such messages are printed up to
  10 times per bootup. If the system has crashed already then the messages
  are not printed.

  if lockdep is enabled then all held locks are printed as well.

  this feature is a natural extension to the softlockup-detector (kernel
  locked up without scheduling) and to the NMI watchdog (kernel locked up
  with IRQs disabled).

  [ Gautham R Shenoy <ego@in.ibm.com>: CPU hotplug fixes. ]
  [ Andrew Morton <akpm@linux-foundation.org>: build warning fix. ]

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>

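  The detection itself can key off the task's context-switch counters; a
  sketch with hypothetical naming (last_switch_count models the per-task
  state such a detector needs to remember between checks):

	static void check_hung_task(struct task_struct *t, unsigned long timeout)
	{
		/* voluntary + involuntary switches: has the task made any progress? */
		unsigned long switch_count = t->nvcsw + t->nivcsw;

		if (switch_count == t->last_switch_count) {
			/* no context switch since the last check interval */
			printk(KERN_ERR
			       "INFO: task %s:%d blocked for more than %lu seconds.\n",
			       t->comm, t->pid, timeout);
			return;
		}

		t->last_switch_count = switch_count;
	}
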
* cpu-hotplug: fix build on !CONFIG_SMP (Ingo Molnar, 2008-01-25)

  fix build on !CONFIG_SMP.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>

* cpu-hotplug: replace per-subsystem mutexes with get_online_cpus() (Gautham R Shenoy, 2008-01-25)

  This patch converts the known per-subsystem mutexes to
  get_online_cpus()/put_online_cpus(). It also eliminates the
  CPU_LOCK_ACQUIRE and CPU_LOCK_RELEASE hotplug notification events.

  Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>

* cpu-hotplug: replace lock_cpu_hotplug() with get_online_cpus() (Gautham R Shenoy, 2008-01-25)

  Replace all lock_cpu_hotplug/unlock_cpu_hotplug from the kernel and use
  get_online_cpus and put_online_cpus instead, as this highlights the
  refcount semantics in these operations.

  The new API guarantees protection against the cpu-hotplug operation, but it
  doesn't guarantee serialized access to any of the local data structures.
  Hence the changes need to be reviewed.

  In case of pseries_add_processor/pseries_remove_processor, use
  cpu_maps_update_begin()/cpu_maps_update_done() as we're modifying the
  cpu_present_map there.

  Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>

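  A typical converted call site looks like this; the loop body is a stand-in
  for real per-cpu work:

	static void count_online_cpus_example(void)
	{
		int cpu, n = 0;

		get_online_cpus();	/* hotplug held off until we're done */
		for_each_online_cpu(cpu)
			n++;		/* stand-in for real per-cpu work */
		put_online_cpus();

		printk(KERN_INFO "saw %d online cpus\n", n);
	}
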
* cpu-hotplug: refcount based cpu hotplug (Gautham R Shenoy, 2008-01-25)

  This patch implements a Refcount + Waitqueue based model for cpu-hotplug.

  Now, a thread which wants to prevent cpu-hotplug will bump up a global
  refcount, and the thread which wants to perform a cpu-hotplug operation
  will block till the global refcount goes to zero.

  The readers, if any, during an ongoing cpu-hotplug operation are blocked
  until the cpu-hotplug operation is over.

  Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
  Signed-off-by: Paul Jackson <pj@sgi.com> [For !CONFIG_HOTPLUG_CPU ]
  Signed-off-by: Ingo Molnar <mingo@elte.hu>

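  A simplified model of that refcount + waitqueue scheme; the names are
  hypothetical and the real locking is more careful, but the shape is:
  writers exclude new readers, then wait for existing ones to drain:

	static DEFINE_MUTEX(hp_lock);
	static DECLARE_WAIT_QUEUE_HEAD(hp_wq);
	static atomic_t hp_readers = ATOMIC_INIT(0);

	void hp_read_lock(void)			/* cf. get_online_cpus() */
	{
		mutex_lock(&hp_lock);		/* blocks while a hotplug op runs */
		atomic_inc(&hp_readers);
		mutex_unlock(&hp_lock);
	}

	void hp_read_unlock(void)		/* cf. put_online_cpus() */
	{
		if (atomic_dec_and_test(&hp_readers))
			wake_up(&hp_wq);	/* last reader out wakes the writer */
	}

	void hp_write_lock(void)		/* cf. starting a hotplug operation */
	{
		mutex_lock(&hp_lock);		/* no new readers can enter */
		wait_event(hp_wq, atomic_read(&hp_readers) == 0);
	}

	void hp_write_unlock(void)
	{
		mutex_unlock(&hp_lock);
	}
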
* sched: group scheduler, fix fairness of cpu bandwidth allocation for task groups (Srivatsa Vaddagiri, 2008-01-25)

  The current load balancing scheme isn't good enough for precise group
  fairness.

  For example: on an 8-cpu system, I created 3 groups as under:

	a = 8 tasks (cpu.shares = 1024)
	b = 4 tasks (cpu.shares = 1024)
	c = 3 tasks (cpu.shares = 1024)

  a, b and c are task groups that have equal weight. We would expect each of
  the groups to receive 33.33% of cpu bandwidth under a fair scheduler.

  This is what I get with the latest scheduler git tree:

	------------------------------------------------------------------------
	Col1 | Col2    | Col3  | Col4
	-----|---------|-------|------------------------------------------------
	a    | 277.676 | 57.8% | 54.1% 54.1% 54.1% 54.2% 56.7% 62.2% 62.8% 64.5%
	b    | 116.108 | 24.2% | 47.4% 48.1% 48.7% 49.3%
	c    |  86.326 | 18.0% | 47.5% 47.9% 48.5%
	------------------------------------------------------------------------

  Explanation of o/p:

	Col1 -> Group name
	Col2 -> Cumulative execution time (in seconds) received by all tasks of
		that group in a 60sec window across 8 cpus
	Col3 -> CPU bandwidth received by the group in the 60sec window,
		expressed in percentage. Col3 data is derived as:
			Col3 = 100 * Col2 / (NR_CPUS * 60)
	Col4 -> CPU bandwidth received by each individual task of the group.
			Col4 = 100 * cpu_time_recd_by_task / 60

  [I can share the test case that produces a similar o/p if reqd]

  The deviation from desired group fairness is as below:

	a = +24.47%
	b =  -9.13%
	c = -15.33%

  which is quite high.

  After the patch below is applied, here are the results:

	------------------------------------------------------------------------
	Col1 | Col2    | Col3  | Col4
	-----|---------|-------|------------------------------------------------
	a    | 163.112 | 34.0% | 33.2% 33.4% 33.5% 33.5% 33.7% 34.4% 34.8% 35.3%
	b    | 156.220 | 32.5% | 63.3% 64.5% 66.1% 66.5%
	c    | 160.653 | 33.5% | 85.8% 90.6% 91.4%
	------------------------------------------------------------------------

  Deviation from desired group fairness is as below:

	a = +0.67%
	b = -0.83%
	c = +0.17%

  which is far better IMO. Most of other runs have yielded a deviation within
  +-2% at the most, which is good.

  Why do we see bad (group) fairness with the current scheduler?
  ==============================================================

  Currently a cpu's weight is just the summation of individual task weights.
  This can yield incorrect results. For example, consider three groups as
  below on a 2-cpu system:

	CPU0	CPU1
	---------------------------
	A (10)	B(5)
		C(5)
	---------------------------

  Group A has 10 tasks, all on CPU0; Groups B and C have 5 tasks each, all of
  which are on CPU1. Each task has the same weight (NICE_0_LOAD = 1024).

  The current scheme would yield a cpu weight of 10240 (10*1024) for each cpu
  and the load balancer will think both CPUs are perfectly balanced and won't
  move around any tasks. This, however, would yield this bandwidth:

	A = 50%
	B = 25%
	C = 25%

  which is not the desired result.

  What's changing in the patch?
  =============================

  - How cpu weights are calculated when CONFIG_FAIR_GROUP_SCHED is defined
    (see below)
  - API change: two tunables introduced in sysfs (under SCHED_DEBUG) to
    control the frequency at which the load balance monitor thread runs

  The basic change made in this patch is how cpu weight (rq->load.weight) is
  calculated. It is now calculated as the summation of group weights on a
  cpu, rather than summation of task weights. Weight exerted by a group on a
  cpu is dependent on the shares allocated to it and also the number of tasks
  the group has on that cpu compared to the total number of (runnable) tasks
  the group has in the system.

  Let:

	W(K,i)  = Weight of group K on cpu i
	T(K,i)  = Task load present in group K's cfs_rq on cpu i
	T(K)    = Total task load of group K across various cpus
	S(K)    = Shares allocated to group K
	NRCPUS  = Number of online cpus in the scheduler domain to which
		  group K is assigned

  Then:

	W(K,i) = S(K) * NRCPUS * T(K,i) / T(K)

  A load balance monitor thread is created at bootup, which periodically runs
  and adjusts the group's weight on each cpu. To avoid its overhead, two
  min/max tunables are introduced (under SCHED_DEBUG) to control the rate at
  which it runs.

  Fixes from: Peter Zijlstra <a.p.zijlstra@chello.nl>

  - don't start the load_balance_monitor when there is only a single cpu.
  - rename the kthread because it's currently longer than TASK_COMM_LEN

  Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>

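  The new weight rule is easy to sanity-check in isolation; a sketch (all
  names here are illustrative, not the patch's code):

	/* W(K,i) = S(K) * NRCPUS * T(K,i) / T(K) */
	static unsigned long group_weight_on_cpu(unsigned long shares,     /* S(K)   */
						 unsigned long load_here,  /* T(K,i) */
						 unsigned long load_total, /* T(K)   */
						 unsigned int nr_cpus)     /* NRCPUS */
	{
		if (!load_total)
			return 0;
		return shares * nr_cpus * load_here / load_total;
	}

  Plugging in the 2-cpu example above: group A (shares 1024, all 10 of its
  tasks on CPU0) gets W(A,0) = 1024 * 2 * 1 = 2048 and W(A,1) = 0, while B
  and C each exert 2048 on CPU1. The balancer now sees CPU1 as twice as
  loaded as CPU0 and spreads the groups out, instead of seeing a false
  10240/10240 balance.
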
* sched: introduce a mutex and corresponding API to serialize access to doms_cur[] array (Srivatsa Vaddagiri, 2008-01-25)

  The doms_cur[] array represents various scheduling domains which are
  mutually exclusive. Currently cpusets code can modify this array (by
  calling partition_sched_domains()) as a result of the user modifying the
  sched_load_balance flag for various cpusets.

  This patch introduces a mutex and corresponding API (only when
  CONFIG_FAIR_GROUP_SCHED is defined) which allows a reader to safely read
  the doms_cur[] array without worrying about concurrent modifications to the
  array.

  The fair group scheduler code (introduced in the next patch of this series)
  makes use of this mutex to walk through the doms_cur[] array while
  rebalancing shares of task groups across cpus.

  Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>

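  The shape of such an API, as a sketch (the function names are assumptions,
  guarded the same way the patch describes):

	#ifdef CONFIG_FAIR_GROUP_SCHED
	static DEFINE_MUTEX(doms_cur_mutex);	/* protects reads/updates of doms_cur[] */

	static inline void lock_doms_cur(void)
	{
		mutex_lock(&doms_cur_mutex);
	}

	static inline void unlock_doms_cur(void)
	{
		mutex_unlock(&doms_cur_mutex);
	}
	#else
	static inline void lock_doms_cur(void) { }
	static inline void unlock_doms_cur(void) { }
	#endif
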
* sched: group scheduling, change how cpu load is calculated (Srivatsa Vaddagiri, 2008-01-25)

  This patch changes how the cpu load exerted by fair_sched_class tasks is
  calculated. Load exerted by fair_sched_class tasks on a cpu is now a
  summation of the group weights, rather than summation of task weights.
  Weight exerted by a group on a cpu is dependent on the shares allocated to
  it.

  This version of the patch has a minor impact on code size, but should have
  no runtime/functional impact for !CONFIG_FAIR_GROUP_SCHED.

  Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: group scheduling, minor fixes (Srivatsa Vaddagiri, 2008-01-25)

  Minor bug fixes for the group scheduler:

  - Use a mutex to serialize add/remove of task groups and also when changing
    shares of a task group. Use the same mutex when printing cfs_rq debugging
    stats for various task groups.

  - Use list_for_each_entry_rcu in for_each_leaf_cfs_rq macro (when walking
    task group list)

  Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: group scheduling code cleanup (Srivatsa Vaddagiri, 2008-01-25)

  Minor cleanups:

  - Fix coding style
  - Remove obsolete comment

  Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove printk_clock references from ia64 (Ingo Molnar, 2008-01-25)

  remove remaining printk_clock references from ia64.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: remove printk_clock() (Ingo Molnar, 2008-01-25)

  printk_clock() is obsolete - it has been replaced with cpu_clock().

  Signed-off-by: Ingo Molnar <mingo@elte.hu>

* sched: fix CONFIG_PRINT_TIME's reliance on sched_clock() (Ingo Molnar, 2008-01-25)

  Stefano Brivio reported weird printk timestamp behavior during CPU
  frequency changes:

	http://bugzilla.kernel.org/show_bug.cgi?id=9475

  fix CONFIG_PRINT_TIME's reliance on sched_clock() and use cpu_clock()
  instead.

  Reported-and-bisected-by: Stefano Brivio <stefano.brivio@polimi.it>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>

* printk: make printk more robust by not allowing recursion (Ingo Molnar, 2008-01-25)

  make printk more robust by allowing recursion only if there's a crash going
  on. Also add recursion detection.

  I've tested it with an artificially injected printk recursion - instead of
  a lockup or spontaneous reboot or other crash, the output was a well
  controlled:

	[   41.057335] SysRq : <2>BUG: recent printk recursion!
	[   41.057335] loglevel0-8 reBoot Crashdump show-all-locks(D) tErm
	Full kIll saK showMem Nice powerOff showPc show-all-timers(Q) unRaw
	Sync showTasks Unmount shoW-blocked-tasks

  also do all this printk-debug logic with irqs disabled.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Reviewed-by: Nick Piggin <npiggin@suse.de>

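  A simplified sketch of the guard; emit_log() is a hypothetical low-level
  emitter, and the real code tracks recursion per-cpu rather than with one
  global flag:

	static int emit_log(const char *msg);	/* hypothetical low-level emitter */
	static int in_printk;			/* simplification: real code is per-cpu */

	int printk_guarded(const char *msg)
	{
		unsigned long flags;
		int ret = 0;

		local_irq_save(flags);	/* do all the debug logic with irqs disabled */

		if (in_printk && !oops_in_progress) {
			/* recursion outside a crash: flag it instead of recursing */
			emit_log("BUG: recent printk recursion!\n");
		} else {
			in_printk = 1;
			ret = emit_log(msg);	/* recursion allowed only mid-crash */
			in_printk = 0;
		}

		local_irq_restore(flags);
		return ret;
	}
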
* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/selinux-2.6 (Linus Torvalds, 2008-01-25)

  * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/selinux-2.6:
	selinux: make mls_compute_sid always polyinstantiate
	security/selinux: constify function pointer tables and fields
	security: add a secctx_to_secid() hook
	security: call security_file_permission from rw_verify_area
	security: remove security_sb_post_mountroot hook
	Security: remove security.h include from mm.h
	Security: remove security_file_mmap hook sparse-warnings (NULL as 0).
	Security: add get, set, and cloning of superblock security information
	security/selinux: Add missing "space"

| * selinux: make mls_compute_sid always polyinstantiate (Eamon Walsh, 2008-01-24)

  This patch removes the requirement that the new and related object types
  differ in order to polyinstantiate by MLS level. This allows MLS
  polyinstantiation to occur in the absence of explicit type_member rules or
  when the type has not changed.

  Potential users of this support include pam_namespace.so (directory
  polyinstantiation) and the SELinux X support (property polyinstantiation).

  Signed-off-by: Eamon Walsh <ewalsh@tycho.nsa.gov>
  Acked-by: Stephen Smalley <sds@tycho.nsa.gov>
  Signed-off-by: James Morris <jmorris@namei.org>

| * security/selinux: constify function pointer tables and fields (Jan Engelhardt, 2008-01-24)

  Constify function pointer tables and fields.

  Signed-off-by: Jan Engelhardt <jengelh@computergmbh.de>
  Signed-off-by: James Morris <jmorris@namei.org>

| * security: add a secctx_to_secid() hook (David Howells, 2008-01-24)

  Add a secctx_to_secid() LSM hook to go along with the existing
  secid_to_secctx() LSM hook. This patch also includes the SELinux
  implementation for this hook.

  Signed-off-by: Paul Moore <paul.moore@hp.com>
  Acked-by: Stephen Smalley <sds@tycho.nsa.gov>
  Signed-off-by: James Morris <jmorris@namei.org>

| * security: call security_file_permission from rw_verify_area (James Morris, 2008-01-24)

  All instances of rw_verify_area() are followed by a call to
  security_file_permission(), so just call the latter from the former.

  Acked-by: Eric Paris <eparis@redhat.com>
  Signed-off-by: James Morris <jmorris@namei.org>

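  The consolidation means every read/write path gets the LSM check for free;
  roughly as below, with the surrounding range/locking checks elided and the
  exact signature an assumption about the fs/read_write.c of the era:

	int rw_verify_area(int read_write, struct file *file,
			   loff_t *ppos, size_t count)
	{
		/* ... existing offset/overflow/mandatory-locking checks ... */

		/* the check callers previously made themselves */
		return security_file_permission(file,
				read_write == READ ? MAY_READ : MAY_WRITE);
	}
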
| * security: remove security_sb_post_mountroot hook (H. Peter Anvin, 2008-01-24)

  The security_sb_post_mountroot() hook is long-since obsolete, and is
  fundamentally broken: it is never invoked if someone uses initramfs. This
  is particularly damaging, because the existence of this hook has been used
  as motivation for not using initramfs.

  Stephen Smalley confirmed on 2007-07-19 that this hook was originally used
  by SELinux but can now be safely removed:

	http://marc.info/?l=linux-kernel&m=118485683612916&w=2

  Cc: Stephen Smalley <sds@tycho.nsa.gov>
  Cc: James Morris <jmorris@namei.org>
  Cc: Eric Paris <eparis@parisplace.org>
  Cc: Chris Wright <chrisw@sous-sol.org>
  Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  Signed-off-by: James Morris <jmorris@namei.org>

| * Security: remove security.h include from mm.h (James Morris, 2008-01-24)

  Remove security.h include from mm.h, as it is only needed for a single
  extern declaration, and pulls in all kinds of crud.

  Fine-by-me: David Chinner <dgc@sgi.com>
  Acked-by: Eric Paris <eparis@redhat.com>
  Signed-off-by: James Morris <jmorris@namei.org>

| * Security: remove security_file_mmap hook sparse-warnings (NULL as 0). (Richard Knutsson, 2008-01-24)

  Fixing:

	CHECK   mm/mmap.c
	mm/mmap.c:1623:29: warning: Using plain integer as NULL pointer
	mm/mmap.c:1623:29: warning: Using plain integer as NULL pointer
	mm/mmap.c:1944:29: warning: Using plain integer as NULL pointer

  Signed-off-by: Richard Knutsson <ricknu-0@student.ltu.se>
  Signed-off-by: James Morris <jmorris@namei.org>

| * Security: add get, set, and cloning of superblock security information (Eric Paris, 2008-01-24)

  Adds security_get_sb_mnt_opts, security_set_sb_mnt_opts, and
  security_clone_sb_mnt_opts to the LSM and to SELinux. This will allow
  filesystems to directly own and control all of their mount options if they
  so choose. This interface deals only with option identifiers and strings,
  so it should be generic enough for any LSM which may come in the future.

  Filesystems which pass text mount data around in the kernel (almost all of
  them) need not currently make use of this interface when dealing with
  SELinux, since it will still parse those strings as it always has. I assume
  future LSMs would do the same.

  NFS is the primary FS which does not use text mount data and thus must make
  use of this interface.

  An LSM would need to implement these functions only if it had mount-time
  options, such as SELinux's context= or fscontext=. If the LSM has no
  mount-time options, it can simply not implement them and let the dummy ops
  take care of things.

  An LSM other than SELinux would need to define new option numbers in
  security.h, and any FS which decides to own its own security options would
  need to be patched to use this new interface for every possible LSM. This
  is because it was stated to me very clearly that LSMs should not attempt to
  understand FS mount data, and the burden to understand security should be
  in the FS which owns the options.

  Signed-off-by: Eric Paris <eparis@redhat.com>
  Acked-by: Stephen D. Smalley <sds@tycho.nsa.gov>
  Signed-off-by: James Morris <jmorris@namei.org>

| * security/selinux: Add missing "space" (Joe Perches, 2008-01-24)

  Add missing space.

  Signed-off-by: Joe Perches <joe@perches.com>
  Signed-off-by: James Morris <jmorris@namei.org>

* | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/hskinnemoen/avr32-2.6 (Linus Torvalds, 2008-01-25)

  * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/hskinnemoen/avr32-2.6:
	[AVR32] extint: Set initial irq type to low level
	[AVR32] extint: change set_irq_type() handling
	[AVR32] NMI debugging
	[AVR32] constify function pointer tables
	[AVR32] ATNGW100: Update defconfig
	[AVR32] ATSTK1002: Update defconfig
	[AVR32] Kconfig: Choose daughterboard instead of CPU
	[AVR32] Add support for ATSTK1003 and ATSTK1004
	[AVR32] Clean up external DAC setup code
	[AVR32] ATSTK1000: Move gpio-leds setup to setup.c
	[AVR32] Add support for AT32AP7001 and AT32AP7002
	[AVR32] Provide more CPU information in /proc/cpuinfo and dmesg
	[AVR32] Oprofile support
	[AVR32] Include instrumentation menu
	Disable VGA text console for AVR32 architecture
	[AVR32] Enable debugging only when needed
	ptrace: Call arch_ptrace_attach() when request=PTRACE_TRACEME
	[AVR32] Remove redundant try_to_freeze() call from do_signal()
	[AVR32] Drop GFP_COMP for DMA memory allocations

| * | [AVR32] extint: Set initial irq type to low level (Haavard Skinnemoen, 2008-01-25)

  David Brownell pointed out a mismatch in the avr32 extint code:

	> I noticed a small glitch that's not fixed by this patch: the
	> initial type is falling edge, but IRQ_TYPE_NONE is mapped to
	> IRQ_TYPE_LEVEL_LOW. Potentially surprising.

  Fix it by setting the initial type (and handler) to low level, matching the
  meaning of IRQ_TYPE_NONE.

  Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>

| * | [AVR32] extint: change set_irq_type() handling (David Brownell, 2008-01-25)

  Update the AVR32 EIC code to use the new __set_irq_handler_unlocked()
  call, getting rid of one more instance of this widespread problem.

  Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
  Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>

| * | [AVR32] NMI debugging (Haavard Skinnemoen, 2008-01-25)

  Change the NMI handler to use the die notifier chain to signal anyone who
  cares. Add a simple "nmi debugger" which hooks into this chain and that may
  dump registers, task state, etc. when it happens.

  Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>

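  Hooking the die notifier chain generally looks like the sketch below;
  DIE_NMI is an assumption about the die_val this architecture would report,
  and the register dump is a stand-in for whatever the "nmi debugger" does:

	static int nmi_debug_notify(struct notifier_block *self,
				    unsigned long val, void *data)
	{
		struct die_args *args = data;

		if (val == DIE_NMI)		/* assumed die_val for NMIs here */
			show_regs(args->regs);	/* dump register/task state */

		return NOTIFY_OK;
	}

	static struct notifier_block nmi_debug_nb = {
		.notifier_call = nmi_debug_notify,
	};

	static int __init nmi_debug_init(void)
	{
		return register_die_notifier(&nmi_debug_nb);
	}
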
| * | [AVR32] constify function pointer tables (Jan Engelhardt, 2008-01-25)

  Signed-off-by: Jan Engelhardt <jengelh@computergmbh.de>
  Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>

| * | [AVR32] ATNGW100: Update defconfig (Haavard Skinnemoen, 2008-01-25)

  Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>

| * | [AVR32] ATSTK1002: Update defconfig (Haavard Skinnemoen, 2008-01-25)

  Turn off a few useless options, enable a few useful ones and enable quite a
  few new drivers.

  Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>

| * | [AVR32] Kconfig: Choose daughterboard instead of CPU (Haavard Skinnemoen, 2008-01-25)

  Remove the CPU selection menu and instead let it be selected by the board
  or daughterboard option. Add daughterboard selection for ATSTK1000 (this
  was previously determined based on CPU type).

  Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>

| * | [AVR32] Add support for ATSTK1003 and ATSTK1004 (Haavard Skinnemoen, 2008-01-25)

  ATSTK1003 and ATSTK1004 are CPU daughterboards for ATSTK1000 featuring the
  AT32AP7001 and AT32AP7002 CPUs, respectively.

  Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>

| * | [AVR32] Clean up external DAC setup code (Haavard Skinnemoen, 2008-01-25)

  Reduce the ridiculous amount of #ifdef clutter in atstk1002.c a bit by
  moving all the extdac stuff into its own function and providing an empty
  stub for the case when it isn't wanted.

  Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>

| * | [AVR32] ATSTK1000: Move gpio-leds setup to setup.c (Haavard Skinnemoen, 2008-01-25)

  There may be other boards than STK1002 that want to use the leds on
  STK1000. Move the setup to stk1000 common code to make it easier to reuse.

  Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>

| * | [AVR32] Add support for AT32AP7001 and AT32AP7002 (Haavard Skinnemoen, 2008-01-25)

  These are derivatives of the AT32AP7000 chip, which means that most of the
  code stays the same. Rename a few files, functions, definitions and config
  symbols to reflect that they apply to all AP700x chips, and exclude some
  platform devices from chips where they aren't present.

  Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>

| * | [AVR32] Provide more CPU information in /proc/cpuinfo and dmesg (Haavard Skinnemoen, 2008-01-25)

  Add the following fields to /proc/cpuinfo:

	* chip type and revision (from the JTAG chip id)
	* cpu MHz (from clk_get_rate())
	* features (from the CONFIG0 register)

  Also rename "cpu family" to "cpu arch" and "cpu type" to "cpu core" to
  remove some ambiguity.

  Show chip type and revision at bootup, and clarify that the other kinds of
  IDs that we're already printing are for the cpu core and architecture.

  Rename "AP7000" to "AP7" since that's the name of the core.

  Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>

| * | [AVR32] Oprofile support (Haavard Skinnemoen, 2008-01-25)

  This adds the necessary architecture code to run oprofile on AVR32, using
  the performance counters documented by the AVR32 Architecture Manual.

  Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
  Acked-by: Philippe Elie <phil.el@wanadoo.fr>

| * | [AVR32] Include instrumentation menu (Haavard Skinnemoen, 2008-01-25)

  Remove KPROBES option from Kconfig.debug and include
  kernel/Kconfig.instrumentation.

  Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>

| * | Disable VGA text console for AVR32 architecture (Hans-Christian Egtvedt, 2008-01-25)

  This patch disables the VGA text console for the AVR32 architecture, since
  it does not provide the vga.h include file. AVR32 users should use the
  framebuffer console instead if they need a console on an attached display.

  Signed-off-by: Hans-Christian Egtvedt <hcegtvedt@atmel.com>
  Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>

| * | [AVR32] Enable debugging only when needed (Haavard Skinnemoen, 2008-01-25)

  Keep track of processes being debugged (including the kernel itself) and
  turn the OCD system on and off as appropriate. Since enabling debugging
  turns off some optimizations in the CPU core, this fixes the issue that
  enabling KProbes support or simply running a program under gdbserver will
  reduce system performance significantly until the next reboot.

  The CPU performance will still be reduced for all processes while a process
  is being debugged, but this is a lot better than reducing the performance
  forever.

  Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>

| * | ptrace: Call arch_ptrace_attach() when request=PTRACE_TRACEME (Haavard Skinnemoen, 2008-01-25)

  arch_ptrace_attach() is a hook that allows the architecture to do
  book-keeping after a ptrace attach. This patch adds a call to this hook
  when handling a PTRACE_TRACEME request as well.

  Currently only one architecture, m32r, implements this hook. When called,
  it initializes a number of debug trap slots in the ptraced task's thread
  struct, and it looks to me like this is the right thing to do after a
  PTRACE_TRACEME request as well, not only after PTRACE_ATTACH. Please
  correct me if I'm wrong.

  I want to use this hook on AVR32 to turn the debugging hardware on when a
  process is actually being debugged and keep it off otherwise. To be able to
  do this, I need to intercept PTRACE_TRACEME and PTRACE_ATTACH, as well as
  PTRACE_DETACH and thread exit. The latter two can be handled by existing
  hooks.

  Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>

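  The change boils down to mirroring the PTRACE_ATTACH path in the TRACEME
  case; a simplified sketch of the dispatch, with the signatures condensed
  for illustration rather than copied from kernel/ptrace.c:

	long ptrace_dispatch(struct task_struct *child, long request)
	{
		long ret = -EIO;

		switch (request) {
		case PTRACE_TRACEME:
			ret = ptrace_traceme();
			if (!ret)
				arch_ptrace_attach(current);	/* new: same hook as ATTACH */
			break;
		case PTRACE_ATTACH:
			ret = ptrace_attach(child);
			if (!ret)
				arch_ptrace_attach(child);	/* existing behaviour */
			break;
		}

		return ret;
	}
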