...
* powerpc/oprofile: Don't build server oprofile drivers on 64-bit BookE (Benjamin Herrenschmidt, 2010-07-14)
  They will fail to build due to the lack of mtmsrd, and wouldn't be useful anyway.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/book3e: Adjust the page sizes list based on MMU config (Benjamin Herrenschmidt, 2010-07-14)
  Use the MMU config registers to scan for available direct and indirect page sizes and print out the result. Will be needed for the future hugetlbfs implementation.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/book3e: Add TLB dump in xmon for Book3E (Benjamin Herrenschmidt, 2010-07-14)
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/book3e: Fix single step when using HW page tables (Benjamin Herrenschmidt, 2010-07-14)
  We patch the TLB miss exception vectors to point to alternate functions when using HW page tables on BookE. However, we were patching in a new branch in the first instruction of the exception handler instead of the second one, thus overriding the nop that is in the first instruction. This causes problems when single stepping, as we rely on that nop for the single step to stop properly within the exception vector range rather than on the target of the branch.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/book3e: Add generic 64-bit idle powersave support (Benjamin Herrenschmidt, 2010-07-14)
  We use a similar technique to ppc32: we set a thread local flag to indicate that we are about to enter or have entered the stop state, and have fixup code in the async interrupt entry code that reacts to this flag to make us return to a different location (sets NIP to LINK in our case).
  v2: Fix lockdep bug. Re-mask interrupts when coming back from idle.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/book3e: Resend doorbell exceptions to ourselves (Michael Ellerman, 2010-07-09)
  If we are soft-disabled and receive a doorbell exception we don't process it immediately. This means we need to check on the way out of irq restore whether there are any doorbell exceptions to process. The problem is that at that point we don't know what our regs are, and that in turn makes xmon unhappy. To work around the problem, instead of checking for and processing doorbells, we check for any doorbells and if there were any we send ourselves another.
  Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/book3e: Use set_irq_regs() in the msgsnd/msgrcv IPI path (David Gibson, 2010-07-09)
  include/asm-generic/irq_regs.h declares per-cpu irq_regs variables and the get_irq_regs() and set_irq_regs() helper functions to maintain them. These can be used to access the proper pt_regs structure related to the current interrupt entry (if any). In the powerpc arch code, this is used to maintain irq regs on decrementer and external interrupt exceptions.
  However, for the doorbell exceptions used by the msgsnd/msgrcv IPI mechanism of newer BookE CPUs, the irq_regs are not kept up to date. In particular this means that xmon will not work properly on SMP, because the secondary xmon instances started by IPI will blow up when they cannot retrieve the irq regs.
  This patch fixes the problem by adding calls to maintain the irq regs across doorbell exceptions.
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/book3e: Hook up doorbell exceptions on 64-bit Book3E (Benjamin Herrenschmidt, 2010-07-09)
  Note that critical doorbells are an unimplemented stub just like other critical or machine check handlers, since we haven't done support for "levelled" exceptions yet.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/book3e: Don't re-trigger decrementer on lazy irq restore (Benjamin Herrenschmidt, 2010-07-09)
  The decrementer on BookE acts as a level interrupt and doesn't need to be re-triggered when going negative. It doesn't go negative anyway (unless programmed to auto-reload with a negative value), as it stops when reaching 0.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/book3e: More doorbell cleanups. Sample the PIR register (Benjamin Herrenschmidt, 2010-07-09)
  The doorbells use the content of the PIR register to match messages from other CPUs. This may or may not be the same as our Linux CPU number, so using that as the "target" is not right. Instead, we sample the PIR register at boot on every processor and use that value subsequently when sending IPIs. We also use a per-cpu message mask rather than a global array, which should limit cache line contention.
  Note: we could use the CPU number in the device tree instead of the PIR register, as they are supposed to be equivalent. This might prove useful if doorbells are to be used to kick CPUs out of FW at boot time, thus before we can sample the PIR. This is however not the case now, and using the PIR just works.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/book3e: Move doorbell_exception from traps.c to dbell.c (Benjamin Herrenschmidt, 2010-07-09)
  ... where it belongs.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/book3e: Hack to get gdb moving along on Book3E 64-bit (Benjamin Herrenschmidt, 2010-07-09)
  Our handling of debug interrupts on Book3E 64-bit is not quite the way it should be just yet. This is a workaround to let gdb work at least for now. We ensure that when context switching, we set the appropriate DBCR0 value for the new task. We also make sure that we turn off MSR[DE] within the kernel, and set it as part of the bits that get set when going back to userspace.
  In the long run, we will probably set the userspace DBCR0 on the exception exit code path and ensure we have some proper kernel value to set on the way into the kernel, a bit like ppc32 does, but that will take more work.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/book3e: mtmsr should not be mtmsrd on book3e 64-bit (Benjamin Herrenschmidt, 2010-07-09)
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Removing dead CONFIG_SMP_750 (Christoph Egger, 2010-07-08)
  CONFIG_SMP_750 doesn't exist in Kconfig, therefore remove all references to it from the source code.
  Signed-off-by: Christoph Egger <siccegge@cs.fau.de>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/pseries: Use kstrdup (Julia Lawall, 2010-07-08)
  Use kstrdup when the goal of an allocation is to copy a string into the allocated region.
  The semantic patch that makes this change is as follows (http://coccinelle.lip6.fr/):

    // <smpl>
    @@
    expression from,to;
    expression flag,E1,E2;
    statement S;
    @@

    -  to = kmalloc(strlen(from) + 1,flag);
    +  to = kstrdup(from, flag);
       ... when != \(from = E1 \| to = E1 \)
       if (to==NULL || ...) S
       ... when != \(from = E2 \| to = E2 \)
    -  strcpy(to, from);
    // </smpl>

  Signed-off-by: Julia Lawall <julia@diku.dk>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
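  As a concrete illustration of what the semantic patch rewrites, here is a minimal before/after sketch in kernel C; the copy_name_*() helpers are invented for the example and are not part of the patch.

    #include <linux/slab.h>
    #include <linux/string.h>

    /* Before: open-coded allocate-and-copy. */
    static char *copy_name_old(const char *src)
    {
            char *dst = kmalloc(strlen(src) + 1, GFP_KERNEL);

            if (!dst)
                    return NULL;
            strcpy(dst, src);
            return dst;
    }

    /* After: kstrdup() performs the same allocation and copy in one call. */
    static char *copy_name_new(const char *src)
    {
            return kstrdup(src, GFP_KERNEL);
    }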
* powerpc/iseries: Use kstrdup (Julia Lawall, 2010-07-08)
  Use kstrdup when the goal of an allocation is to copy a string into the allocated region.
  The semantic patch that makes this change is as follows (http://coccinelle.lip6.fr/):

    // <smpl>
    @@
    expression from,to;
    expression flag,E1,E2;
    statement S;
    @@

    -  to = kmalloc(strlen(from) + 1,flag);
    +  to = kstrdup(from, flag);
       ... when != \(from = E1 \| to = E1 \)
       if (to==NULL || ...) S
       ... when != \(from = E2 \| to = E2 \)
    -  strcpy(to, from);
    // </smpl>

  Signed-off-by: Julia Lawall <julia@diku.dk>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/numa: Use form 1 affinity to setup node distance (Anton Blanchard, 2010-07-08)
  Form 1 affinity allows multiple entries in ibm,associativity-reference-points, which represent affinity domains in decreasing order of importance. The Linux concept of a node is always the first entry, but using the other values as an input to node_distance() allows the memory allocator to make better decisions on which node to go to first when local memory has been exhausted.
  We keep things simple and create an array indexed by NUMA node, capped at 4 entries. Each time we look up an associativity property we initialise the array, which is overkill, but since we should only hit this path during boot it didn't seem worth adding a per-node valid bit.
  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
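  The resulting distance calculation is roughly of the following shape; this is a sketch based on the description above, not the verbatim patch, and names such as distance_lookup_table and distance_ref_points_depth are assumptions.

    #include <linux/numa.h>
    #include <linux/topology.h>

    static int distance_lookup_table[MAX_NUMNODES][4];  /* capped at 4 entries */
    static int distance_ref_points_depth;

    int __node_distance(int a, int b)
    {
            int i, distance = LOCAL_DISTANCE;

            /* Walk the affinity domains from most to least important and
             * grow the distance for each level at which the nodes differ. */
            for (i = 0; i < distance_ref_points_depth; i++) {
                    if (distance_lookup_table[a][i] == distance_lookup_table[b][i])
                            break;
                    distance *= 2;
            }
            return distance;
    }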
* powerpc: Remove all rcu head initializations (Paul E. McKenney, 2010-07-08)
  Remove all rcu head inits. We don't care about the RCU head state before passing it to call_rcu() anyway. Only leave the "on_stack" variants so debugobjects can keep track of objects on stack.
  Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Add i8042 keyboard and mouse irq parsing (Martyn Welch, 2010-07-08)
  Currently the irqs for the i8042, which historically provides keyboard and mouse (aux) support, are hardwired in the driver rather than parsed from the dts. This patch modifies the powerpc legacy IO code to attempt to parse the device tree for this information, falling back to the hardcoded values if that fails.
  Signed-off-by: Martyn Welch <martyn.welch@ge.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
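  The device-tree lookup amounts to something like the sketch below. The use of of_find_compatible_node() and irq_of_parse_and_map() reflects common powerpc practice; the compatible strings and the fallback interrupt numbers are illustrative assumptions, not quoted from the patch.

    #include <linux/of.h>
    #include <linux/of_irq.h>

    static int __init find_i8042_irq(const char *compat, int fallback)
    {
            struct device_node *np = of_find_compatible_node(NULL, NULL, compat);
            int irq = fallback;

            if (np) {
                    int virq = irq_of_parse_and_map(np, 0);

                    if (virq)
                            irq = virq;       /* use the dts-provided interrupt */
                    of_node_put(np);
            }
            return irq;                       /* otherwise keep the legacy value */
    }

    /* keyboard nodes are conventionally "pnpPNP,303", mouse (aux) "pnpPNP,f03":
     *   i8042_kbd_irq = find_i8042_irq("pnpPNP,303", 1);
     *   i8042_aux_irq = find_i8042_irq("pnpPNP,f03", 12);
     */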
* powerpc/pseries: Add WARN_ON() to request_event_sources_irqs() on irq allocation/request failure (Mark Nelson, 2010-07-08)
  At the moment, if request_event_sources_irqs() can't allocate or request the interrupt, it just does a KERN_ERR printk. This may be fine for the existing RAS code, where if we miss an EPOW event it just means that the event won't be logged, and if we miss one of the RAS errors then we could miss an event that we perhaps should take action on.
  But for the upcoming IO events code that will use event-sources, failing to allocate or request the interrupt means we'd potentially miss an interrupt from the device. So, let's add a WARN_ON() in this error case so that we're a bit more vocal when something's amiss.
  While we're at it, also use pr_err() to neaten the code up a bit.
  Signed-off-by: Mark Nelson <markn@au1.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/pseries: Rename RAS_VECTOR_OFFSET to RTAS_VECTOR_EXTERNAL_INTERRUPT and move to rtas.h (Mark Nelson, 2010-07-08)
  The RAS code has a #define, RAS_VECTOR_OFFSET, that's used in the check-exception RTAS call for the vector offset of the exception. We'll be using this same vector offset for the upcoming IO Event interrupts code (0x500), so let's move it to include/asm/rtas.h and call it RTAS_VECTOR_EXTERNAL_INTERRUPT.
  Signed-off-by: Mark Nelson <markn@au1.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Optimise per cpu accesses on 64bit (Anton Blanchard, 2010-07-08)
  Now that we dynamically allocate the paca array, it takes an extra load whenever we want to access another cpu's paca. One place we do that a lot is per cpu variables. A simple example:

    DEFINE_PER_CPU(unsigned long, vara);

    unsigned long test4(int cpu)
    {
            return per_cpu(vara, cpu);
    }

  This takes 4 loads, 5 if you include the actual load of the per cpu variable:

    ld r11,-32760(r30)   # load address of paca pointer
    ld r9,-32768(r30)    # load link address of percpu variable
    sldi r3,r29,9        # get offset into paca (each entry is 512 bytes)
    ld r0,0(r11)         # load paca pointer
    add r3,r0,r3         # paca + offset
    ld r11,64(r3)        # load paca[cpu].data_offset
    ldx r3,r9,r11        # load per cpu variable

  If we remove the ppc64-specific per_cpu_offset(), we get the generic one, which indexes into a statically allocated array. This removes one load and one add:

    ld r11,-32760(r30)   # load address of __per_cpu_offset
    ld r9,-32768(r30)    # load link address of percpu variable
    sldi r3,r29,3        # get offset into __per_cpu_offset (each entry 8 bytes)
    ldx r11,r11,r3       # load __per_cpu_offset[cpu]
    ldx r3,r9,r11        # load per cpu variable

  Having all the offsets in one array also helps when iterating over a per cpu variable across a number of cpus, such as in the scheduler. Before, we would need to load one paca cacheline when calculating each per cpu offset. Now we have 16 (128 / sizeof(long)) per cpu offsets in each cacheline.
  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
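  For reference, the generic per-cpu offset scheme being switched to boils down to a flat array of offsets indexed by CPU number, roughly along the lines of asm-generic/percpu.h (a sketch, not the exact header text):

    /* One offset per CPU, filled in at boot. */
    extern unsigned long __per_cpu_offset[NR_CPUS];
    #define per_cpu_offset(cpu) (__per_cpu_offset[cpu])

    /* per_cpu(var, cpu) then reduces to one array load plus an indexed
     * access, instead of chasing through the remote cpu's paca. */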
* powerpc/iseries: Fix constant warning (Denis Kirjanov, 2010-07-08)
  Fix smatch warning: constant 0x8000000000000000 is so big it is unsigned long.
  Signed-off-by: Denis Kirjanov <dkirjanov@kernel.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/mpic: Add ability to reset a core via MPIC (Matthew McClintock, 2010-07-08)
  We need the ability to reset cores for use with kexec/kdump on SMP systems. Calling this function with the specific core you want to reset will cause the CPU to spin in reset.
  Signed-off-by: Matthew McClintock <msm@freescale.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/fsl-booke: Fix comments in mmu code that mention BATs (Becky Bruce, 2010-07-08)
  There are no BATs on BookE - we have the TLBCAM instead. Also correct the page size information to include extended sizes. We don't actually allow a 4G page size to be used, so comment on that as well.
  Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/pseries/eeh: Use for_each_pci_dev() (Kulikov Vasiliy, 2010-07-08)
  Use for_each_pci_dev() to simplify the code.
  Signed-off-by: Kulikov Vasiliy <segooon@gmail.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* hvc_console: use "*_console" nomenclature to avoid modpost warning (Chris Metcalf, 2010-07-08)
  The use of "hvc_con_driver" as the name for a file-static "struct console" with a ".setup" field pointing to an __init function causes a modpost warning, since a non-initdata structure points to init code. Using "hvc_console" as the name triggers the hacky "*_console" workaround in modpost to silence the warning, and is the same thing that most of the other console drivers already do.
  I made the same change in hvsi.c since I happened to notice it was likely to suffer from the same problem.
  Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/pseries: Partition hibernation support (Brian King, 2010-07-08)
  Enables support for HMC initiated partition hibernation. This is a firmware assisted hibernation, since the firmware handles writing the memory out to disk, along with other partition information, so we just mimic suspend to ram.
  Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/pseries: Migration code reorganization / hibernation prep (Brian King, 2010-07-08)
  Partition hibernation will use some of the same code as is currently used for Live Partition Migration. This further abstracts that code so that code outside of rtas.c can utilize it. It also changes the error field in the suspend me data structure to be an atomic type, since it is set and checked on different cpus without any barriers or locking.
  Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Clean up obsolete code relating to decrementer and timebase (Paul Mackerras, 2010-07-08)
  Since the decrementer and timekeeping code was moved over to using the generic clockevents and timekeeping infrastructure, several variables and functions have been obsolete and effectively unused. This deletes them.
  In particular, wakeup_decrementer() is no longer needed since the generic code reprograms the decrementer as part of the process of resuming the timekeeping code, which happens during sysdev resume. Thus the wakeup_decrementer calls in the suspend_enter methods for 52xx platforms have been removed. The call in the powermac cpu frequency change code has been replaced by set_dec(1), which will cause a timer interrupt as soon as interrupts are enabled, and the generic code will then reprogram the decrementer with the correct value.
  This also simplifies the generic_suspend_en/disable_irqs functions and makes them static since they are not referenced outside time.c. The preempt_enable/disable calls are removed because the generic code has disabled all but the boot cpu at the point where these functions are called, so we can't be moved to another cpu.
  Signed-off-by: Paul Mackerras <paulus@samba.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Rework VDSO gettimeofday to prevent time going backwards (Paul Mackerras, 2010-07-08)
  Currently it is possible for userspace to see the result of gettimeofday() going backwards by 1 microsecond, assuming that userspace is using the gettimeofday() in the VDSO. The VDSO gettimeofday() algorithm computes the time in "xsecs", which are units of 2^-20 seconds, or approximately 0.954 microseconds, using the algorithm

    now = (timebase - tb_orig_stamp) * tb_to_xs + stamp_xsec

  and then converts the time in xsecs to seconds and microseconds.
  The kernel updates the tb_orig_stamp and stamp_xsec values every tick in update_vsyscall(). If the length of the tick is not an integer number of xsecs, then some precision is lost in converting the current time to xsecs. For example, with CONFIG_HZ=1000, the tick is 1ms long, which is 1048.576 xsecs. That means that stamp_xsec will advance by either 1048 or 1049 on each tick. With the right conditions, it is possible for userspace to get (timebase - tb_orig_stamp) * tb_to_xs being 1049 if the kernel is slightly late in updating the vdso_datapage, and then for stamp_xsec to advance by 1048 when the kernel does update it, and for userspace to then see (timebase - tb_orig_stamp) * tb_to_xs being zero due to integer truncation. The result is that time appears to go backwards by 1 microsecond.
  To fix this we change the VDSO gettimeofday to use a new field in the VDSO datapage which stores the nanoseconds part of the time as a fractional number of seconds in a 0.32 binary fraction format. (Or put another way, as a 32-bit number in units of 0.23283 ns.) This is convenient because we can use the mulhwu instruction to convert it to either microseconds or nanoseconds.
  Since it turns out that computing the time of day using this new field is simpler than either using stamp_xsec (as gettimeofday does) or stamp_xtime.tv_nsec (as clock_gettime does), this converts both gettimeofday and clock_gettime to use the new field. The existing __do_get_tspec function is converted to use the new field and take a parameter in r7 that indicates the desired resolution, 1,000,000 for microseconds or 1,000,000,000 for nanoseconds. The __do_get_xsec function is then unused and is deleted.
  The new algorithm is

    now = ((timebase - tb_orig_stamp) << 12) * tb_to_xs
          + (stamp_xtime_seconds << 32) + stamp_sec_fraction

  with 'now' in units of 2^-32 seconds. That is then converted to seconds and either microseconds or nanoseconds with

    seconds = now >> 32
    partseconds = ((now & 0xffffffff) * resolution) >> 32

  The 32-bit VDSO code also makes a further simplification: it ignores the bottom 32 bits of the tb_to_xs value, which is a 0.64 format binary fraction. Doing so gets rid of 4 multiply instructions. Assuming a timebase frequency of 1GHz or less and an update interval of no more than 10ms, the upper 32 bits of tb_to_xs will be at least 4503599, so the error from ignoring the low 32 bits will be at most 2.2ns, which is more than an order of magnitude less than the time taken to do gettimeofday or clock_gettime on our fastest processors, so there is no possibility of seeing inconsistent values due to this.
  This also moves update_gtod() down next to its only caller, and makes update_vsyscall use the time passed in via the wall_time argument rather than accessing xtime directly. At present, wall_time always points to xtime, but that could change in future.
  Signed-off-by: Paul Mackerras <paulus@samba.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
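  The fixed-point arithmetic above can be written out in C as follows. This is a user-space stand-in for the VDSO assembly, assuming a 64-bit compiler with __int128 support; variable names mirror the vdso_datapage fields described in the commit message, not the actual VDSO source.

    #include <stdint.h>

    /* High 64 bits of a 64x64-bit multiply: what the mulhdu instruction does. */
    static uint64_t mulhdu(uint64_t a, uint64_t b)
    {
            return (uint64_t)(((unsigned __int128)a * b) >> 64);
    }

    /* Returns whole seconds; *part receives micro- or nanoseconds depending
     * on resolution (1000000 or 1000000000), as in __do_get_tspec. */
    static uint64_t get_tspec(uint64_t timebase, uint64_t tb_orig_stamp,
                              uint64_t tb_to_xs,           /* 0.64 binary fraction */
                              uint32_t stamp_xtime_sec,
                              uint32_t stamp_sec_fraction, /* 0.32 binary fraction */
                              uint32_t resolution, uint32_t *part)
    {
            /* 'now' is a 32.32 fixed-point count of seconds. */
            uint64_t now = mulhdu((timebase - tb_orig_stamp) << 12, tb_to_xs)
                         + ((uint64_t)stamp_xtime_sec << 32)
                         + stamp_sec_fraction;

            *part = (uint32_t)(((now & 0xffffffffULL) * resolution) >> 32);
            return now >> 32;
    }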
* Merge commit 'paulus-perf/master' into next (Benjamin Herrenschmidt, 2010-07-08)
* powerpc, hw_breakpoint: Tell generic code we have no instruction breakpoints (Paul Mackerras, 2010-06-29)
  At present, hw_breakpoint_slots() returns 1 regardless of what type of breakpoint is specified in the type argument. Since we don't define CONFIG_HAVE_MIXED_BREAKPOINTS_REGS, there are separate values for TYPE_INST and TYPE_DATA, and hw_breakpoint_slots() returns 1 for both, effectively advertising instruction breakpoint support which doesn't exist. This fixes it by making hw_breakpoint_slots() return 1 for TYPE_DATA and 0 for TYPE_INST.
  This moves hw_breakpoint_slots() from the powerpc hw_breakpoint.h to hw_breakpoint.c because the definitions of TYPE_INST and TYPE_DATA aren't available in <asm/hw_breakpoint.h>. They are defined in <linux/hw_breakpoint.h> but we can't include that header in <asm/hw_breakpoint.h>, and nor can we rely on <linux/hw_breakpoint.h> being included before <asm/hw_breakpoint.h>. Since hw_breakpoint_slots() is only called at boot time, there is no performance impact from making it a real function rather than a static inline.
  Signed-off-by: Paul Mackerras <paulus@samba.org>
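  The shape of the resulting function is roughly the following sketch; the use of HBP_NUM for the data case is an assumption based on the description, not the verbatim patch.

    /* arch/powerpc/kernel/hw_breakpoint.c -- sketch of the fix */
    int hw_breakpoint_slots(int type)
    {
            if (type == TYPE_DATA)
                    return HBP_NUM;   /* the single DABR-based data breakpoint */
            return 0;                 /* no instruction breakpoints */
    }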
* powerpc, hw_breakpoint: Cooperate better with other single-steppers (Paul Mackerras, 2010-06-23)
  The code we had to clear the MSR_SE bit was not doing anything because the caller (ultimately single_step_exception() in traps.c) had already cleared it. Instead of trying to leave MSR_SE set if the TIF_SINGLESTEP flag is set (which indicates that the process is being single-stepped by ptrace), we instead return NOTIFY_DONE in that case, which means the caller will generate a SIGTRAP for the process.
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* powerpc, hw_breakpoint: Fix off-by-one in checking access address (Paul Mackerras, 2010-06-23)
  The code would accept an access to an address one byte past the end of the requested range as legitimate, due to having a "<=" rather than a "<". This fixes that and cleans up the code a bit.
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* powerpc, hw_breakpoint: Discard extraneous interrupt due to accesses outside symbol length (K.Prasad, 2010-06-22)
  Often, the requested breakpoint length can be less than the fixed breakpoint length, i.e. 8 bytes, supported by PowerPC 64-bit server (Book III S) processors. This could lead to extraneous interrupts resulting in false breakpoint notifications. This detects and discards such interrupts for non-ptrace requests. We don't change ptrace behaviour to avoid breaking compatibility.
  [Suggestion from Paul Mackerras <paulus@samba.org> to add a new flag in 'struct arch_hw_breakpoint' to identify extraneous interrupts]
  Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* powerpc, hw_breakpoint: Enable hw-breakpoints while handling intervening signals (K.Prasad, 2010-06-22)
  A signal delivered between a hw_breakpoint_handler() and the single_step_dabr_instruction() will not have the breakpoint active while the signal handler is running -- the signal delivery will set up a new MSR value which will not have MSR_SE set, so we won't get the single-step interrupt until and unless the signal handler returns (which it may never do).
  To fix this, we restore the breakpoint when delivering a signal -- we clear the MSR_SE bit and set the DABR again. If the signal handler returns, the DABR interrupt will occur again when the instruction that we were originally trying to single-step gets re-executed.
  [Paul Mackerras <paulus@samba.org> pointed out the need to do this.]
  Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* powerpc, hw_breakpoint: Handle concurrent alignment interrupts (K.Prasad, 2010-06-22)
  If an alignment interrupt occurs on an instruction that is being single-stepped, the alignment interrupt handler currently handles the single-step condition by unconditionally sending a SIGTRAP to the process. Other synchronous interrupts that result in the instruction being emulated do likewise.
  With hw_breakpoint support, the hw_breakpoint code needs to be able to intercept these single-step events as well as those where the instruction executes normally and a trace interrupt happens. Fix this by making emulate_single_step() use the existing single_step_exception() function instead of calling _exception() directly. We then make single_step_exception() use the abstracted clear_single_step() rather than clearing bits in the MSR image directly, so that emulate_single_step() will continue to work correctly on Book 3E processors.
  Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* powerpc, hw_breakpoints: Implement hw_breakpoints for 64-bit server processors (K.Prasad, 2010-06-22)
  Implement perf-events based hw-breakpoint interfaces for PowerPC 64-bit server (Book III S) processors. This allows access to a given location to be used as an event that can be counted or profiled by the perf_events subsystem.
  This is done using the DABR (data breakpoint register), which can also be used for process debugging via ptrace. When perf_event hw_breakpoint support is configured in, the perf_event subsystem manages the DABR and arbitrates access to it, and ptrace then creates a perf_event when it is requested to set a data breakpoint.
  [Adopted suggestions from Paul Mackerras <paulus@samba.org> to
   - emulate_step() all system-wide breakpoints and single-step only the per-task breakpoints
   - perform arch-specific cleanup before unregistration through arch_unregister_hw_breakpoint()]
  Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
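  From user space, this makes a data watchpoint look like any other perf event. A minimal sketch using the standard perf_event_open() breakpoint attributes (error handling elided; the watched variable is illustrative):

    #include <linux/hw_breakpoint.h>
    #include <linux/perf_event.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static long watched;   /* illustrative target variable */

    int watch_writes(void)
    {
            struct perf_event_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.type          = PERF_TYPE_BREAKPOINT;
            attr.size          = sizeof(attr);
            attr.bp_type       = HW_BREAKPOINT_W;        /* trap on writes */
            attr.bp_addr       = (uint64_t)(uintptr_t)&watched;
            attr.bp_len        = HW_BREAKPOINT_LEN_8;    /* DABR granularity */
            attr.sample_period = 1;

            /* one event, this task, any CPU */
            return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
    }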
* hw_breakpoints: Allow arch-specific cleanup before breakpoint unregistration (K.Prasad, 2010-06-22)
  Certain architectures (such as PowerPC) have a need to clean up data structures before a breakpoint is unregistered. This introduces an arch-specific hook in release_bp_slot() along with a weak definition in the form of a stub function.
  Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
  Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
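  A minimal sketch of what such a weak stub looks like; the hook name is taken from the powerpc patch above, and the empty default body is an assumption based on the description.

    /* kernel/hw_breakpoint.c -- generic weak default, overridden by powerpc */
    void __weak arch_unregister_hw_breakpoint(struct perf_event *bp)
    {
            /* nothing to do by default; release_bp_slot() calls this just
             * before the breakpoint slot is handed back */
    }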
* powerpc: Emulate most Book I instructions in emulate_step() (Paul Mackerras, 2010-06-22)
  This extends the emulate_step() function to handle a large proportion of the Book I instructions implemented on current 64-bit server processors. The aim is to handle all the load and store instructions used in the kernel, plus all of the instructions that appear between l[wd]arx and st[wd]cx., so this handles the Altivec/VMX lvx and stvx and the VSX lxv2dx and stxv2dx instructions (implemented in POWER7).
  The new code can emulate user mode instructions, and checks the effective address for a load or store if the saved state is for user mode. It doesn't handle little-endian mode at present.
  For floating-point, Altivec/VMX and VSX instructions, it checks that the saved MSR has the enable bit for the relevant facility set, and if so, assumes that the FP/VMX/VSX registers contain valid state, and does loads or stores directly to/from the FP/VMX/VSX registers, using assembly helpers in ldstfp.S.
  Instructions supported now include:
  * Loads and stores, including some but not all VMX and VSX instructions, and lmw/stmw
  * Atomic loads and stores (l[dw]arx, st[dw]cx.)
  * Arithmetic instructions (add, subtract, multiply, divide, etc.)
  * Compare instructions
  * Rotate and mask instructions
  * Shift instructions
  * Logical instructions (and, or, xor, etc.)
  * Condition register logical instructions
  * mtcrf, cntlz[wd], exts[bhw]
  * isync, sync, lwsync, ptesync, eieio
  * Cache operations (dcbf, dcbst, dcbt, dcbtst)
  The overflow-checking arithmetic instructions are not included, but they appear not to be ever used in C code.
  This uses decimal values for the minor opcodes in the switch statements because that is what appears in the Power ISA specification, thus it is easier to check that they are correct if they are in decimal.
  If this is used to single-step an instruction where a data breakpoint interrupt occurred, then there is the possibility that the instruction is a lwarx or ldarx. In that case we have to be careful not to lose the reservation until we get to the matching st[wd]cx., or we'll never make forward progress. One alternative is to try to arrange that we can return from interrupts and handle data breakpoint interrupts without losing the reservation, which means not using any spinlocks, mutexes, or atomic ops (including bitops). That seems rather fragile. The other alternative is to emulate the larx/stcx and all the instructions in between. This is why this commit adds support for a wide range of integer instructions.
  Signed-off-by: Paul Mackerras <paulus@samba.org>
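  For context, a hedged sketch of how a caller such as the hw_breakpoint handler can use it; the return-value convention (positive means the instruction was emulated and regs->nip advanced) is stated here as an assumption rather than quoted from the source.

    /* Sketch of the usage pattern, not kernel code: try to emulate the
     * instruction at regs->nip instead of hardware single-stepping it. */
    static int try_emulate(struct pt_regs *regs)
    {
            unsigned int instr;

            if (probe_kernel_address((void *)regs->nip, instr))
                    return 0;          /* couldn't read the instruction */
            if (emulate_step(regs, instr) > 0)
                    return 1;          /* emulated; regs->nip already advanced */
            return 0;                  /* caller falls back to MSR_SE stepping */
    }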
* Merge branch 'for-linus' of git://git.monstr.eu/linux-2.6-microblaze (Linus Torvalds, 2010-08-05)
  * 'for-linus' of git://git.monstr.eu/linux-2.6-microblaze: (49 commits)
      microblaze: Add KGDB support
      microblaze: Support brki rX, 0x18 for user application debugging
      microblaze: Remove nop after MSRCLR/SET, MTS, MFS instructions
      microblaze: Simplify syscall routine
      microblaze: Move PT_MODE saving to delay slot
      microblaze: Fix _interrupt function
      microblaze: Fix _user_exception function
      microblaze: Put together addik instructions
      microblaze: Use delay slot in syscall macros
      microblaze: Save kernel mode in delay slot
      microblaze: Do not mix register saving and mode setting
      microblaze: Move SAVE_STATE upward
      microblaze: entry.S: Macro optimization
      microblaze: Optimize hw exception routine
      microblaze: Implement clear_ums macro and fix SAVE_STATE macro
      microblaze: Remove additional setup for kernel_mode
      microblaze: Optimize SAVE_STATE macro
      microblaze: Remove additional loading
      microblaze: Completely remove working with R11 register
      microblaze: Do not setup BIP in _debug_exception
      ...
* microblaze: Add KGDB support (Michal Simek, 2010-08-04)
  KGDB uses the brki r16, 0x18 instruction to call the low level _debug_exception function, which saves the current state to pt_regs and calls the microblaze_kgdb_break function. _debug_exception should be called only from kernel space. User space calling is not supported because user application debugging uses different handling. pt_regs_to_gdb_regs loads additional special registers which can't be changed.
  * Enable KGDB in Kconfig
  * Remove ancient not-tested KGDB support
  * Remove ancient _debug_exception code from entry.S
  Only MMU KGDB support is supported.
  Signed-off-by: Michal Simek <monstr@monstr.eu>
  CC: Jason Wessel <jason.wessel@windriver.com>
  CC: John Williams <john.williams@petalogix.com>
  CC: Edgar E. Iglesias <edgar.iglesias@petalogix.com>
  CC: linux-kernel@vger.kernel.org
  Acked-by: Jason Wessel <jason.wessel@windriver.com>
* microblaze: Support brki rX, 0x18 for user application debugging (Michal Simek, 2010-08-04)
  This is the first patch which adds support for user application debugging through the brki rX, 0x18 vector. This patch has a side effect of also removing a security issue that allowed brki rX, 0x18 to be used to freeze the kernel.
  Support for old gdb debugging via the privileged exception (brk r0, r0) is still there. It will be removed in the future.
  Signed-off-by: Michal Simek <monstr@monstr.eu>
* microblaze: Remove nop after MSRCLR/SET, MTS, MFS instructions (Michal Simek, 2010-08-04)
  We need to save an instruction, and the latest Microblaze shouldn't have any problem with it.
  Signed-off-by: Michal Simek <monstr@monstr.eu>
* microblaze: Simplify syscall routine (Michal Simek, 2010-08-04)
  Syscalls can be called only from userspace, which is why we don't need to check which space the kernel came from. Kernel syscall calling is not checked and shouldn't come through this part of the code.
  Signed-off-by: Michal Simek <monstr@monstr.eu>
* microblaze: Move PT_MODE saving to delay slot (Michal Simek, 2010-08-04)
  We can save one more instruction if PT_MODE is saved in the delay slot.
  Signed-off-by: Michal Simek <monstr@monstr.eu>
* microblaze: Fix _interrupt function (Michal Simek, 2010-08-04)
  Save instructions by using the delay slot, and clear UMS only if the kernel comes from user space.
  Signed-off-by: Michal Simek <monstr@monstr.eu>
* microblaze: Fix _user_exception function (Michal Simek, 2010-08-04)
  Saves some instructions. Clear the VMS bit if the kernel comes from kernel space.
  Signed-off-by: Michal Simek <monstr@monstr.eu>
* microblaze: Put together addik instructions (Michal Simek, 2010-08-04)
  Save instructions by merging 2-3 addik instructions into one.
  Signed-off-by: Michal Simek <monstr@monstr.eu>