commit 133d970e0dadf7b413db19893acc5b26664bf4a1 (patch)
author    Linus Torvalds <torvalds@linux-foundation.org>  2016-10-15 12:26:12 -0400
committer Linus Torvalds <torvalds@linux-foundation.org>  2016-10-15 12:26:12 -0400
tree      ea10732ca1d0f663ef1319973947a7c72cf170e7 /arch/mips/kernel
parent    050aaeab99067b6a08b34274ff15ca5dbb94a160 (diff)
parent    38b8767462120c62a5046b529c80b06861f9ac85 (diff)
Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus
Pull MIPS updates from Ralf Baechle:
"This is the main MIPS pull request for 4.9:
MIPS core arch code:
- traps: 64bit kernels should read CP0_EBase 64bit
- traps: Convert ebase to KSEG0
- c-r4k: Drop bc_wback_inv() from icache flush
- c-r4k: Split user/kernel flush_icache_range()
- cacheflush: Use __flush_icache_user_range()
- uprobes: Flush icache via kernel address
- KVM: Use __local_flush_icache_user_range()
- c-r4k: Fix flush_icache_range() for EVA
- Fix -mabi=64 build of vdso.lds
- VDSO: Drop duplicated -I*/-E* aflags
- tracing: move insn_has_delay_slot to a shared header
- tracing: disable uprobe/kprobe on compact branch instructions
- ptrace: Fix regs_return_value for kernel context
- Squash lines for simple wrapper functions
- Move identification of VP(E) into proc.c from smp-mt.c
- Add definitions of SYNC barrier type values
- traps: Ensure full EBase is written
- tlb-r4k: If there are wired entries, don't use TLBINVF
- Sanitise coherentio semantics
- dma-default: Don't check hw_coherentio if device is non-coherent
- Support per-device DMA coherence
- Adjust MIPS64 CAC_BASE to reflect Config.K0
- Support generating Flattened Image Trees (.itb)
- generic: Introduce generic DT-based board support
- generic: Convert SEAD-3 to a generic board
- Enable hardened usercopy
- Don't specify STACKPROTECTOR in defconfigs
Octeon:
- Delete dead code and files across the platform.
- Take all memory into use by default.
- Rename upper case variables in setup code to lowercase.
- Delete legacy hack for broken bootloaders.
- Leave maintaining the link state to the actual ethernet/PHY drivers.
- Add DTS for D-Link DSR-500N.
- Fix PCI interrupt routing on D-Link DSR-500N.
Pistachio:
- Remove ANDROID_TIMED_OUTPUT from defconfig
TX39xx:
- Move GPIO setup from .mem_setup() to .arch_init()
- Convert to Common Clock Framework
TX49xx:
- Move GPIO setup from .mem_setup() to .arch_init()
- Convert to Common Clock Framework
txx9wdt:
- Add missing clock (un)prepare calls for CCF
BMIPS:
- Add PW, GPIO, SDHCI and NAND device node names
- Support APPENDED_DTB
- Add missing bcm97435svmb to DT_NONE
- Rename bcm96358nb4ser to bcm6358-neufbox4-sercom
- Add DT examples for BCM63268, BCM3368 and BCM6362
- Add support for BCM3368 and BCM6362
PCI:
- Reduce stack frame usage
- Use struct list_head lists
- Support for CONFIG_PCI_DOMAINS_GENERIC
- Make pcibios_set_cache_line_size an initcall
- Inline pcibios_assign_all_busses
- Split pci.c into pci.c & pci-legacy.c
- Introduce CONFIG_PCI_DRIVERS_LEGACY
- Support generic drivers
CPC:
- Convert bare 'unsigned' to 'unsigned int'
- Avoid lock when MIPS CM >= 3 is present
GIC:
- Delete unused file smp-gic.c
mt7620:
- Delete unnecessary assignment for the field "owner" from PCI
BCM63xx:
- Let clk_disable() return immediately if clk is NULL
pm-cps:
- Change FSB workaround to CPU blacklist
- Update comments on barrier instructions
- Use MIPS standard lightweight ordering barrier
- Use MIPS standard completion barrier
- Remove selection of sync types
- Add MIPSr6 CPU support
- Support CM3 changes to Coherence Enable Register
SMP:
- Wrap call to mips_cpc_lock_other in mips_cm_lock_other
- Introduce mechanism for freeing and allocating IPIs
cpuidle:
- cpuidle-cps: Enable use with MIPSr6 CPUs.
SEAD3:
- Rewrite to use DT and generic kernel feature.
USB:
- host: ehci-sead3: Remove SEAD-3 EHCI code
FBDEV:
- cobalt_lcdfb: Drop SEAD3 support
dt-bindings:
- Document a binding for simple ASCII LCDs
auxdisplay:
- img-ascii-lcd: driver for simple ASCII LCD displays
irqchip i8259:
- i8259: Add domain before mapping parent irq
- i8259: Allow platforms to override poll function
- i8259: Remove unused i8259A_irq_pending
Malta:
- Rewrite to use DT
of/platform:
- Probe "isa" busses by default
CM:
- Print CM error reports upon bus errors
Module:
- Migrate exception table users off module.h and onto extable.h
- Make various drivers explicitly non-modular
- Audit and remove any unnecessary uses of module.h
mailmap:
- Canonicalize to Qais' current email address.
Documentation:
- MIPS supports HAVE_REGS_AND_STACK_ACCESS_API
Loongson1C:
- Add CPU support for Loongson1C
- Add board support
- Add defconfig
- Add RTC support for Loongson1C board
All this except one Documentation fix has sat in linux-next and has
survived Imagination's automated build test system"
* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (127 commits)
Documentation: MIPS supports HAVE_REGS_AND_STACK_ACCESS_API
MIPS: ptrace: Fix regs_return_value for kernel context
MIPS: VDSO: Drop duplicated -I*/-E* aflags
MIPS: Fix -mabi=64 build of vdso.lds
MIPS: Enable hardened usercopy
MIPS: generic: Convert SEAD-3 to a generic board
MIPS: generic: Introduce generic DT-based board support
MIPS: Support generating Flattened Image Trees (.itb)
MIPS: Adjust MIPS64 CAC_BASE to reflect Config.K0
MIPS: Print CM error reports upon bus errors
MIPS: Support per-device DMA coherence
MIPS: dma-default: Don't check hw_coherentio if device is non-coherent
MIPS: Sanitise coherentio semantics
MIPS: PCI: Support generic drivers
MIPS: PCI: Introduce CONFIG_PCI_DRIVERS_LEGACY
MIPS: PCI: Split pci.c into pci.c & pci-legacy.c
MIPS: PCI: Inline pcibios_assign_all_busses
MIPS: PCI: Make pcibios_set_cache_line_size an initcall
MIPS: PCI: Support for CONFIG_PCI_DOMAINS_GENERIC
MIPS: PCI: Use struct list_head lists
...
Diffstat (limited to 'arch/mips/kernel')
-rw-r--r--  arch/mips/kernel/binfmt_elfn32.c         8
-rw-r--r--  arch/mips/kernel/binfmt_elfo32.c         8
-rw-r--r--  arch/mips/kernel/branch.c               36
-rw-r--r--  arch/mips/kernel/kprobes.c              67
-rw-r--r--  arch/mips/kernel/linux32.c               1
-rw-r--r--  arch/mips/kernel/mips-cpc.c             17
-rw-r--r--  arch/mips/kernel/mips-r2-to-r6-emul.c    1
-rw-r--r--  arch/mips/kernel/module.c                1
-rw-r--r--  arch/mips/kernel/pm-cps.c              160
-rw-r--r--  arch/mips/kernel/probes-common.h        83
-rw-r--r--  arch/mips/kernel/proc.c                  7
-rw-r--r--  arch/mips/kernel/smp-gic.c              66
-rw-r--r--  arch/mips/kernel/smp-mt.c               23
-rw-r--r--  arch/mips/kernel/smp.c                  65
-rw-r--r--  arch/mips/kernel/traps.c                53
-rw-r--r--  arch/mips/kernel/uprobes.c              88
16 files changed, 345 insertions(+), 339 deletions(-)
diff --git a/arch/mips/kernel/binfmt_elfn32.c b/arch/mips/kernel/binfmt_elfn32.c
index 58ad63d7eb42..9c7f3e136d50 100644
--- a/arch/mips/kernel/binfmt_elfn32.c
+++ b/arch/mips/kernel/binfmt_elfn32.c
@@ -1,5 +1,6 @@
 /*
  * Support for n32 Linux/MIPS ELF binaries.
+ * Author: Ralf Baechle (ralf@linux-mips.org)
  *
  * Copyright (C) 1999, 2001 Ralf Baechle
  * Copyright (C) 1999, 2001 Silicon Graphics, Inc.
@@ -37,7 +38,6 @@ typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG];
 #define ELF_ET_DYN_BASE (TASK32_SIZE / 3 * 2)
 
 #include <asm/processor.h>
-#include <linux/module.h>
 #include <linux/elfcore.h>
 #include <linux/compat.h>
 #include <linux/math64.h>
@@ -96,12 +96,6 @@ jiffies_to_compat_timeval(unsigned long jiffies, struct compat_timeval *value)
 
 #define ELF_CORE_EFLAGS EF_MIPS_ABI2
 
-MODULE_DESCRIPTION("Binary format loader for compatibility with n32 Linux/MIPS binaries");
-MODULE_AUTHOR("Ralf Baechle (ralf@linux-mips.org)");
-
-#undef MODULE_DESCRIPTION
-#undef MODULE_AUTHOR
-
 #undef TASK_SIZE
 #define TASK_SIZE TASK_SIZE32
 
diff --git a/arch/mips/kernel/binfmt_elfo32.c b/arch/mips/kernel/binfmt_elfo32.c
index 49fb881481f7..1ab34322dd97 100644
--- a/arch/mips/kernel/binfmt_elfo32.c
+++ b/arch/mips/kernel/binfmt_elfo32.c
@@ -1,5 +1,6 @@
 /*
  * Support for o32 Linux/MIPS ELF binaries.
+ * Author: Ralf Baechle (ralf@linux-mips.org)
  *
  * Copyright (C) 1999, 2001 Ralf Baechle
  * Copyright (C) 1999, 2001 Silicon Graphics, Inc.
@@ -42,7 +43,6 @@ typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG];
 
 #include <asm/processor.h>
 
-#include <linux/module.h>
 #include <linux/elfcore.h>
 #include <linux/compat.h>
 #include <linux/math64.h>
@@ -99,12 +99,6 @@ jiffies_to_compat_timeval(unsigned long jiffies, struct compat_timeval *value)
 	value->tv_usec = rem / NSEC_PER_USEC;
 }
 
-MODULE_DESCRIPTION("Binary format loader for compatibility with o32 Linux/MIPS binaries");
-MODULE_AUTHOR("Ralf Baechle (ralf@linux-mips.org)");
-
-#undef MODULE_DESCRIPTION
-#undef MODULE_AUTHOR
-
 #undef TASK_SIZE
 #define TASK_SIZE TASK_SIZE32
 
diff --git a/arch/mips/kernel/branch.c b/arch/mips/kernel/branch.c
index 46c227fc98f5..12c718181e5e 100644
--- a/arch/mips/kernel/branch.c
+++ b/arch/mips/kernel/branch.c
@@ -9,7 +9,7 @@
 #include <linux/kernel.h>
 #include <linux/sched.h>
 #include <linux/signal.h>
-#include <linux/module.h>
+#include <linux/export.h>
 #include <asm/branch.h>
 #include <asm/cpu.h>
 #include <asm/cpu-features.h>
@@ -866,3 +866,37 @@ unaligned:
 	force_sig(SIGBUS, current);
 	return -EFAULT;
 }
+
+#if (defined CONFIG_KPROBES) || (defined CONFIG_UPROBES)
+
+int __insn_is_compact_branch(union mips_instruction insn)
+{
+	if (!cpu_has_mips_r6)
+		return 0;
+
+	switch (insn.i_format.opcode) {
+	case blezl_op:
+	case bgtzl_op:
+	case blez_op:
+	case bgtz_op:
+		/*
+		 * blez[l] and bgtz[l] opcodes with non-zero rt
+		 * are MIPS R6 compact branches
+		 */
+		if (insn.i_format.rt)
+			return 1;
+		break;
+	case bc6_op:
+	case balc6_op:
+	case pop10_op:
+	case pop30_op:
+	case pop66_op:
+	case pop76_op:
+		return 1;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(__insn_is_compact_branch);
+
+#endif /* CONFIG_KPROBES || CONFIG_UPROBES */
diff --git a/arch/mips/kernel/kprobes.c b/arch/mips/kernel/kprobes.c
index 212f46f2014e..f5c8bce70db2 100644
--- a/arch/mips/kernel/kprobes.c
+++ b/arch/mips/kernel/kprobes.c
@@ -32,7 +32,8 @@
 #include <asm/ptrace.h>
 #include <asm/branch.h>
 #include <asm/break.h>
-#include <asm/inst.h>
+
+#include "probes-common.h"
 
 static const union mips_instruction breakpoint_insn = {
 	.b_format = {
@@ -55,63 +56,7 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
 
 static int __kprobes insn_has_delayslot(union mips_instruction insn)
 {
-	switch (insn.i_format.opcode) {
-
-	/*
-	 * This group contains:
-	 * jr and jalr are in r_format format.
-	 */
-	case spec_op:
-		switch (insn.r_format.func) {
-		case jr_op:
-		case jalr_op:
-			break;
-		default:
-			goto insn_ok;
-		}
-
-	/*
-	 * This group contains:
-	 * bltz_op, bgez_op, bltzl_op, bgezl_op,
-	 * bltzal_op, bgezal_op, bltzall_op, bgezall_op.
-	 */
-	case bcond_op:
-
-	/*
-	 * These are unconditional and in j_format.
-	 */
-	case jal_op:
-	case j_op:
-
-	/*
-	 * These are conditional and in i_format.
-	 */
-	case beq_op:
-	case beql_op:
-	case bne_op:
-	case bnel_op:
-	case blez_op:
-	case blezl_op:
-	case bgtz_op:
-	case bgtzl_op:
-
-	/*
-	 * These are the FPA/cp1 branch instructions.
-	 */
-	case cop1_op:
-
-#ifdef CONFIG_CPU_CAVIUM_OCTEON
-	case lwc2_op: /* This is bbit0 on Octeon */
-	case ldc2_op: /* This is bbit032 on Octeon */
-	case swc2_op: /* This is bbit1 on Octeon */
-	case sdc2_op: /* This is bbit132 on Octeon */
-#endif
-		return 1;
-	default:
-		break;
-	}
-insn_ok:
-	return 0;
+	return __insn_has_delay_slot(insn);
 }
 
 /*
@@ -161,6 +106,12 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p)
 		goto out;
 	}
 
+	if (__insn_is_compact_branch(insn)) {
+		pr_notice("Kprobes for compact branches are not supported\n");
+		ret = -EINVAL;
+		goto out;
+	}
+
 	/* insn: must be on special executable page on mips. */
 	p->ainsn.insn = get_insn_slot();
 	if (!p->ainsn.insn) {
diff --git a/arch/mips/kernel/linux32.c b/arch/mips/kernel/linux32.c
index 0b29646bcee7..50fb62544df7 100644
--- a/arch/mips/kernel/linux32.c
+++ b/arch/mips/kernel/linux32.c
@@ -26,7 +26,6 @@
 #include <linux/utsname.h>
 #include <linux/personality.h>
 #include <linux/dnotify.h>
-#include <linux/module.h>
 #include <linux/binfmts.h>
 #include <linux/security.h>
 #include <linux/compat.h>
diff --git a/arch/mips/kernel/mips-cpc.c b/arch/mips/kernel/mips-cpc.c
index 566b8d2c092c..2a45867d3b4f 100644
--- a/arch/mips/kernel/mips-cpc.c
+++ b/arch/mips/kernel/mips-cpc.c
@@ -52,7 +52,7 @@ static phys_addr_t mips_cpc_phys_base(void)
 int mips_cpc_probe(void)
 {
 	phys_addr_t addr;
-	unsigned cpu;
+	unsigned int cpu;
 
 	for_each_possible_cpu(cpu)
 		spin_lock_init(&per_cpu(cpc_core_lock, cpu));
@@ -70,7 +70,12 @@ int mips_cpc_probe(void)
 
 void mips_cpc_lock_other(unsigned int core)
 {
-	unsigned curr_core;
+	unsigned int curr_core;
+
+	if (mips_cm_revision() >= CM_REV_CM3)
+		/* Systems with CM >= 3 lock the CPC via mips_cm_lock_other */
+		return;
+
 	preempt_disable();
 	curr_core = current_cpu_data.core;
 	spin_lock_irqsave(&per_cpu(cpc_core_lock, curr_core),
@@ -86,7 +91,13 @@ void mips_cpc_lock_other(unsigned int core)
 
 void mips_cpc_unlock_other(void)
 {
-	unsigned curr_core = current_cpu_data.core;
+	unsigned int curr_core;
+
+	if (mips_cm_revision() >= CM_REV_CM3)
+		/* Systems with CM >= 3 lock the CPC via mips_cm_lock_other */
+		return;
+
+	curr_core = current_cpu_data.core;
 	spin_unlock_irqrestore(&per_cpu(cpc_core_lock, curr_core),
 			       per_cpu(cpc_core_lock_flags, curr_core));
 	preempt_enable();
diff --git a/arch/mips/kernel/mips-r2-to-r6-emul.c b/arch/mips/kernel/mips-r2-to-r6-emul.c
index 0a7e10b5f9e3..22dedd62818a 100644
--- a/arch/mips/kernel/mips-r2-to-r6-emul.c
+++ b/arch/mips/kernel/mips-r2-to-r6-emul.c
@@ -15,7 +15,6 @@
 #include <linux/debugfs.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
-#include <linux/module.h>
 #include <linux/ptrace.h>
 #include <linux/seq_file.h>
 
diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c
index 79850e376ef6..94627a3a6a0d 100644
--- a/arch/mips/kernel/module.c
+++ b/arch/mips/kernel/module.c
@@ -20,6 +20,7 @@
 
 #undef DEBUG
 
+#include <linux/extable.h>
 #include <linux/moduleloader.h>
 #include <linux/elf.h>
 #include <linux/mm.h>
diff --git a/arch/mips/kernel/pm-cps.c b/arch/mips/kernel/pm-cps.c
index 5b31a9405ebc..7cf653e21423 100644
--- a/arch/mips/kernel/pm-cps.c
+++ b/arch/mips/kernel/pm-cps.c
@@ -8,6 +8,7 @@
  * option) any later version.
  */
 
+#include <linux/cpuhotplug.h>
 #include <linux/init.h>
 #include <linux/percpu.h>
 #include <linux/slab.h>
@@ -70,13 +71,8 @@ static DEFINE_PER_CPU_ALIGNED(atomic_t, pm_barrier);
 DEFINE_PER_CPU_ALIGNED(struct mips_static_suspend_state, cps_cpu_state);
 
 /* A somewhat arbitrary number of labels & relocs for uasm */
-static struct uasm_label labels[32] __initdata;
-static struct uasm_reloc relocs[32] __initdata;
-
-/* CPU dependant sync types */
-static unsigned stype_intervention;
-static unsigned stype_memory;
-static unsigned stype_ordering;
+static struct uasm_label labels[32];
+static struct uasm_reloc relocs[32];
 
 enum mips_reg {
 	zero, at, v0, v1, a0, a1, a2, a3,
@@ -134,7 +130,7 @@ int cps_pm_enter_state(enum cps_pm_state state)
 		return -EINVAL;
 
 	/* Calculate which coupled CPUs (VPEs) are online */
-#ifdef CONFIG_MIPS_MT
+#if defined(CONFIG_MIPS_MT) || defined(CONFIG_CPU_MIPSR6)
 	if (cpu_online(cpu)) {
 		cpumask_and(coupled_mask, cpu_online_mask,
 			    &cpu_sibling_map[cpu]);
@@ -198,10 +194,10 @@ int cps_pm_enter_state(enum cps_pm_state state)
 	return 0;
 }
 
-static void __init cps_gen_cache_routine(u32 **pp, struct uasm_label **pl,
-					 struct uasm_reloc **pr,
-					 const struct cache_desc *cache,
-					 unsigned op, int lbl)
+static void cps_gen_cache_routine(u32 **pp, struct uasm_label **pl,
+				  struct uasm_reloc **pr,
+				  const struct cache_desc *cache,
+				  unsigned op, int lbl)
 {
 	unsigned cache_size = cache->ways << cache->waybit;
 	unsigned i;
@@ -242,10 +238,10 @@ static void __init cps_gen_cache_routine(u32 **pp, struct uasm_label **pl,
 	uasm_i_nop(pp);
 }
 
-static int __init cps_gen_flush_fsb(u32 **pp, struct uasm_label **pl,
-				    struct uasm_reloc **pr,
-				    const struct cpuinfo_mips *cpu_info,
-				    int lbl)
+static int cps_gen_flush_fsb(u32 **pp, struct uasm_label **pl,
+			     struct uasm_reloc **pr,
+			     const struct cpuinfo_mips *cpu_info,
+			     int lbl)
 {
 	unsigned i, fsb_size = 8;
 	unsigned num_loads = (fsb_size * 3) / 2;
@@ -272,14 +268,9 @@ static int __init cps_gen_flush_fsb(u32 **pp, struct uasm_label **pl,
 		/* On older ones it's unavailable */
 		return -1;
 
-	/* CPUs which do not require the workaround */
-	case CPU_P5600:
-	case CPU_I6400:
-		return 0;
-
 	default:
-		WARN_ONCE(1, "pm-cps: FSB flush unsupported for this CPU\n");
-		return -1;
+		/* Assume that the CPU does not need this workaround */
+		return 0;
 	}
 
 	/*
@@ -320,8 +311,8 @@ static int __init cps_gen_flush_fsb(u32 **pp, struct uasm_label **pl,
 			      i * line_size * line_stride, t0);
 	}
 
-	/* Completion barrier */
-	uasm_i_sync(pp, stype_memory);
+	/* Barrier ensuring previous cache invalidates are complete */
+	uasm_i_sync(pp, STYPE_SYNC);
 	uasm_i_ehb(pp);
 
 	/* Check whether the pipeline stalled due to the FSB being full */
@@ -340,9 +331,9 @@ static int __init cps_gen_flush_fsb(u32 **pp, struct uasm_label **pl,
 	return 0;
 }
 
-static void __init cps_gen_set_top_bit(u32 **pp, struct uasm_label **pl,
-				       struct uasm_reloc **pr,
-				       unsigned r_addr, int lbl)
+static void cps_gen_set_top_bit(u32 **pp, struct uasm_label **pl,
+				struct uasm_reloc **pr,
+				unsigned r_addr, int lbl)
 {
 	uasm_i_lui(pp, t0, uasm_rel_hi(0x80000000));
 	uasm_build_label(pl, *pp, lbl);
@@ -353,7 +344,7 @@ static void __init cps_gen_set_top_bit(u32 **pp, struct uasm_label **pl,
 	uasm_i_nop(pp);
 }
 
-static void * __init cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
+static void *cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
 {
 	struct uasm_label *l = labels;
 	struct uasm_reloc *r = relocs;
@@ -411,7 +402,7 @@ static void * __init cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
 
 	if (coupled_coherence) {
 		/* Increment ready_count */
-		uasm_i_sync(&p, stype_ordering);
+		uasm_i_sync(&p, STYPE_SYNC_MB);
 		uasm_build_label(&l, p, lbl_incready);
 		uasm_i_ll(&p, t1, 0, r_nc_count);
 		uasm_i_addiu(&p, t2, t1, 1);
@@ -419,8 +410,8 @@ static void * __init cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
 		uasm_il_beqz(&p, &r, t2, lbl_incready);
 		uasm_i_addiu(&p, t1, t1, 1);
 
-		/* Ordering barrier */
-		uasm_i_sync(&p, stype_ordering);
+		/* Barrier ensuring all CPUs see the updated r_nc_count value */
+		uasm_i_sync(&p, STYPE_SYNC_MB);
 
 		/*
 		 * If this is the last VPE to become ready for non-coherence
@@ -441,7 +432,8 @@ static void * __init cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
 			uasm_i_lw(&p, t0, 0, r_nc_count);
 			uasm_il_bltz(&p, &r, t0, lbl_secondary_cont);
 			uasm_i_ehb(&p);
-			uasm_i_yield(&p, zero, t1);
+			if (cpu_has_mipsmt)
+				uasm_i_yield(&p, zero, t1);
 			uasm_il_b(&p, &r, lbl_poll_cont);
 			uasm_i_nop(&p);
 		} else {
@@ -449,8 +441,21 @@ static void * __init cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
 			 * The core will lose power & this VPE will not continue
 			 * so it can simply halt here.
 			 */
-			uasm_i_addiu(&p, t0, zero, TCHALT_H);
-			uasm_i_mtc0(&p, t0, 2, 4);
+			if (cpu_has_mipsmt) {
+				/* Halt the VPE via C0 tchalt register */
+				uasm_i_addiu(&p, t0, zero, TCHALT_H);
+				uasm_i_mtc0(&p, t0, 2, 4);
+			} else if (cpu_has_vp) {
+				/* Halt the VP via the CPC VP_STOP register */
+				unsigned int vpe_id;
+
+				vpe_id = cpu_vpe_id(&cpu_data[cpu]);
+				uasm_i_addiu(&p, t0, zero, 1 << vpe_id);
+				UASM_i_LA(&p, t1, (long)addr_cpc_cl_vp_stop());
+				uasm_i_sw(&p, t0, 0, t1);
+			} else {
+				BUG();
+			}
 			uasm_build_label(&l, p, lbl_secondary_hang);
 			uasm_il_b(&p, &r, lbl_secondary_hang);
 			uasm_i_nop(&p);
@@ -472,22 +477,24 @@ static void * __init cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
 	cps_gen_cache_routine(&p, &l, &r, &cpu_data[cpu].dcache,
 			      Index_Writeback_Inv_D, lbl_flushdcache);
 
-	/* Completion barrier */
-	uasm_i_sync(&p, stype_memory);
+	/* Barrier ensuring previous cache invalidates are complete */
+	uasm_i_sync(&p, STYPE_SYNC);
 	uasm_i_ehb(&p);
 
-	/*
-	 * Disable all but self interventions. The load from COHCTL is defined
-	 * by the interAptiv & proAptiv SUMs as ensuring that the operation
-	 * resulting from the preceding store is complete.
-	 */
-	uasm_i_addiu(&p, t0, zero, 1 << cpu_data[cpu].core);
-	uasm_i_sw(&p, t0, 0, r_pcohctl);
-	uasm_i_lw(&p, t0, 0, r_pcohctl);
-
-	/* Sync to ensure previous interventions are complete */
-	uasm_i_sync(&p, stype_intervention);
-	uasm_i_ehb(&p);
+	if (mips_cm_revision() < CM_REV_CM3) {
+		/*
+		 * Disable all but self interventions. The load from COHCTL is
+		 * defined by the interAptiv & proAptiv SUMs as ensuring that the
+		 * operation resulting from the preceding store is complete.
+		 */
+		uasm_i_addiu(&p, t0, zero, 1 << cpu_data[cpu].core);
+		uasm_i_sw(&p, t0, 0, r_pcohctl);
+		uasm_i_lw(&p, t0, 0, r_pcohctl);
+
+		/* Barrier to ensure write to coherence control is complete */
+		uasm_i_sync(&p, STYPE_SYNC);
+		uasm_i_ehb(&p);
+	}
 
 	/* Disable coherence */
 	uasm_i_sw(&p, zero, 0, r_pcohctl);
@@ -531,8 +538,8 @@ static void * __init cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
 			goto gen_done;
 		}
 
-		/* Completion barrier */
-		uasm_i_sync(&p, stype_memory);
+		/* Barrier to ensure write to CPC command is complete */
+		uasm_i_sync(&p, STYPE_SYNC);
 		uasm_i_ehb(&p);
 	}
 
@@ -562,26 +569,29 @@ static void * __init cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
 	 * will run this. The first will actually re-enable coherence & the
 	 * rest will just be performing a rather unusual nop.
 	 */
-	uasm_i_addiu(&p, t0, zero, CM_GCR_Cx_COHERENCE_COHDOMAINEN_MSK);
+	uasm_i_addiu(&p, t0, zero, mips_cm_revision() < CM_REV_CM3
+				? CM_GCR_Cx_COHERENCE_COHDOMAINEN_MSK
+				: CM3_GCR_Cx_COHERENCE_COHEN_MSK);
+
 	uasm_i_sw(&p, t0, 0, r_pcohctl);
 	uasm_i_lw(&p, t0, 0, r_pcohctl);
 
-	/* Completion barrier */
-	uasm_i_sync(&p, stype_memory);
+	/* Barrier to ensure write to coherence control is complete */
+	uasm_i_sync(&p, STYPE_SYNC);
 	uasm_i_ehb(&p);
 
 	if (coupled_coherence && (state == CPS_PM_NC_WAIT)) {
 		/* Decrement ready_count */
 		uasm_build_label(&l, p, lbl_decready);
-		uasm_i_sync(&p, stype_ordering);
+		uasm_i_sync(&p, STYPE_SYNC_MB);
 		uasm_i_ll(&p, t1, 0, r_nc_count);
 		uasm_i_addiu(&p, t2, t1, -1);
 		uasm_i_sc(&p, t2, 0, r_nc_count);
 		uasm_il_beqz(&p, &r, t2, lbl_decready);
 		uasm_i_andi(&p, v0, t1, (1 << fls(smp_num_siblings)) - 1);
 
-		/* Ordering barrier */
-		uasm_i_sync(&p, stype_ordering);
+		/* Barrier ensuring all CPUs see the updated r_nc_count value */
+		uasm_i_sync(&p, STYPE_SYNC_MB);
 	}
 
 	if (coupled_coherence && (state == CPS_PM_CLOCK_GATED)) {
@@ -602,8 +612,8 @@ static void * __init cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
 		 */
		uasm_build_label(&l, p, lbl_secondary_cont);
 
-		/* Ordering barrier */
-		uasm_i_sync(&p, stype_ordering);
+		/* Barrier ensuring all CPUs see the updated r_nc_count value */
+		uasm_i_sync(&p, STYPE_SYNC_MB);
 	}
 
 	/* The core is coherent, time to return to C code */
@@ -628,7 +638,7 @@ out_err:
 	return NULL;
 }
 
-static int __init cps_gen_core_entries(unsigned cpu)
+static int cps_pm_online_cpu(unsigned int cpu)
 {
 	enum cps_pm_state state;
 	unsigned core = cpu_data[cpu].core;
@@ -670,29 +680,10 @@ static int __init cps_gen_core_entries(unsigned cpu)
 
 static int __init cps_pm_init(void)
 {
-	unsigned cpu;
-	int err;
-
-	/* Detect appropriate sync types for the system */
-	switch (current_cpu_data.cputype) {
-	case CPU_INTERAPTIV:
-	case CPU_PROAPTIV:
-	case CPU_M5150:
-	case CPU_P5600:
-	case CPU_I6400:
-		stype_intervention = 0x2;
-		stype_memory = 0x3;
-		stype_ordering = 0x10;
-		break;
-
-	default:
-		pr_warn("Power management is using heavyweight sync 0\n");
-	}
-
 	/* A CM is required for all non-coherent states */
693 | if (!mips_cm_present()) { | 684 | if (!mips_cm_present()) { |
694 | pr_warn("pm-cps: no CM, non-coherent states unavailable\n"); | 685 | pr_warn("pm-cps: no CM, non-coherent states unavailable\n"); |
695 | goto out; | 686 | return 0; |
696 | } | 687 | } |
697 | 688 | ||
698 | /* | 689 | /* |
@@ -722,12 +713,7 @@ static int __init cps_pm_init(void) | |||
722 | pr_warn("pm-cps: no CPC, clock & power gating unavailable\n"); | 713 | pr_warn("pm-cps: no CPC, clock & power gating unavailable\n"); |
723 | } | 714 | } |
724 | 715 | ||
725 | for_each_present_cpu(cpu) { | 716 | return cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "AP_PM_CPS_CPU_ONLINE", |
726 | err = cps_gen_core_entries(cpu); | 717 | cps_pm_online_cpu, NULL); |
727 | if (err) | ||
728 | return err; | ||
729 | } | ||
730 | out: | ||
731 | return 0; | ||
732 | } | 718 | } |
733 | arch_initcall(cps_pm_init); | 719 | arch_initcall(cps_pm_init); |
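The `lbl_decready` sequence above emits a classic ll/sc retry loop: load-linked the counter, decrement, store-conditional, and branch back if the store failed because another VPE raced in. As a rough userspace analogue (not the kernel code — names and the C11-atomics substitution are mine), the same retry shape looks like:

```c
#include <assert.h>
#include <stdatomic.h>

/*
 * Hypothetical model of the uasm-generated lbl_decready loop.
 * compare_exchange_weak plays the role of the sc instruction: it can
 * fail if another CPU raced with us (or spuriously), so we retry.
 */
static unsigned int dec_ready_count(_Atomic unsigned int *nc_count)
{
	unsigned int old = atomic_load(nc_count);

	while (!atomic_compare_exchange_weak(nc_count, &old, old - 1))
		;	/* retry, like the beqz back to lbl_decready */

	return old;	/* like t1: the pre-decrement value */
}
```

The surrounding `SYNC` barriers in the generated code correspond to the sequentially-consistent ordering these atomics default to.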
diff --git a/arch/mips/kernel/probes-common.h b/arch/mips/kernel/probes-common.h
new file mode 100644
index 000000000000..dd08e41134b6
--- /dev/null
+++ b/arch/mips/kernel/probes-common.h
@@ -0,0 +1,83 @@
+/*
+ * Copyright (C) 2016 Imagination Technologies
+ * Author: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifndef __PROBES_COMMON_H
+#define __PROBES_COMMON_H
+
+#include <asm/inst.h>
+
+int __insn_is_compact_branch(union mips_instruction insn);
+
+static inline int __insn_has_delay_slot(const union mips_instruction insn)
+{
+	switch (insn.i_format.opcode) {
+	/*
+	 * jr and jalr are in r_format format.
+	 */
+	case spec_op:
+		switch (insn.r_format.func) {
+		case jalr_op:
+		case jr_op:
+			return 1;
+		}
+		break;
+
+	/*
+	 * This group contains:
+	 * bltz_op, bgez_op, bltzl_op, bgezl_op,
+	 * bltzal_op, bgezal_op, bltzall_op, bgezall_op.
+	 */
+	case bcond_op:
+		switch (insn.i_format.rt) {
+		case bltz_op:
+		case bltzl_op:
+		case bgez_op:
+		case bgezl_op:
+		case bltzal_op:
+		case bltzall_op:
+		case bgezal_op:
+		case bgezall_op:
+		case bposge32_op:
+			return 1;
+		}
+		break;
+
+	/*
+	 * These are unconditional and in j_format.
+	 */
+	case jal_op:
+	case j_op:
+	case beq_op:
+	case beql_op:
+	case bne_op:
+	case bnel_op:
+	case blez_op: /* not really i_format */
+	case blezl_op:
+	case bgtz_op:
+	case bgtzl_op:
+		return 1;
+
+	/*
+	 * And now the FPA/cp1 branch instructions.
+	 */
+	case cop1_op:
+#ifdef CONFIG_CPU_CAVIUM_OCTEON
+	case lwc2_op: /* This is bbit0 on Octeon */
+	case ldc2_op: /* This is bbit032 on Octeon */
+	case swc2_op: /* This is bbit1 on Octeon */
+	case sdc2_op: /* This is bbit132 on Octeon */
+#endif
+		return 1;
+	}
+
+	return 0;
+}
+
+#endif /* __PROBES_COMMON_H */
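`__insn_has_delay_slot()` dispatches on `insn.i_format.opcode`, the 6-bit major opcode in bits 31:26 of the instruction word. A standalone sketch of that field extraction (my own helper names, covering only the `j`/`jal` subset of the real table, with MIPS opcode values 0x02 and 0x03):

```c
#include <assert.h>
#include <stdint.h>

/* Extract the 6-bit major opcode (bits 31:26), as i_format.opcode does. */
static unsigned int mips_opcode(uint32_t word)
{
	return word >> 26;
}

/*
 * Tiny subset of __insn_has_delay_slot(): only the unconditional
 * j_format jumps are recognised here; the real header also handles
 * r_format jr/jalr, bcond_op branches, and cp1/Octeon cases.
 */
static int has_delay_slot_subset(uint32_t word)
{
	switch (mips_opcode(word)) {
	case 0x02: /* j */
	case 0x03: /* jal */
		return 1;
	}
	return 0;
}
```

For example, `0x0c000000` (`jal 0`) has opcode 3 and a delay slot, while `0x00000000` (`nop`, a `spec_op` shift) does not match this subset.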
diff --git a/arch/mips/kernel/proc.c b/arch/mips/kernel/proc.c
index 97dc01b03631..4eff2aed7360 100644
--- a/arch/mips/kernel/proc.c
+++ b/arch/mips/kernel/proc.c
@@ -135,6 +135,13 @@ static int show_cpuinfo(struct seq_file *m, void *v)
 	seq_printf(m, "package\t\t\t: %d\n", cpu_data[n].package);
 	seq_printf(m, "core\t\t\t: %d\n", cpu_data[n].core);
 
+#if defined(CONFIG_MIPS_MT_SMP) || defined(CONFIG_CPU_MIPSR6)
+	if (cpu_has_mipsmt)
+		seq_printf(m, "VPE\t\t\t: %d\n", cpu_data[n].vpe_id);
+	else if (cpu_has_vp)
+		seq_printf(m, "VP\t\t\t: %d\n", cpu_data[n].vpe_id);
+#endif
+
 	sprintf(fmt, "VCE%%c exceptions\t\t: %s\n",
 		cpu_has_vce ? "%u" : "not available");
 	seq_printf(m, fmt, 'D', vced_count);
diff --git a/arch/mips/kernel/smp-gic.c b/arch/mips/kernel/smp-gic.c
deleted file mode 100644
index 9b63829cf929..000000000000
--- a/arch/mips/kernel/smp-gic.c
+++ /dev/null
@@ -1,66 +0,0 @@
-/*
- * Copyright (C) 2013 Imagination Technologies
- * Author: Paul Burton <paul.burton@imgtec.com>
- *
- * Based on smp-cmp.c:
- *  Copyright (C) 2007 MIPS Technologies, Inc.
- *  Author: Chris Dearman (chris@mips.com)
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License as published by the
- * Free Software Foundation; either version 2 of the License, or (at your
- * option) any later version.
- */
-
-#include <linux/irqchip/mips-gic.h>
-#include <linux/printk.h>
-
-#include <asm/mips-cpc.h>
-#include <asm/smp-ops.h>
-
-void gic_send_ipi_single(int cpu, unsigned int action)
-{
-	unsigned long flags;
-	unsigned int intr;
-	unsigned int core = cpu_data[cpu].core;
-
-	pr_debug("CPU%d: %s cpu %d action %u status %08x\n",
-		 smp_processor_id(), __func__, cpu, action, read_c0_status());
-
-	local_irq_save(flags);
-
-	switch (action) {
-	case SMP_CALL_FUNCTION:
-		intr = plat_ipi_call_int_xlate(cpu);
-		break;
-
-	case SMP_RESCHEDULE_YOURSELF:
-		intr = plat_ipi_resched_int_xlate(cpu);
-		break;
-
-	default:
-		BUG();
-	}
-
-	gic_send_ipi(intr);
-
-	if (mips_cpc_present() && (core != current_cpu_data.core)) {
-		while (!cpumask_test_cpu(cpu, &cpu_coherent_mask)) {
-			mips_cm_lock_other(core, 0);
-			mips_cpc_lock_other(core);
-			write_cpc_co_cmd(CPC_Cx_CMD_PWRUP);
-			mips_cpc_unlock_other();
-			mips_cm_unlock_other();
-		}
-	}
-
-	local_irq_restore(flags);
-}
-
-void gic_send_ipi_mask(const struct cpumask *mask, unsigned int action)
-{
-	unsigned int i;
-
-	for_each_cpu(i, mask)
-		gic_send_ipi_single(i, action);
-}
diff --git a/arch/mips/kernel/smp-mt.c b/arch/mips/kernel/smp-mt.c
index 4f9570a57e8d..e077ea3e11fb 100644
--- a/arch/mips/kernel/smp-mt.c
+++ b/arch/mips/kernel/smp-mt.c
@@ -289,26 +289,3 @@ struct plat_smp_ops vsmp_smp_ops = {
 	.prepare_cpus		= vsmp_prepare_cpus,
 };
 
-#ifdef CONFIG_PROC_FS
-static int proc_cpuinfo_chain_call(struct notifier_block *nfb,
-	unsigned long action_unused, void *data)
-{
-	struct proc_cpuinfo_notifier_args *pcn = data;
-	struct seq_file *m = pcn->m;
-	unsigned long n = pcn->n;
-
-	if (!cpu_has_mipsmt)
-		return NOTIFY_OK;
-
-	seq_printf(m, "VPE\t\t\t: %d\n", cpu_data[n].vpe_id);
-
-	return NOTIFY_OK;
-}
-
-static int __init proc_cpuinfo_notifier_init(void)
-{
-	return proc_cpuinfo_notifier(proc_cpuinfo_chain_call, 0);
-}
-
-subsys_initcall(proc_cpuinfo_notifier_init);
-#endif
diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
index b0baf48951fa..7ebb1918e2ac 100644
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -25,7 +25,7 @@
 #include <linux/smp.h>
 #include <linux/spinlock.h>
 #include <linux/threads.h>
-#include <linux/module.h>
+#include <linux/export.h>
 #include <linux/time.h>
 #include <linux/timex.h>
 #include <linux/sched.h>
@@ -192,9 +192,11 @@ void mips_smp_send_ipi_mask(const struct cpumask *mask, unsigned int action)
 			continue;
 
 		while (!cpumask_test_cpu(cpu, &cpu_coherent_mask)) {
+			mips_cm_lock_other(core, 0);
 			mips_cpc_lock_other(core);
 			write_cpc_co_cmd(CPC_Cx_CMD_PWRUP);
 			mips_cpc_unlock_other();
+			mips_cm_unlock_other();
 		}
 	}
 }
@@ -229,7 +231,7 @@ static struct irqaction irq_call = {
 	.name		= "IPI call"
 };
 
-static __init void smp_ipi_init_one(unsigned int virq,
+static void smp_ipi_init_one(unsigned int virq,
 				    struct irqaction *action)
 {
 	int ret;
@@ -239,9 +241,11 @@ static __init void smp_ipi_init_one(unsigned int virq,
 	BUG_ON(ret);
 }
 
-static int __init mips_smp_ipi_init(void)
+static unsigned int call_virq, sched_virq;
+
+int mips_smp_ipi_allocate(const struct cpumask *mask)
 {
-	unsigned int call_virq, sched_virq;
+	int virq;
 	struct irq_domain *ipidomain;
 	struct device_node *node;
 
@@ -268,16 +272,20 @@ static int __init mips_smp_ipi_init(void)
 	if (!ipidomain)
 		return 0;
 
-	call_virq = irq_reserve_ipi(ipidomain, cpu_possible_mask);
-	BUG_ON(!call_virq);
+	virq = irq_reserve_ipi(ipidomain, mask);
+	BUG_ON(!virq);
+	if (!call_virq)
+		call_virq = virq;
 
-	sched_virq = irq_reserve_ipi(ipidomain, cpu_possible_mask);
-	BUG_ON(!sched_virq);
+	virq = irq_reserve_ipi(ipidomain, mask);
+	BUG_ON(!virq);
+	if (!sched_virq)
+		sched_virq = virq;
 
 	if (irq_domain_is_ipi_per_cpu(ipidomain)) {
 		int cpu;
 
-		for_each_cpu(cpu, cpu_possible_mask) {
+		for_each_cpu(cpu, mask) {
 			smp_ipi_init_one(call_virq + cpu, &irq_call);
 			smp_ipi_init_one(sched_virq + cpu, &irq_resched);
 		}
@@ -286,6 +294,45 @@ static int __init mips_smp_ipi_init(void)
 		smp_ipi_init_one(sched_virq, &irq_resched);
 	}
 
+	return 0;
+}
+
+int mips_smp_ipi_free(const struct cpumask *mask)
+{
+	struct irq_domain *ipidomain;
+	struct device_node *node;
+
+	node = of_irq_find_parent(of_root);
+	ipidomain = irq_find_matching_host(node, DOMAIN_BUS_IPI);
+
+	/*
+	 * Some platforms have half DT setup. So if we found irq node but
+	 * didn't find an ipidomain, try to search for one that is not in the
+	 * DT.
+	 */
+	if (node && !ipidomain)
+		ipidomain = irq_find_matching_host(NULL, DOMAIN_BUS_IPI);
+
+	BUG_ON(!ipidomain);
+
+	if (irq_domain_is_ipi_per_cpu(ipidomain)) {
+		int cpu;
+
+		for_each_cpu(cpu, mask) {
+			remove_irq(call_virq + cpu, &irq_call);
+			remove_irq(sched_virq + cpu, &irq_resched);
+		}
+	}
+	irq_destroy_ipi(call_virq, mask);
+	irq_destroy_ipi(sched_virq, mask);
+	return 0;
+}
+
+
+static int __init mips_smp_ipi_init(void)
+{
+	mips_smp_ipi_allocate(cpu_possible_mask);
+
 	call_desc = irq_to_desc(call_virq);
 	sched_desc = irq_to_desc(sched_virq);
 
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index 3de85be2486a..1f5fdee1dfc3 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -21,6 +21,7 @@
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/extable.h>
 #include <linux/mm.h>
 #include <linux/sched.h>
 #include <linux/smp.h>
@@ -48,6 +49,7 @@
 #include <asm/fpu.h>
 #include <asm/fpu_emulator.h>
 #include <asm/idle.h>
+#include <asm/mips-cm.h>
 #include <asm/mips-r2-to-r6-emul.h>
 #include <asm/mipsregs.h>
 #include <asm/mipsmtregs.h>
@@ -444,6 +446,8 @@ asmlinkage void do_be(struct pt_regs *regs)
 
 	if (board_be_handler)
 		action = board_be_handler(regs, fixup != NULL);
+	else
+		mips_cm_error_report();
 
 	switch (action) {
 	case MIPS_BE_DISCARD:
@@ -2091,6 +2095,14 @@ static void configure_exception_vector(void)
 {
 	if (cpu_has_veic || cpu_has_vint) {
 		unsigned long sr = set_c0_status(ST0_BEV);
+		/* If available, use WG to set top bits of EBASE */
+		if (cpu_has_ebase_wg) {
+#ifdef CONFIG_64BIT
+			write_c0_ebase_64(ebase | MIPS_EBASE_WG);
+#else
+			write_c0_ebase(ebase | MIPS_EBASE_WG);
+#endif
+		}
 		write_c0_ebase(ebase);
 		write_c0_status(sr);
 		/* Setting vector spacing enables EI/VI mode */
@@ -2127,8 +2139,17 @@ void per_cpu_trap_init(bool is_boot_cpu)
 	 * We shouldn't trust a secondary core has a sane EBASE register
 	 * so use the one calculated by the boot CPU.
 	 */
-	if (!is_boot_cpu)
+	if (!is_boot_cpu) {
+		/* If available, use WG to set top bits of EBASE */
+		if (cpu_has_ebase_wg) {
+#ifdef CONFIG_64BIT
+			write_c0_ebase_64(ebase | MIPS_EBASE_WG);
+#else
+			write_c0_ebase(ebase | MIPS_EBASE_WG);
+#endif
+		}
 		write_c0_ebase(ebase);
+	}
 
 	cp0_compare_irq_shift = CAUSEB_TI - CAUSEB_IP;
 	cp0_compare_irq = (read_c0_intctl() >> INTCTLB_IPTI) & 7;
@@ -2209,13 +2230,39 @@ void __init trap_init(void)
 
 	if (cpu_has_veic || cpu_has_vint) {
 		unsigned long size = 0x200 + VECTORSPACING*64;
+		phys_addr_t ebase_pa;
+
 		ebase = (unsigned long)
 			__alloc_bootmem(size, 1 << fls(size), 0);
+
+		/*
+		 * Try to ensure ebase resides in KSeg0 if possible.
+		 *
+		 * It shouldn't generally be in XKPhys on MIPS64 to avoid
+		 * hitting a poorly defined exception base for Cache Errors.
+		 * The allocation is likely to be in the low 512MB of physical,
+		 * in which case we should be able to convert to KSeg0.
+		 *
+		 * EVA is special though as it allows segments to be rearranged
+		 * and to become uncached during cache error handling.
+		 */
+		ebase_pa = __pa(ebase);
+		if (!IS_ENABLED(CONFIG_EVA) && !WARN_ON(ebase_pa >= 0x20000000))
+			ebase = CKSEG0ADDR(ebase_pa);
 	} else {
 		ebase = CAC_BASE;
 
-		if (cpu_has_mips_r2_r6)
-			ebase += (read_c0_ebase() & 0x3ffff000);
+		if (cpu_has_mips_r2_r6) {
+			if (cpu_has_ebase_wg) {
+#ifdef CONFIG_64BIT
+				ebase = (read_c0_ebase_64() & ~0xfff);
+#else
+				ebase = (read_c0_ebase() & ~0xfff);
+#endif
+			} else {
+				ebase += (read_c0_ebase() & 0x3ffff000);
+			}
+		}
 	}
 
 	if (cpu_has_mmips) {
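The trap_init() hunk above distinguishes two ways of deriving the exception base: without WG only bits 29:12 of CP0 EBase are writable, so they are added onto the cached segment base; with WG the upper bits can be programmed too, so the full register (4KB-aligned) is used directly. A standalone sketch of the two masking forms (helper names and the example base value are mine):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Legacy form: only EBase bits 29:12 are usable, so mask with
 * 0x3ffff000 and add onto the cached segment base (CAC_BASE).
 */
static uint32_t ebase_legacy(uint32_t cac_base, uint32_t c0_ebase)
{
	return cac_base + (c0_ebase & 0x3ffff000u);
}

/*
 * WG form: the write gate lets the top bits be programmed as well,
 * so the whole register minus the low 12 bits is the base.
 */
static uint32_t ebase_wg(uint32_t c0_ebase)
{
	return c0_ebase & ~0xfffu;
}
```

For instance, with a hypothetical `cac_base` of `0x80000000` and EBase reading `0x00001000`, the legacy form yields `0x80001000`, while the WG form simply strips the low 12 bits of whatever was read.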
diff --git a/arch/mips/kernel/uprobes.c b/arch/mips/kernel/uprobes.c
index 4c7c1558944a..dbb917403131 100644
--- a/arch/mips/kernel/uprobes.c
+++ b/arch/mips/kernel/uprobes.c
@@ -8,71 +8,12 @@
 #include <asm/branch.h>
 #include <asm/cpu-features.h>
 #include <asm/ptrace.h>
-#include <asm/inst.h>
+
+#include "probes-common.h"
 
 static inline int insn_has_delay_slot(const union mips_instruction insn)
 {
-	switch (insn.i_format.opcode) {
-	/*
-	 * jr and jalr are in r_format format.
-	 */
-	case spec_op:
-		switch (insn.r_format.func) {
-		case jalr_op:
-		case jr_op:
-			return 1;
-		}
-		break;
-
-	/*
-	 * This group contains:
-	 * bltz_op, bgez_op, bltzl_op, bgezl_op,
-	 * bltzal_op, bgezal_op, bltzall_op, bgezall_op.
-	 */
-	case bcond_op:
-		switch (insn.i_format.rt) {
-		case bltz_op:
-		case bltzl_op:
-		case bgez_op:
-		case bgezl_op:
-		case bltzal_op:
-		case bltzall_op:
-		case bgezal_op:
-		case bgezall_op:
-		case bposge32_op:
-			return 1;
-		}
-		break;
-
-	/*
-	 * These are unconditional and in j_format.
-	 */
-	case jal_op:
-	case j_op:
-	case beq_op:
-	case beql_op:
-	case bne_op:
-	case bnel_op:
-	case blez_op: /* not really i_format */
-	case blezl_op:
-	case bgtz_op:
-	case bgtzl_op:
-		return 1;
-
-	/*
-	 * And now the FPA/cp1 branch instructions.
-	 */
-	case cop1_op:
-#ifdef CONFIG_CPU_CAVIUM_OCTEON
-	case lwc2_op: /* This is bbit0 on Octeon */
-	case ldc2_op: /* This is bbit032 on Octeon */
-	case swc2_op: /* This is bbit1 on Octeon */
-	case sdc2_op: /* This is bbit132 on Octeon */
-#endif
-		return 1;
-	}
-
-	return 0;
+	return __insn_has_delay_slot(insn);
 }
 
 /**
@@ -95,6 +36,12 @@ int arch_uprobe_analyze_insn(struct arch_uprobe *aup,
 		return -EINVAL;
 
 	inst.word = aup->insn[0];
+
+	if (__insn_is_compact_branch(inst)) {
+		pr_notice("Uprobes for compact branches are not supported\n");
+		return -EINVAL;
+	}
+
 	aup->ixol[0] = aup->insn[insn_has_delay_slot(inst)];
 	aup->ixol[1] = UPROBE_BRK_UPROBE_XOL;		/* NOP */
 
@@ -282,19 +229,14 @@ int __weak set_swbp(struct arch_uprobe *auprobe, struct mm_struct *mm,
 void __weak arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
 				  void *src, unsigned long len)
 {
-	void *kaddr;
+	unsigned long kaddr, kstart;
 
 	/* Initialize the slot */
-	kaddr = kmap_atomic(page);
-	memcpy(kaddr + (vaddr & ~PAGE_MASK), src, len);
-	kunmap_atomic(kaddr);
-
-	/*
-	 * The MIPS version of flush_icache_range will operate safely on
-	 * user space addresses and more importantly, it doesn't require a
-	 * VMA argument.
-	 */
-	flush_icache_range(vaddr, vaddr + len);
+	kaddr = (unsigned long)kmap_atomic(page);
+	kstart = kaddr + (vaddr & ~PAGE_MASK);
+	memcpy((void *)kstart, src, len);
+	flush_icache_range(kstart, kstart + len);
+	kunmap_atomic((void *)kaddr);
 }
 
 /**
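The arch_uprobe_copy_ixol() change above flushes the icache through the kernel mapping of the XOL page rather than the user virtual address: the kernel address of the slot is the kmap base plus the in-page offset of `vaddr`. A standalone sketch of just that offset arithmetic (the 4KB page size and helper name are assumptions for illustration):

```c
#include <assert.h>

#define PAGE_SIZE_SKETCH 4096ul
#define PAGE_MASK_SKETCH (~(PAGE_SIZE_SKETCH - 1))

/*
 * Given the kernel mapping base of a page (as kmap_atomic would
 * return) and a user vaddr, compute the kernel address of the same
 * byte: keep only the in-page offset bits of vaddr.
 */
static unsigned long slot_kaddr(unsigned long kaddr, unsigned long vaddr)
{
	return kaddr + (vaddr & ~PAGE_MASK_SKETCH);
}
```

This is why the kernel patch flushes `[kstart, kstart + len)`: the bytes just written live at that kernel address, and flushing there does not depend on the user mapping being present.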