author	Linus Torvalds <torvalds@linux-foundation.org>	2011-07-24 13:20:54 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2011-07-24 13:20:54 -0400
commit	b6844e8f64920cdee620157252169ba63afb0c89 (patch)
tree	339a447f4d1b6b2a447d10d24de227ddfbd4cc65 /Documentation
parent	2f175074e6811974ee77ddeb026f4d21aa3eca4d (diff)
parent	3ad55155b222f2a901405dea20ff7c68828ecd92 (diff)
Merge branch 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm
* 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm: (237 commits)
  ARM: 7004/1: fix traps.h compile warnings
  ARM: 6998/2: kernel: use proper memory barriers for bitops
  ARM: 6997/1: ep93xx: increase NR_BANKS to 16 for support of 128MB RAM
  ARM: Fix build errors caused by adding generic macros
  ARM: CPU hotplug: ensure we migrate all IRQs off a downed CPU
  ARM: CPU hotplug: pass in proper affinity mask on IRQ migration
  ARM: GIC: avoid routing interrupts to offline CPUs
  ARM: CPU hotplug: fix abuse of irqdesc->node
  ARM: 6981/2: mmci: adjust calculation of f_min
  ARM: 7000/1: LPAE: Use long long printk format for displaying the pud
  ARM: 6999/1: head, zImage: Always Enter the kernel in ARM state
  ARM: btc: avoid invalidating the branch target cache on kernel TLB maintanence
  ARM: ARM_DMA_ZONE_SIZE is no more
  ARM: mach-shark: move ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
  ARM: mach-sa1100: move ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
  ARM: mach-realview: move from ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
  ARM: mach-pxa: move from ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
  ARM: mach-ixp4xx: move from ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
  ARM: mach-h720x: move from ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
  ARM: mach-davinci: move from ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
  ...
Diffstat (limited to 'Documentation')
-rw-r--r--	Documentation/arm/Booting	|   5
-rw-r--r--	Documentation/arm/SH-Mobile/zboot-rom-sdhi.txt	|  42
-rw-r--r--	Documentation/arm/kernel_user_helpers.txt	| 267
-rw-r--r--	Documentation/devicetree/bindings/arm/pmu.txt	|  21
4 files changed, 335 insertions, 0 deletions
diff --git a/Documentation/arm/Booting b/Documentation/arm/Booting
index 4e686a2ed91e..a341d87d276e 100644
--- a/Documentation/arm/Booting
+++ b/Documentation/arm/Booting
@@ -164,3 +164,8 @@ In either case, the following conditions must be met:
- The boot loader is expected to call the kernel image by jumping
  directly to the first instruction of the kernel image.

  On CPUs supporting the ARM instruction set, the entry must be
  made in ARM state, even for a Thumb-2 kernel.

  On CPUs supporting only the Thumb instruction set such as
  Cortex-M class CPUs, the entry must be made in Thumb state.
diff --git a/Documentation/arm/SH-Mobile/zboot-rom-sdhi.txt b/Documentation/arm/SH-Mobile/zboot-rom-sdhi.txt
new file mode 100644
index 000000000000..441959846e1a
--- /dev/null
+++ b/Documentation/arm/SH-Mobile/zboot-rom-sdhi.txt
@@ -0,0 +1,42 @@
ROM-able zImage boot from eSD
-----------------------------

A ROM-able zImage compiled with ZBOOT_ROM_SDHI may be written to eSD, and
SuperH Mobile ARM will boot directly from the SDHI hardware block.

This is achieved by the mask ROM loading the first portion of the image into
MERAM and then jumping to it. This portion contains loader code which
copies the entire image to SDRAM and jumps to it. From there the zImage
boot code proceeds as normal, uncompressing the image into its final
location and then jumping to it.

This code has been tested on a mackerel board using the developer 1A eSD
boot mode, which is configured using the following jumper settings:

      8 7 6 5 4 3 2 1
      x|x|x|x| |x|x|
    S4 -+-+-+-+-+-+-+-
       | | | |x| | |x on

The eSD card needs to be present in SDHI slot 1 (CN7).
As such, S1 and S33 also need to be configured as per
the notes in arch/arm/mach-shmobile/board-mackerel.c.

A partial zImage must be written to physical partition #1 (boot)
of the eSD at sector 0 in vrl4 format. A utility, vrl4, is supplied to
accomplish this.

e.g.
	vrl4 < zImage | dd of=/dev/sdX bs=512 count=17

A full copy of _the same_ zImage should be written to physical partition #1
(boot) of the eSD at sector 0. This should _not_ be in vrl4 format.

	dd if=zImage of=/dev/sdX bs=512

Note: The commands above assume that the physical partition has been
switched. No such facility currently exists in the Linux kernel.

Physical partitions are described in the eSD specification. At the time of
writing they are not the same as the partitions that are typically configured
using fdisk and visible through /proc/partitions.
diff --git a/Documentation/arm/kernel_user_helpers.txt b/Documentation/arm/kernel_user_helpers.txt
new file mode 100644
index 000000000000..a17df9f91d16
--- /dev/null
+++ b/Documentation/arm/kernel_user_helpers.txt
@@ -0,0 +1,267 @@
Kernel-provided User Helpers
============================

These are segments of kernel-provided user code reachable from user space
at a fixed address in kernel memory. They are used to provide user space
with operations that require kernel help because of unimplemented native
features and/or instructions in many ARM CPUs. The idea is for this code
to be executed directly in user mode for best efficiency, but it is too
intimate with the kernel counterpart to be left to user libraries. In
fact this code might even differ from one CPU to another depending on the
available instruction set, or on whether it is an SMP system. In other
words, the kernel reserves the right to change this code as needed without
warning. Only the entry points and their results as documented here are
guaranteed to be stable.

This is different from (but doesn't preclude) a full-blown VDSO
implementation; however, a VDSO would prevent some assembly tricks with
constants that allow for efficient branching to those code segments. And
since those code segments only use a few cycles before returning to user
code, the overhead of a VDSO indirect far call would add measurable
overhead to such minimalistic operations.

User space is expected to bypass those helpers and implement the
operations inline (either in the code emitted directly by the compiler,
or as part of the implementation of a library call) when optimizing for
a recent enough processor that has the necessary native support, but
only if the resulting binaries are already going to be incompatible with
earlier ARM processors due to the use of similar native instructions for
other things. In other words, don't make binaries unable to run on
earlier processors just for the sake of avoiding these kernel helpers if
your compiled code is not going to use the new instructions for other
purposes.

New helpers may be added over time, so an older kernel may be missing some
helpers present in a newer kernel. For this reason, programs must check
the value of __kuser_helper_version (see below) before assuming that it is
safe to call any particular helper. This check should ideally be
performed only once at process startup time, and execution aborted early
if the required helpers are not provided by the kernel version that the
process is running on.

kuser_helper_version
--------------------

Location: 0xffff0ffc

Reference declaration:

  extern int32_t __kuser_helper_version;

Definition:

  This field contains the number of helpers being implemented by the
  running kernel. User space may read this to determine the availability
  of a particular helper.

Usage example:

#define __kuser_helper_version (*(int32_t *)0xffff0ffc)

void check_kuser_version(void)
{
	if (__kuser_helper_version < 2) {
		fprintf(stderr, "can't do atomic operations, kernel too old\n");
		abort();
	}
}

Notes:

  User space may assume that the value of this field never changes
  during the lifetime of any single process. This means that this
  field can be read once during the initialisation of a library or
  startup phase of a program.

kuser_get_tls
-------------

Location: 0xffff0fe0

Reference prototype:

  void * __kuser_get_tls(void);

Input:

  lr = return address

Output:

  r0 = TLS value

Clobbered registers:

  none

Definition:

  Get the TLS value as previously set via the __ARM_NR_set_tls syscall.

Usage example:

typedef void * (__kuser_get_tls_t)(void);
#define __kuser_get_tls (*(__kuser_get_tls_t *)0xffff0fe0)

void foo()
{
	void *tls = __kuser_get_tls();
	printf("TLS = %p\n", tls);
}

Notes:

  - Valid only if __kuser_helper_version >= 1 (from kernel version 2.6.12).

kuser_cmpxchg
-------------

Location: 0xffff0fc0

Reference prototype:

  int __kuser_cmpxchg(int32_t oldval, int32_t newval, volatile int32_t *ptr);

Input:

  r0 = oldval
  r1 = newval
  r2 = ptr
  lr = return address

Output:

  r0 = success code (zero or non-zero)
  C flag = set if r0 == 0, clear if r0 != 0

Clobbered registers:

  r3, ip, flags

Definition:

  Atomically store newval in *ptr only if *ptr is equal to oldval.
  Return zero if *ptr was changed or non-zero if no exchange happened.
  The C flag is also set if *ptr was changed to allow for assembly
  optimization in the calling code.

Usage example:

typedef int (__kuser_cmpxchg_t)(int oldval, int newval, volatile int *ptr);
#define __kuser_cmpxchg (*(__kuser_cmpxchg_t *)0xffff0fc0)

int atomic_add(volatile int *ptr, int val)
{
	int old, new;

	do {
		old = *ptr;
		new = old + val;
	} while (__kuser_cmpxchg(old, new, ptr));

	return new;
}

Notes:

  - This routine already includes memory barriers as needed.

  - Valid only if __kuser_helper_version >= 2 (from kernel version 2.6.12).

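Since the helper lives at a fixed kernel-provided address on ARM, the retry
loop above cannot run as-is on other hosts. As a rough, portable sketch of
the same pattern, the fixed-address helper can be replaced with the
GCC/Clang __sync_val_compare_and_swap builtin (an assumption of this
sketch, not part of the kuser interface), preserving the helper's
zero-on-success return convention:

```c
/* Portable stand-in for __kuser_cmpxchg: returns 0 if *ptr was atomically
 * changed from oldval to newval, non-zero if no exchange happened.
 * Illustration only; the real helper is entered at 0xffff0fc0. */
static int cmpxchg_sim(int oldval, int newval, volatile int *ptr)
{
	return __sync_val_compare_and_swap(ptr, oldval, newval) != oldval;
}

/* Same retry loop as the atomic_add() example above, against the
 * simulated helper. */
static int atomic_add_sim(volatile int *ptr, int val)
{
	int old, new;

	do {
		old = *ptr;
		new = old + val;
	} while (cmpxchg_sim(old, new, ptr));

	return new;
}
```

The loop re-reads *ptr and retries whenever the compare-and-swap reports
that another writer got in first, which is exactly the contract the real
helper provides.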
kuser_memory_barrier
--------------------

Location: 0xffff0fa0

Reference prototype:

  void __kuser_memory_barrier(void);

Input:

  lr = return address

Output:

  none

Clobbered registers:

  none

Definition:

  Apply any needed memory barrier to preserve consistency with data modified
  manually and __kuser_cmpxchg usage.

Usage example:

typedef void (__kuser_dmb_t)(void);
#define __kuser_dmb (*(__kuser_dmb_t *)0xffff0fa0)

Notes:

  - Valid only if __kuser_helper_version >= 3 (from kernel version 2.6.15).

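A typical use of such a barrier is ordering a data write before a flag
update. The following is a hedged, portable sketch of that pattern; it
substitutes the compiler's __sync_synchronize() full barrier for the
fixed-address helper (an assumption of this sketch, not the kuser
interface itself, which would be reached through the __kuser_dmb macro
shown in the usage example above):

```c
static int payload;
static volatile int ready;

/* Portable stand-in for __kuser_dmb: a full memory barrier. */
static void dmb_sim(void)
{
	__sync_synchronize();
}

/* Writer: store the data, then the flag, with a barrier in between so
 * that an observer which sees ready == 1 also sees the payload. */
static void publish(int value)
{
	payload = value;
	dmb_sim();
	ready = 1;
}

/* Reader: wait for the flag, then pair the barrier before reading the
 * data the writer published. */
static int consume(void)
{
	while (!ready)
		;
	dmb_sim();
	return payload;
}
```

Barriers must be paired: the writer's barrier alone does not help unless
the reader also orders its flag read before its data read.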
kuser_cmpxchg64
---------------

Location: 0xffff0f60

Reference prototype:

  int __kuser_cmpxchg64(const int64_t *oldval,
                        const int64_t *newval,
                        volatile int64_t *ptr);

Input:

  r0 = pointer to oldval
  r1 = pointer to newval
  r2 = pointer to target value
  lr = return address

Output:

  r0 = success code (zero or non-zero)
  C flag = set if r0 == 0, clear if r0 != 0

Clobbered registers:

  r3, lr, flags

Definition:

  Atomically store the 64-bit value pointed to by newval in *ptr only if
  *ptr is equal to the 64-bit value pointed to by oldval. Return zero if
  *ptr was changed or non-zero if no exchange happened.

  The C flag is also set if *ptr was changed to allow for assembly
  optimization in the calling code.

Usage example:

typedef int (__kuser_cmpxchg64_t)(const int64_t *oldval,
                                  const int64_t *newval,
                                  volatile int64_t *ptr);
#define __kuser_cmpxchg64 (*(__kuser_cmpxchg64_t *)0xffff0f60)

int64_t atomic_add64(volatile int64_t *ptr, int64_t val)
{
	int64_t old, new;

	do {
		old = *ptr;
		new = old + val;
	} while (__kuser_cmpxchg64(&old, &new, ptr));

	return new;
}

Notes:

  - This routine already includes memory barriers as needed.

  - Due to the length of this sequence, this spans 2 conventional kuser
    "slots", therefore 0xffff0f80 is not used as a valid entry point.

  - Valid only if __kuser_helper_version >= 5 (from kernel version 3.1).
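As with the 32-bit helper, the 64-bit retry loop can be exercised off-target
with a portable stand-in. This sketch assumes the GCC/Clang
__sync_val_compare_and_swap builtin supports 64-bit operands on the host
(true on typical 64-bit hosts); note that, matching the helper's calling
convention, the old and new values are passed by pointer:

```c
#include <stdint.h>

/* Portable stand-in for __kuser_cmpxchg64: oldval and newval are passed
 * by pointer, and the return value is 0 only if *ptr was changed. */
static int cmpxchg64_sim(const int64_t *oldval, const int64_t *newval,
			 volatile int64_t *ptr)
{
	return __sync_val_compare_and_swap(ptr, *oldval, *newval) != *oldval;
}

/* Same retry loop as the atomic_add64() example above. */
static int64_t atomic_add64_sim(volatile int64_t *ptr, int64_t val)
{
	int64_t old, new;

	do {
		old = *ptr;
		new = old + val;
	} while (cmpxchg64_sim(&old, &new, ptr));

	return new;
}
```

Because old and new live on the caller's stack, each retry rebuilds both
before handing their addresses to the helper, just as the documented
example does.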
diff --git a/Documentation/devicetree/bindings/arm/pmu.txt b/Documentation/devicetree/bindings/arm/pmu.txt
new file mode 100644
index 000000000000..1c044eb320cc
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/pmu.txt
@@ -0,0 +1,21 @@
* ARM Performance Monitor Units

ARM cores often have a PMU for counting CPU and cache events such as cache
misses and hits. The interface to the PMU is part of the ARM Architecture
Reference Manual (ARM ARM). The ARM PMU should be represented in the
device tree as follows:

Required properties:

- compatible : should be one of
	"arm,cortex-a9-pmu"
	"arm,cortex-a8-pmu"
	"arm,arm1176-pmu"
	"arm,arm1136-pmu"
- interrupts : 1 combined interrupt or 1 per core.

Example:

pmu {
        compatible = "arm,cortex-a9-pmu";
        interrupts = <100 101>;
};