| Commit message | Author | Age |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch enables NAND subpage read functionality.
When upper-layer drivers request non-page-aligned data, the subpage-read
path reads only those ECC regions which contain the requested data,
whereas the original code always read the whole page.
This significantly improves performance in many cases.
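A rough sketch of the region selection this describes (names are illustrative,
not the actual NAND driver code): given a non-page-aligned request, work out
which ECC steps have to be read.
        static void subpage_read_range(unsigned int col, unsigned int len,
                                       unsigned int ecc_step_size,
                                       unsigned int *first_step,
                                       unsigned int *nsteps)
        {
                /* First and last ECC step touched by [col, col + len) */
                unsigned int last = (col + len - 1) / ecc_step_size;

                *first_step = col / ecc_step_size;
                *nsteps = last - *first_step + 1;

                /* Only *nsteps data regions (plus their ECC bytes) are
                 * transferred, instead of the whole page. */
        }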
Here are some numbers:
UBI volume mount time:
  no subpage reads:   5.75 seconds
  with subpage reads: 2.42 seconds
Open/stat time for files on a JFFS2 volume:
  no subpage reads:   0m 5.36s
  with subpage reads: 0m 2.88s
Signed-off-by: Alexey Korolev <akorolev@infradead.org>
Acked-by: Artem Bityutskiy <dedekind@infradead.org>
Acked-by: Jörn Engel <joern@logfs.org>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This fix only affects UBI debugging.
If the background thread is disabled for debugging purposes,
start it anyway, because otherwise we see tons of kernel debugging
complaints like this:
INFO: task ubi_bgt0d:26857 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
ubi_bgt0d D dd37bf94 0 26857 2
dd37bfcc 00000086 f8e17cea dd37bf94 00000046 00000000 00000000 f5c62430
f5c62430 f5c62590 c2a09c80 f6cbd498 dd8e9cbc 00000296 dd37bfb0 00000296
dd8e9cb8 dd8e9cbc dd37bfcc c0119774 00000000 00000000 c0132e89 f6961560
Call Trace:
[<f8e17cea>] ? ubi_thread+0x0/0x127 [ubi]
[<c0119774>] ? complete+0x43/0x4b
[<c0132e89>] ? kthread+0x0/0x5b
[<f8e17cea>] ? ubi_thread+0x0/0x127 [ubi]
[<c0132eae>] kthread+0x25/0x5b
[<c0132e89>] ? kthread+0x0/0x5b
[<c0104953>] kernel_thread_helper+0x7/0x14
=======================
So start it, and sleep inside it, instead of creating it and never
starting it.
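A minimal sketch of the described behaviour (the flag and work helper names
are hypothetical, not the actual ubi_thread() code):
        static int bgt_thread(void *arg)
        {
                while (!kthread_should_stop()) {
                        if (bgt_disabled_for_debugging) {
                                /* Started, but does nothing: just sleep so
                                 * the hung-task detector sees a sleeping
                                 * task instead of a never-started one. */
                                set_current_state(TASK_INTERRUPTIBLE);
                                schedule();
                                continue;
                        }
                        do_one_background_work_unit();
                }
                return 0;
        }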
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Fix the following warning:
drivers/mtd/ubi/vmt.c: In function 'ubi_rename_volumes':
drivers/mtd/ubi/vmt.c:642: warning: statement with no effect
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Before UBI got into mainline, there was a slight flash format
change: we did not have sequence number support, then added it.
We have carried full support for those ancient images until this
moment. Now the support is removed (well, not fully removed):
UBI will now support only _clean_ old images, which were cleanly
detached the last time they were used (just before the kernel upgrade).
This is most likely the case.
But we will not support unclean ancient images. Surprisingly,
this allows us to remove a big chunk of legacy code.
And the same should be true for downgrading: clean images should
downgrade fine, but unclean ones will not.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |
| |
| |
| |
| |
| |
| | |
No functional changes, just tweak comments to make kernel-doc
work fine and stop complaining.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |
| |
| |
| |
| |
| |
| |
| | |
Just out of curiosity I ran checkpatch.pl on the whole of UBI
and discovered there are quite a few stylistic issues.
Fix them.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |
| |
| |
| |
| |
| |
| | |
This is probably a copy-paste bug - we torture the old PEB
in the atomic LEB change function, but we should not do this.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |
| |
| |
| |
| |
| |
| |
| | |
If bit-flips happen often, UBI prints too many messages. Lessen
the amount by printing the messages only when the PEB has actually
been scrubbed. Also, print torturing messages.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |
| |
| |
| |
| |
| |
| | |
A quite useful ioctl which allows making atomic system upgrades.
The idea belongs to Richard Titmuss <richard_titmuss@logitech.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |
| |
| |
| | |
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |
| |
| |
| |
| |
| |
| | |
hch asked not to use the word "unit" for sub-systems, so let it be.
Also some other comment modifications.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |
| |
| |
| |
| |
| | |
The ubi_err() macro will add \n.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Check that the volume name is not shorter than 'name_len'.
There is no need to copy the trailing zero byte because the whole
array was zeroed earlier.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |
| |
| |
| |
| |
| | |
To flush MTD device caches.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |
| |
| |
| |
| | |
Signed-off-by: Bruce Leonard <brucle@selinc.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
leb_read_unlock() may be called simultaneously by several tasks.
They would race at the following code:
up_read(&le->mutex);
if (free)
kfree(le);
And it is possible that one task frees 'le' before the other tasks
do 'up_read()'. Fix this by doing the up_read() and the free inside the
'ubi->ltree' lock (see the sketch after the trace below). Below is the
oops we hit because of this:
BUG: spinlock bad magic on CPU#0, integck/7504
BUG: unable to handle kernel paging request at 6b6b6c4f
IP: [<c0211221>] spin_bug+0x5c/0xdb
*pde = 00000000 Oops: 0000 [#1] PREEMPT SMP Modules linked in: ubifs ubi nandsim nand nand_ids nand_ecc video output
Pid: 7504, comm: integck Not tainted (2.6.26-rc3ubifs26 #8)
EIP: 0060:[<c0211221>] EFLAGS: 00010002 CPU: 0
EIP is at spin_bug+0x5c/0xdb
EAX: 00000032 EBX: 6b6b6b6b ECX: 6b6b6b6b EDX: f7f7ce30
ESI: f76491dc EDI: c044f51f EBP: e8a736cc ESP: e8a736a8
DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
Process integck (pid: 7504, ti=e8a72000 task=f7f7ce30 task.ti=e8a72000)
Stack: c044f754 c044f51f 00000000 f7f7d024 00001d50 00000001 f76491dc 00000296 f6df50e0 e8a736d8 c02112f0 f76491dc e8a736e8 c039157a f7d9e830 f76491d8 e8a7370c c020b975 f76491dc 00000296 f76491f8 00000000 f76491d8 00000000 Call Trace:
[<c02112f0>] ? _raw_spin_unlock+0x50/0x7c
[<c039157a>] ? _spin_unlock_irqrestore+0x20/0x58
[<c020b975>] ? rwsem_wake+0x4b/0x122
[<c0390e0a>] ? call_rwsem_wake+0xa/0xc
[<c0139ee7>] ? up_read+0x28/0x31
[<f8873b3c>] ? leb_read_unlock+0x73/0x7b [ubi]
[<f88742a3>] ? ubi_eba_read_leb+0x195/0x2b0 [ubi]
[<f8872a04>] ? ubi_leb_read+0xaf/0xf8 [ubi]
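A rough sketch of the structure of the fix (not the exact hunk; field names
follow the UBI ltree code of that time):
        spin_lock(&ubi->ltree_lock);
        le->users -= 1;
        if (le->users == 0) {
                rb_erase(&le->rb, &ubi->ltree);
                free = 1;
        }
        /* Do the final up_read() and, if needed, the kfree() while
         * ltree_lock is still held, so no other task can still be inside
         * up_read() on an entry that is about to be freed. */
        up_read(&le->mutex);
        if (free)
                kfree(le);
        spin_unlock(&ubi->ltree_lock);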
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Normally UBI volumes are freed in the release function of
the struct device object. However, on the error path they may
have to be freed before the struct device objects have been
initialized.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |
| |
| |
| |
| |
| |
| |
| | |
UBI forgets to free internal volumes when detaching an MTD device.
Fix this.
Pointed-out-by: Adrian Hunter <ext-adrian.hunter@nokia.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The ubi_free_volume() function sets ubi->volumes[] to NULL, so
ubi_eba_close() is useless: it does not free what has to be freed.
So zap it and free vol->eba_tbl in the volume release function.
Pointed-out-by: Adrian Hunter <ext-adrian.hunter@nokia.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |
| |
| |
| |
| |
| |
| |
| | |
UBI already checks that @min_io_size is a power of 2 in io_init().
It is therefore safe to use bit operations.
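For example (a sketch of the pattern, not the actual hunks; the names are
made up): with a power-of-two size, modulo and division become a mask and a
shift.
        /* ilog2() is from <linux/log2.h> */
        static void split_offset(unsigned int offset, unsigned int min_io_size,
                                 unsigned int *page, unsigned int *within)
        {
                *within = offset & (min_io_size - 1);   /* offset % min_io_size */
                *page   = offset >> ilog2(min_io_size); /* offset / min_io_size */
        }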
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Instead of correctly padding the buffer which we are writing to the
eraseblock during update, we used this weird construct:
memset(buf + len, 0xFF, len - len);
Fix this.
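Presumably the intended padding goes up to the I/O-aligned length being
written; a sketch with a hypothetical name for that length:
        /* 'aligned_len' stands for the aligned write length, not the real
         * variable name in drivers/mtd/ubi/upd.c */
        memset(buf + len, 0xFF, aligned_len - len);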
Signed-off-by: Kyungmin Park <kmpark@infradead.org>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |
| |
| |
| |
| |
| |
| |
| | |
It is not clear why we schedule the PEB for scrubbing in the case of
-EBADMSG. Elaborate.
Requested-by: Kyungmin Park <kmpark@infradead.org>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |
| |
| |
| |
| |
| |
| | |
Print the error code if checking fails; this is very useful
for identifying problems.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
| |\
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
git://git.kernel.org/pub/scm/linux/kernel/git/frob/linux-2.6-roland
* 'x86/auditsc' of git://git.kernel.org/pub/scm/linux/kernel/git/frob/linux-2.6-roland:
i386 syscall audit fast-path
x86_64 ia32 syscall audit fast-path
x86_64 syscall audit fast-path
x86_64: remove bogus optimization in sysret_signal
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This adds fast paths for 32-bit syscall entry and exit for the case when
TIF_SYSCALL_AUDIT is set but no other kind of syscall tracing is enabled.
These paths do not need to save and restore all registers as
the general case of tracing does. Avoiding the iret return path
when syscall audit is enabled helps performance a lot.
Signed-off-by: Roland McGrath <roland@redhat.com>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This adds fast paths for 32-bit syscall entry and exit for the case when
TIF_SYSCALL_AUDIT is set but no other kind of syscall tracing is enabled.
These paths do not need to save and restore all registers as
the general case of tracing does. Avoiding the iret return path
when syscall audit is enabled helps performance a lot.
Signed-off-by: Roland McGrath <roland@redhat.com>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This adds a fast path for 64-bit syscall entry and exit for the case
when TIF_SYSCALL_AUDIT is set but no other kind of syscall tracing is enabled.
This path does not need to save and restore all registers as
the general case of tracing does. Avoiding the iret return path
when syscall audit is enabled helps performance a lot.
Signed-off-by: Roland McGrath <roland@redhat.com>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This short-circuit path in sysret_signal looks wrong to me.
AFAICT, in practice the branch is never taken--and if it were,
it would go wrong. To wit, try loading a module whose init
function does set_thread_flag(TIF_IRET), and see insmod crash
(presumably with a wrong user stack pointer).
This is because the FIXUP_TOP_OF_STACK work hasn't been done yet
when we jump around the call to ptregscall_common and get to
int_with_check--where it expects the user RSP,SS,CS and EFLAGS to
have been stored by FIXUP_TOP_OF_STACK.
I don't think it's normally possible to get to sysret_signal with no
_TIF_DO_NOTIFY_MASK bits set anyway, so these two instructions are
already superfluous. If it ever did happen, it is harmless to call
do_notify_resume with nothing for it to do.
Signed-off-by: Roland McGrath <roland@redhat.com>
|
| |\ \
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'sched/for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
sched: hrtick_enabled() should use cpu_active()
sched, x86: clean up hrtick implementation
sched: fix build error, provide partition_sched_domains() unconditionally
sched: fix warning in inc_rt_tasks() to not declare variable 'rq' if it's not needed
cpu hotplug: Make cpu_active_map synchronization dependency clear
cpu hotplug, sched: Introduce cpu_active_map and redo sched domain management (take 2)
sched: rework of "prioritize non-migratable tasks over migratable ones"
sched: reduce stack size in isolated_cpu_setup()
Revert parts of "ftrace: do not trace scheduler functions"
Fixed up conflicts in include/asm-x86/thread_info.h (due to the
TIF_SINGLESTEP unification vs TIF_HRTICK_RESCHED removal) and
kernel/sched_fair.c (due to cpu_active_map vs for_each_cpu_mask_nr()
introduction).
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
Peter pointed out that hrtick_enabled() should use cpu_active().
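A sketch of the resulting check (assuming the surrounding sched.c helpers of
that era; not necessarily the exact hunk):
        static inline int hrtick_enabled(struct rq *rq)
        {
                if (!sched_feat(HRTICK))
                        return 0;
                if (!cpu_active(cpu_of(rq)))    /* was cpu_online() */
                        return 0;
                return hrtimer_is_hres_active(&rq->hrtick_timer);
        }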
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
| | |\ \ |
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
Random uvesafb failures were reported against Gentoo:
http://bugs.gentoo.org/show_bug.cgi?id=222799
and Mihai Moldovan bisected it back to:
> 8f4d37ec073c17e2d4aa8851df5837d798606d6f is first bad commit
> commit 8f4d37ec073c17e2d4aa8851df5837d798606d6f
> Author: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Date: Fri Jan 25 21:08:29 2008 +0100
>
> sched: high-res preemption tick
Linus suspected it to be hrtick + vm86 interaction and observed:
> Btw, Peter, Ingo: I think that commit is doing bad things. They aren't
> _incorrect_ per se, but they are definitely bad.
>
> Why?
>
> Using random _TIF_WORK_MASK flags is really impolite for doing
> "scheduling" work. There's a reason that arch/x86/kernel/entry_32.S
> special-cases the _TIF_NEED_RESCHED flag: we don't want to exit out of
> vm86 mode unnecessarily.
>
> See the "work_notifysig_v86" label, and how it does that
> "save_v86_state()" thing etc etc.
Right, I never liked having to fiddle with those TIF flags. Initially I
needed it because the hrtimer base lock could not nest in the rq lock.
That however is fixed these days.
Currently the only reason left to fiddle with the TIF flags is remote
wakeups. We cannot program a remote cpu's hrtimer. I've been thinking
about using the new and improved IPI function call stuff to implement
hrtimer_start_on().
However that does require that smp_call_function_single(.wait=0) works
from interrupt context - /me looks at the latest series from Jens - Yes
that does seem to be supported, good.
Here's a stab at cleaning this stuff up ...
Mihai reported test success as well.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Tested-by: Mihai Moldovan <ionic@ionic.de>
Cc: Michal Januszewski <spock@gentoo.org>
Cc: Antonino Daplas <adaplas@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
Fix inc_rt_tasks() to not declare the variable 'rq' if it is not needed.
It is declared if CONFIG_SMP or CONFIG_RT_GROUP_SCHED is set, but only
used if CONFIG_SMP is set.
This is a consequence of patch 1f11eb6a8bc92536d9e93ead48fa3ffbd1478571 plus
patch 1100ac91b6af02d8639d518fad5b434b1bf44ed6.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
provide an empty partition_sched_domains() definition for the UP case:
include/linux/cpuset.h: In function ‘rebuild_sched_domains':
include/linux/cpuset.h:163: error: implicit declaration of function ‘partition_sched_domains'
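The usual fix for this class of build error is an empty inline stub for the
!CONFIG_SMP case; a sketch (the prototype is assumed from that kernel
version, this is not necessarily the exact hunk):
        #ifndef CONFIG_SMP
        static inline void
        partition_sched_domains(int ndoms_new, cpumask_t *doms_new,
                                struct sched_domain_attr *dattr_new)
        {
        }
        #endif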
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
This goes on top of the cpu_active_map (take 2) patch.
Currently we depend on stop_machine to provide the necessary
synchronization for the cpu_active_map updates.
As Dmitry Adamushko pointed out, this is fragile and is not much clearer
than the previous scheme. In other words, we do not want to depend on
the internals of the stop-machine operation here.
So make the synchronization rules clear by doing synchronize_sched()
after clearing the cpu_active bit.
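A sketch of that ordering (helper names from that kernel era; not the exact
hunk):
        cpu_clear(cpu, cpu_active_map); /* balancer stops seeing this CPU   */
        synchronize_sched();            /* wait out balance passes already  */
                                        /* running under the old map        */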
Tested on quad-Core2 with:
while true; do
for i in 1 2 3; do
echo 0 > /sys/devices/system/cpu/cpu$i/online
done
for i in 1 2 3; do
echo 1 > /sys/devices/system/cpu/cpu$i/online
done
done
and
stress -c 200
No lockdep, preempt or other complaints.
Signed-off-by: Max Krasnyansky <maxk@qualcomm.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
This is take 2, based on Linus' idea of creating a cpu_active_map that
prevents the scheduler load balancer from migrating tasks to a CPU that
is going down.
It allows us to simplify the domain management code and avoid unnecessary
domain rebuilds during cpu hotplug event handling.
Please ignore the cpusets part for now; it needs some more work in order
to avoid crazy lock nesting. I did, however, simplify and unify the domain
reinitialization logic: we now simply call partition_sched_domains() in
all cases. This means that we are using exactly the same code paths as in
the cpusets case, and hence the tests below cover cpusets too.
Cpuset changes to make rebuild_sched_domains() callable from various
contexts are in a separate patch (right after this one).
This not only boots but also easily handles
while true; do make clean; make -j 8; done
and
while true; do on-off-cpu 1; done
at the same time.
(on-off-cpu 1 simply does the echo 0/1 > /sys/.../cpu1/online thing.)
Surprisingly the box (dual-core Core2) is quite usable. In fact I'm typing
this right now in gnome-terminal, and things are moving just fine.
Also, this is running with most of the debug features enabled (lockdep,
mutex debugging, etc); no BUG_ONs or lockdep complaints so far.
I believe I addressed all of Dmitry's comments on the original version
from Linus. I changed both the fair and the rt balancer to mask out
non-active cpus, and replaced cpu_is_offline() with !cpu_active() in the
main scheduler code where it made sense (to me).
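A sketch of the masking idea (illustrative only, using the cpumask API of
that era):
        cpumask_t allowed;

        /* Only consider CPUs the task may run on that are still active. */
        cpus_and(allowed, p->cpus_allowed, cpu_active_map);
        if (cpus_empty(allowed))
                return -1;      /* nowhere to push/pull this task */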
Signed-off-by: Max Krasnyanskiy <maxk@qualcomm.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Gregory Haskins <ghaskins@novell.com>
Cc: dmitry.adamushko@gmail.com
Cc: pj@sgi.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
| | |/ /
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
(1) Handle in a generic way all cases when a newly woken-up task is
not migratable (not just the corner case when "rt_se->nr_cpus_allowed == 1").
(2) If current is to be preempted, then make sure "p" will be picked
up by pick_next_task_rt(), i.e. move the task's group to the head of its
list as well.
Currently that is not done for the group-scheduling case, as described
here: http://www.ussg.iu.edu/hypermail/linux/kernel/0807.0/0134.html
Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
* Remove 16k stack requirements in isolated_cpu_setup when NR_CPUS=4096.
Signed-off-by: Mike Travis <travis@sgi.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
the removal of -mno-spe in the !ftrace case was not intended.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
| |\ \ \
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'cpus4096-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (31 commits)
NR_CPUS: Replace NR_CPUS in speedstep-centrino.c
cpumask: Provide a generic set of CPUMASK_ALLOC macros, FIXUP
NR_CPUS: Replace NR_CPUS in cpufreq userspace routines
NR_CPUS: Replace per_cpu(..., smp_processor_id()) with __get_cpu_var
NR_CPUS: Replace NR_CPUS in arch/x86/kernel/genapic_flat_64.c
NR_CPUS: Replace NR_CPUS in arch/x86/kernel/genx2apic_uv_x.c
NR_CPUS: Replace NR_CPUS in arch/x86/kernel/cpu/proc.c
NR_CPUS: Replace NR_CPUS in arch/x86/kernel/cpu/mcheck/mce_64.c
cpumask: Optimize cpumask_of_cpu in lib/smp_processor_id.c, fix
cpumask: Use optimized CPUMASK_ALLOC macros in the centrino_target
cpumask: Provide a generic set of CPUMASK_ALLOC macros
cpumask: Optimize cpumask_of_cpu in lib/smp_processor_id.c
cpumask: Optimize cpumask_of_cpu in kernel/time/tick-common.c
cpumask: Optimize cpumask_of_cpu in drivers/misc/sgi-xp/xpc_main.c
cpumask: Optimize cpumask_of_cpu in arch/x86/kernel/ldt.c
cpumask: Optimize cpumask_of_cpu in arch/x86/kernel/io_apic_64.c
cpumask: Replace cpumask_of_cpu with cpumask_of_cpu_ptr
Revert "cpumask: introduce new APIs"
cpumask: make for_each_cpu_mask a bit smaller
net: Pass reference to cpumask variable in net/sunrpc/svc.c
...
Fix up trivial conflicts in drivers/cpufreq/cpufreq.c manually
|
| | |\ \ \
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
Conflicts:
net/sunrpc/svc.c
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
Some cleanups in speedstep-centrino.c for NR_CPUS=4096.
* Use new CPUMASK_PTR (instead of old CPUMASK_VAR).
* Replace arrays sized by NR_CPUS with percpu variables.
* Clean up some formatting problems (>80 chars per line)
and other checkpatch complaints.
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
* Rename CPUMASK_VAR --> CPUMASK_PTR (and simplify)
* Fix a semantic error in CPUMASK_ALLOC
* Add a bit of commentary to cpumask.h
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
* Replace arrays sized by NR_CPUS with percpu variables.
Prior reference: http://marc.info/?l=linux-kernel&m=120251421825989&w=4
Subject: [PATCH 1/4] cpufreq: change cpu freq tables to per_cpu variables
From: Mike Travis <travis () sgi ! com>
Date: 2008-02-08 23:37:39
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
* Slight optimization when getting one's own cpu_info percpu data.
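For example (a sketch of the pattern in the title, using the x86 cpu_info
per-cpu variable of that era):
        /* old: struct cpuinfo_x86 *c = &per_cpu(cpu_info, smp_processor_id());
         * new form avoids the explicit smp_processor_id() lookup when only
         * the current CPU's own copy is wanted: */
        struct cpuinfo_x86 *c = &__get_cpu_var(cpu_info);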
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
* nr_cpu_ids should be used to determine if a percpu area is
available for a given cpu.
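A sketch of the kind of bound check this switches to (illustrative only, not
the exact genapic hunk):
        if ((unsigned int)cpu >= nr_cpu_ids)    /* was: cpu >= NR_CPUS */
                return;         /* no per-cpu area set up for this id */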
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
* Replace NR_CPUS loop with for_each_possible_cpu().
* nr_cpu_ids should be used to determine if a percpu area is
available for a given cpu.
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
* Use nr_cpu_ids instead of NR_CPUS to limit traversal of cpu online map.
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
* nr_cpu_ids should be used to allocate arrays based on the number of
CPUs present.
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|