* [ARM] Re-organise die()  (Russell King, 2005-10-30)
  Provide __die() which can be called from various contexts to provide an oops report.
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] 3069/1: Add spitz irda platform support  (Richard Purdie, 2005-10-30)
  Patch from Richard Purdie. Add spitz irda platform support.
  Signed-off-by: Richard Purdie <rpurdie@rpsys.net>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] 3068/1: Add corgi irda platform support  (Richard Purdie, 2005-10-30)
  Patch from Richard Purdie. Add corgi irda platform support.
  Signed-off-by: Richard Purdie <rpurdie@rpsys.net>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] 3067/1: Add poodle irda platform support  (Richard Purdie, 2005-10-30)
  Patch from Richard Purdie. Add poodle irda platform support.
  Signed-off-by: Richard Purdie <rpurdie@rpsys.net>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] 3066/1: Fix PXA irda driver suspend/resume functions  (Richard Purdie, 2005-10-30)
  Patch from Richard Purdie. Update the PXA irda driver to match the recent platform
  device suspend/resume level changes.
  Signed-off-by: Richard Purdie <rpurdie@rpsys.net>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* Merge master.kernel.org:/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds, 2005-10-30)
* [CRYPTO] Check cra_alignmask against cra_blocksize  (Herbert Xu, 2005-10-29)
  The cipher code relies on the fact that the block size is a multiple of the required
  alignment, so we should check this at the time of algorithm registration. We also
  ensure that the block size is bounded by the page size.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
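  A minimal sketch of the registration-time checks described above; the placement inside
  crypto_register_alg() and the surrounding code are assumptions, not copied from the patch:

      int crypto_register_alg(struct crypto_alg *alg)
      {
              /* block size must be a multiple of the required alignment */
              if (alg->cra_blocksize & alg->cra_alignmask)
                      return -EINVAL;

              /* keep the block size bounded by the page size */
              if (alg->cra_blocksize > PAGE_SIZE)
                      return -EINVAL;

              /* ... rest of registration ... */
              return 0;
      }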
* [CRYPTO] Simplify one-member scatterlist expressions  (Herbert Xu, 2005-10-29)
  This patch rewrites various occurrences of &sg[0], where sg is an array of length one,
  to simply sg.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [PATCH] Use sg_set_buf/sg_init_one where applicable  (David Hardeman, 2005-10-29)
  This patch uses sg_set_buf/sg_init_one in some places where the same scatterlist setup
  was previously open-coded.
  Signed-off-by: David Hardeman <david@2gen.com>
  Cc: James Bottomley <James.Bottomley@steeleye.com>
  Cc: Greg KH <greg@kroah.com>
  Cc: "David S. Miller" <davem@davemloft.net>
  Cc: Jeff Garzik <jgarzik@pobox.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
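  The kind of open-coded setup being replaced, as a sketch (field names per the
  2.6.14-era struct scatterlist; the buffer and length variables are placeholders):

      /* before: single-entry scatterlist initialised by hand */
      sg->page   = virt_to_page(buf);
      sg->offset = offset_in_page(buf);
      sg->length = buflen;

      /* after: the helper expresses the same thing in one call */
      sg_set_buf(sg, buf, buflen);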
* [PATCH] Introduce sg_set_buf  (Herbert Xu, 2005-10-29)
  sg_init_one is a nice tool for the block layer. However, users of struct scatterlist in
  other subsystems don't usually need the DMA attributes, so for them it's a waste of time
  and space to initialise the whole struct scatterlist. Therefore this patch adds a new
  function, sg_set_buf, to initialise a scatterlist without zeroing the DMA attributes.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
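  Roughly how the two helpers relate, sketched against the 2.6.14-era struct scatterlist
  layout (a sketch, not the exact patch):

      /* sg_set_buf: point an existing entry at a kernel buffer, nothing more */
      static inline void sg_set_buf(struct scatterlist *sg, void *buf,
                                    unsigned int buflen)
      {
              sg->page   = virt_to_page(buf);
              sg->offset = offset_in_page(buf);
              sg->length = buflen;
      }

      /* sg_init_one: additionally clears the whole entry, DMA fields included */
      static inline void sg_init_one(struct scatterlist *sg, void *buf,
                                     unsigned int buflen)
      {
              memset(sg, 0, sizeof(*sg));
              sg_set_buf(sg, buf, buflen);
      }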
* [PATCH] mm/filemap.c:filemap_populate(): move export  (Nikita Danilov, 2005-10-30)
  Move EXPORT_SYMBOL(filemap_populate) to the proper place, just after the function
  itself; it's easy to miss that the function is exported otherwise.
  Signed-off-by: Nikita Danilov <nikita@clusterfs.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
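  The convention being enforced, sketched with the body elided (the prototype shown is
  illustrative, not copied from the tree):

      /* the export sits directly below the function it exports, so a reader
       * sees both together */
      int filemap_populate(struct vm_area_struct *vma, unsigned long addr,
                           unsigned long len, pgprot_t prot,
                           unsigned long pgoff, int nonblock)
      {
              /* ... */
      }
      EXPORT_SYMBOL(filemap_populate);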
* [PATCH] mm: wider use of for_each_*cpu()  (John Hawkes, 2005-10-30)
  In 'mm', change the explicit for-loops over NR_CPUS into the general for_each_cpu()
  constructs. This widens the scope of potential future optimizations of the general
  constructs, as well as taking advantage of the existing optimizations of first_cpu()
  and next_cpu(), which is advantageous when the true CPU count is much smaller than
  NR_CPUS.
  Signed-off-by: John Hawkes <hawkes@sgi.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
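  The shape of the conversion, as a sketch; the per-cpu counter walked here is a
  placeholder, not a variable from the patch:

      int cpu;
      unsigned long total = 0;

      /* before: iterate every slot up to NR_CPUS and filter by hand */
      for (cpu = 0; cpu < NR_CPUS; cpu++) {
              if (!cpu_possible(cpu))
                      continue;
              total += per_cpu(some_counter, cpu);
      }

      /* after: the mask iterator visits only CPUs that exist, via
       * first_cpu()/next_cpu() */
      for_each_cpu(cpu)
              total += per_cpu(some_counter, cpu);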
* [PATCH] Remove policy contextualization from mbind  (Christoph Lameter, 2005-10-30)
  Policy contextualization is only useful for task-based policies and not for vma-based
  policies. It may be useful to define allowed nodes that are not accessible from this
  thread because other threads may have access to these nodes. Without this patch, strange
  memory policy situations may cause an application to fail with out of memory.

  Example: let's say we have two threads A and B that share the same address space and a
  huge computational array X. Thread A is restricted by its cpuset to nodes 0 and 1, and
  thread B is restricted by its cpuset to nodes 2 and 3. Thread A now wants to restrict
  allocations to the first node and thus applies a BIND policy on X to nodes 0 and 2. The
  cpuset limits this to node 0, so pages for X must now be allocated on node 0. Thread B
  now touches a page that has never been used in X and faults in a page. According to the
  BIND policy of the vma for X, the page must be allocated on node 0. However, the cpuset
  of B does not allow allocation on nodes 0 and 1, so the application fails in alloc_pages
  with out of memory.
  Signed-off-by: Christoph Lameter <clameter@sgi.com>
  Cc: Andi Kleen <ak@muc.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] Implement sys_* do_* layering in the memory policy layer.  (Christoph Lameter, 2005-10-30)
  - Separate do_xxx from sys_xxx functions, as sketched below. sys_xxx functions take
    variable-sized bitmaps from user space as arguments; do_xxx functions take fixed-sized
    nodemask_t arguments and may be used from inside the kernel. Doing so simplifies the
    initialization code: there is no fs = KERNEL_DS assumption anymore.
  - Split up get_nodes into get_nodes (which gets the node list) and contextualize_policy,
    which restricts the nodes to those accessible to the task and updates cpusets.
  - Add comments explaining limitations of the bind policy.
  Signed-off-by: Christoph Lameter <clameter@sgi.com>
  Cc: Andi Kleen <ak@muc.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
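  A sketch of the layering, using the helper names from the list above; the exact
  signatures and call order are assumptions, not the patch itself:

      /* sys_* boundary: copy the variable-sized user bitmap, clip it to what the
       * task may use, then hand a fixed nodemask_t to the do_* entry point */
      asmlinkage long sys_set_mempolicy(int mode, unsigned long __user *nmask,
                                        unsigned long maxnode)
      {
              nodemask_t nodes;
              int err;

              err = get_nodes(&nodes, nmask, maxnode, mode);
              if (err)
                      return err;
              err = contextualize_policy(mode, &nodes);
              if (err)
                      return err;
              return do_set_mempolicy(mode, &nodes);
      }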
* [PATCH] memory hotplug: ppc64 specific hot-add functions  (Dave Hansen, 2005-10-30)
  Here is a set of ppc64-specific patches that at least allow compilation/booting with the
  following configurations:
    FLATMEM
    SPARSEMEM
    SPARSEMEM + MEMORY_HOTPLUG
  Signed-off-by: Mike Kravetz <kravetz@us.ibm.com>
  Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] memory hotplug: i386 addition functions  (Dave Hansen, 2005-10-30)
  Adds what is necessary for non-NUMA hot-add of highmem to an existing zone on i386.
  Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] memory hotplug: call setup_per_zone_pages_min after hotplug  (Dave Hansen, 2005-10-30)
  From: IWAMOTO Toshihiro <iwamoto@valinux.co.jp>
  > I found the tests do not work well with Dave's patchset. I've found the following:
  >
  > - setup_per_zone_pages_min() calls should be added in capture_page_range() and
  >   online_pages()
  > - lru_add_drain() should be called before try_to_migrate_pages()
  The following patch deals with the first item.
  Signed-off-by: IWAMOTO Toshihiro <iwamoto@valinux.co.jp>
  Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] memory hotplug: move section_mem_map alloc to sparse.c  (Dave Hansen, 2005-10-30)
  This basically keeps us from having to extern __kmalloc_section_memmap(). The
  vaddr_in_vmalloc_area() helper could go in a vmalloc header, but that header gets hard
  to work with because it needs some arch-specific macros. Just stick it in here for now,
  instead of creating another header.
  Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
  Signed-off-by: Lion Vollnhals <webmaster@schiggl.de>
  Signed-off-by: Jiri Slaby <xslaby@fi.muni.cz>
  Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] memory hotplug: sysfs and add/remove functions  (Dave Hansen, 2005-10-30)
  This adds generic memory add/remove and supporting functions for memory hotplug into a
  new file, as well as a memory hotplug kernel config option. Individual architecture
  patches will follow.

  For now, disable memory hotplug when swsusp is enabled. There's a lot of churn there
  right now. We'll fix it up properly once it calms down.
  Signed-off-by: Matt Tolentino <matthew.e.tolentino@intel.com>
  Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] memory hotplug locking: zone span seqlock  (Dave Hansen, 2005-10-30)
  See the "fixup bad_range()" patch for more information, but this actually creates the
  lock to protect code that makes assumptions about a zone's size staying constant at
  runtime.
  Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
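  The reader-side pattern such a seqlock enables, as a sketch; the
  zone_span_seqbegin()/zone_span_seqretry() helper names are assumed here:

      /* sample the zone span and retry if a hot-add/remove changed it meanwhile */
      static int pfn_in_zone_span(struct zone *zone, unsigned long pfn)
      {
              unsigned int seq;
              int ret;

              do {
                      seq = zone_span_seqbegin(zone);
                      ret = pfn >= zone->zone_start_pfn &&
                            pfn < zone->zone_start_pfn + zone->spanned_pages;
              } while (zone_span_seqretry(zone, seq));

              return ret;
      }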
* [PATCH] memory hotplug locking: node_size_lock  (Dave Hansen, 2005-10-30)
  pgdat->node_size_lock is basically only needed in one place in the normal code:
  show_mem(), which is the arch-specific sysrq-m printing function.

  Strictly speaking, the architectures not doing memory hotplug do not need this locking
  in show_mem(). However, they are all included for completeness. This should also make
  any future consolidation of all of the implementations a little more straightforward.

  This lock is also held in the sparsemem code during a memory removal, as sections are
  invalidated. This is the place where pfn_valid() is made false for a memory area that's
  being removed. The lock is only required when doing pfn_valid() operations on memory
  for which the caller does not already hold a reference on the page, such as in
  show_mem().
  Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
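  Roughly how a show_mem()-style walk would hold it; the pgdat_resize_lock()/
  pgdat_resize_unlock() wrapper names and signatures are assumptions here:

      /* hold the node size lock while trusting pfn_valid() over the node span */
      unsigned long pfn, flags;

      pgdat_resize_lock(pgdat, &flags);
      for (pfn = pgdat->node_start_pfn;
           pfn < pgdat->node_start_pfn + pgdat->node_spanned_pages; pfn++) {
              if (!pfn_valid(pfn))
                      continue;
              /* ... inspect and count the struct page ... */
      }
      pgdat_resize_unlock(pgdat, &flags);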
* [PATCH] memory hotplug prep: fixup bad_range()  (Dave Hansen, 2005-10-30)
  When doing memory hotplug operations, the size of existing zones can obviously change.
  This means that zone->zone_{start_pfn,spanned_pages} can change. There are currently no
  locks that protect these structure members. However, they are rarely accessed at
  runtime. Outside of swsusp, the only place that I can find is bad_range(). So, split
  bad_range() up into two pieces: one that needs to be locked and another that doesn't.
  Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] memory hotplug prep: __section_nr helper  (Dave Hansen, 2005-10-30)
  A little helper that we use in the hotplug code.
  Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] memory hotplug prep: break out zone initialization  (Dave Hansen, 2005-10-30)
  If a zone is empty at boot-time and then hot-added to later, it needs to run the same
  init code that would have been run on it at boot. This patch breaks out zone table and
  per-cpu-pages functions for use by the hotplug code. You can almost see all of the
  free_area_init_core() function on one page now. :)
  Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] memory hotplug prep: kill local_mapnr  (Dave Hansen, 2005-10-30)
  The following series implements memory hot-add for ppc64 and i386. There are x86_64 and
  ia64 implementations that will be submitted shortly as well, through the normal
  maintainers.

  This patch: local_mapnr is unused, except for in an alpha header. Keep the alpha one,
  kill the rest.
  Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] .text page fault SMP scalability optimization  (Andrea Arcangeli, 2005-10-30)
  We had a problem on ppc64 where, with more than 4 threads, a large system wouldn't
  scale well while faulting in the .text (most of the time was spent in the kernel even
  though it was a userland compute-intensive app). The reason is the useless overwrite of
  the same pte from all cpus.

  I fixed it this way (verified on an older kernel, but the forward port is almost
  identical). This will benefit all archs, not just ppc64.
  Signed-off-by: Andrea Arcangeli <andrea@suse.de>
  Cc: Hugh Dickins <hugh@veritas.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] hugetlb: overcommit accounting check  (Adam Litke, 2005-10-30)
  Basic overcommit checking for hugetlb_file_map() based on an implementation used with
  demand faulting in SLES9. Since demand faulting can't guarantee the availability of
  pages at mmap time, this patch implements a basic sanity check to ensure that the
  number of huge pages required to satisfy the mmap are currently available. Despite the
  obvious race, I think it is a good start on doing proper accounting.

  I'd like to work towards an accounting system that mimics the semantics of normal pages
  (especially for the MAP_PRIVATE/COW case). That work is underway and builds on what
  this patch starts. Huge page shared memory segments are simpler and still maintain
  their commit on shmget semantics.
  Signed-off-by: Adam Litke <agl@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] hugetlb: demand fault handler  (Adam Litke, 2005-10-30)
  Below is a patch to implement demand faulting for huge pages. The main motivation for
  changing from prefaulting to demand faulting is so that huge page memory areas can be
  allocated according to NUMA policy.

  Thanks to consolidated hugetlb code, switching the behavior requires changing only one
  fault handler. The bulk of the patch just moves the logic from hugetlb_prefault() to
  hugetlb_pte_fault() and find_get_huge_page().
  Signed-off-by: Adam Litke <agl@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] hugetlb: remove repeated code  (Krishnakumar R, 2005-10-30)
  Clean up some repeated code related to HugeTLB. hugetlb_zero_setup would have already
  allocated the file->f_op.
  Signed-off-by: Krishnakumar. R <rkrishnakumar@gmail.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] cleanup hugetlbfs_forget_inode  (Christoph Hellwig, 2005-10-30)
  Reformat hugetlbfs_forget_inode and add the missing but harmless write_inode_now call.
  It looks the same as generic_forget_inode now except for the call to truncate_hugepages
  instead of truncate_inode_pages.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] kill hugetlbfs_do_delete_inode  (Christoph Hellwig, 2005-10-30)
  hugetlbfs_do_delete_inode is the same as generic_delete_inode now, so remove it in
  favour of the latter.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] hugetlbfs: clean up hugetlbfs_delete_inode  (Christoph Hellwig, 2005-10-30)
  Make hugetlbfs_delete_inode look the same as generic_delete_inode, fixing a bunch of
  missing updates to it at the same time. Rename it to hugetlbfs_do_delete_inode and add
  a real hugetlbfs_delete_inode that implements ->delete_inode.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] hugetlbfs: move free_inodes accounting  (Christoph Hellwig, 2005-10-30)
  Move hugetlbfs accounting into ->alloc_inode / ->destroy_inode. This keeps the code
  simpler, fixes a leak where a failing inode allocation wouldn't decrement the counter,
  and moves hugetlbfs_delete_inode and hugetlbfs_forget_inode closer to their generic
  counterparts.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] mm: update comments to pte lock  (Hugh Dickins, 2005-10-30)
  Updated several references to page_table_lock in common code comments.
  Signed-off-by: Hugh Dickins <hugh@veritas.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] mm: fix rss and mmlist locking  (Hugh Dickins, 2005-10-30)
  A couple of oddities were guarded by page_table_lock, no longer properly guarded when
  that is split.

  The mm_counters of file_rss and anon_rss: make those an atomic_t, or an atomic64_t if
  the architecture supports it, in such a case. Definitions by courtesy of Christoph
  Lameter: who spent considerable effort on more scalable ways of counting, but found
  insufficient benefit in practice.

  And adding an mm with swap to the mmlist for swapoff: the list is well-guarded by its
  own lock, but the list_empty check now has to be repeated inside it.
  Signed-off-by: Hugh Dickins <hugh@veritas.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
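  A sketch of the counter side of this; the exact macro spelling is an assumption, the
  point is choosing the atomic type per architecture:

      /* use 64-bit atomics where the architecture provides them, so the RSS
       * counters cannot wrap on large 64-bit machines; otherwise atomic_t */
      #ifdef ATOMIC64_INIT
      typedef atomic64_t mm_counter_t;
      #define set_mm_counter(mm, member, value)  atomic64_set(&(mm)->_##member, value)
      #define get_mm_counter(mm, member)  ((unsigned long)atomic64_read(&(mm)->_##member))
      #define inc_mm_counter(mm, member)  atomic64_inc(&(mm)->_##member)
      #else
      typedef atomic_t mm_counter_t;
      #define set_mm_counter(mm, member, value)  atomic_set(&(mm)->_##member, value)
      #define get_mm_counter(mm, member)  ((unsigned long)atomic_read(&(mm)->_##member))
      #define inc_mm_counter(mm, member)  atomic_inc(&(mm)->_##member)
      #endif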
* [PATCH] mm: split page table lock  (Hugh Dickins, 2005-10-30)
  Christoph Lameter demonstrated very poor scalability on the SGI 512-way, with a
  many-threaded application which concurrently initializes different parts of a large
  anonymous area.

  This patch corrects that, by using a separate spinlock per page table page, to guard
  the page table entries in that page, instead of using the mm's single page_table_lock.
  (But even then, page_table_lock is still used to guard page table allocation, and
  anon_vma allocation.)

  In this implementation, the spinlock is tucked inside the struct page of the page table
  page: with a BUILD_BUG_ON in case it overflows - which it would in the case of 32-bit
  PA-RISC with spinlock debugging enabled.

  Splitting the lock is not quite for free: another cacheline access. Ideally, I suppose
  we would use split ptlock only for multi-threaded processes on multi-cpu machines; but
  deciding that dynamically would have its own costs. So for now enable it by config, at
  some number of cpus - since the Kconfig language doesn't support inequalities, let
  preprocessor compare that with NR_CPUS. But I don't think it's worth being
  user-configurable: for good testing of both split and unsplit configs, split now at 4
  cpus, and perhaps change that to 8 later.

  There is a benefit even for singly threaded processes: kswapd can be attacking one part
  of the mm while another part is busy faulting.
  Signed-off-by: Hugh Dickins <hugh@veritas.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
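  The core of the scheme, as a sketch; the pte_lockptr()/pte_offset_map_lock() names
  follow the series, but the exact macro bodies here are assumptions:

      #if NR_CPUS >= CONFIG_SPLIT_PTLOCK_CPUS
      /* the lock guarding a page table page lives in that page's struct page */
      #define pte_lockptr(mm, pmd)    (&pmd_page(*(pmd))->ptl)
      #else
      /* small configurations keep the single mm-wide lock */
      #define pte_lockptr(mm, pmd)    (&(mm)->page_table_lock)
      #endif

      #define pte_offset_map_lock(mm, pmd, address, ptlp)     \
      ({                                                      \
              spinlock_t *__ptl = pte_lockptr(mm, pmd);       \
              pte_t *__pte = pte_offset_map(pmd, address);    \
              *(ptlp) = __ptl;                                \
              spin_lock(__ptl);                               \
              __pte;                                          \
      })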
* [PATCH] mm: uml kill unused  (Hugh Dickins, 2005-10-30)
  In worrying over the various pte operations in different architectures, I came across
  some unused functions in UML: remove mprotect_kernel_vm, protect_vm_page and addr_pte.
  Signed-off-by: Hugh Dickins <hugh@veritas.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] mm: uml pte atomicity  (Hugh Dickins, 2005-10-30)
  There's usually a good reason when a pte is examined without the lock; but it makes me
  nervous when the pointer is dereferenced more than once.
  Signed-off-by: Hugh Dickins <hugh@veritas.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] mm: cris v32 mmu_context_lock  (Hugh Dickins, 2005-10-30)
  The cris v32 switch_mm guards get_mmu_context with next->page_table_lock: good that
  it's not really SMP yet, since get_mmu_context messes with global variables affecting
  other mms. Replace it by a global mmu_context_lock.
  Signed-off-by: Hugh Dickins <hugh@veritas.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] mm: parisc pte atomicity  (Hugh Dickins, 2005-10-30)
  There's a worrying function translation_exists in parisc cacheflush.h, unaffected by
  split ptlock since flush_dcache_page is using it on some other mm, without any relevant
  lock. Oh well, make it slightly more robust by factoring the pfn check within it. And
  it looked liable to confuse a camouflaged swap or file entry with a good pte: fix that
  too.
  Signed-off-by: Hugh Dickins <hugh@veritas.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] mm: arm ready for split ptlock  (Hugh Dickins, 2005-10-30)
  Prepare arm for the split page_table_lock: three issues.

  Signal handling's preserve and restore of iwmmxt context currently involves reading and
  writing that context to and from user space, while holding page_table_lock to secure
  the user page(s) against kswapd. If we split the lock, then the structure might span
  two pages, secured by different locks; it seems simpler just to read into and write
  from a kernel stack buffer, copying that out and in without locking (the structure is
  160 bytes in size, and here we're near the top of the kernel stack). Or would the
  overhead be noticeable?

  arm_syscall's cmpxchg emulation uses pte_offset_map_lock, instead of pte_offset_map and
  mm-wide page_table_lock; and strictly, it should now also take mmap_sem before
  descending to pmd, to guard against another thread munmapping and the page table being
  pulled out beneath this thread.

  Updated two comments in fault-armv.c. adjust_pte is interesting, since its modification
  of a pte in one part of the mm depends on the lock held when calling update_mmu_cache
  for a pte in some other part of that mm. This can't be done with a split
  page_table_lock (and we've already taken the lowest lock in the hierarchy here): so
  we'll have to disable split on arm, unless CONFIG_CPU_CACHE_VIPT ensures adjust_pte is
  never used.
  Signed-off-by: Hugh Dickins <hugh@veritas.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] mm: i386 sh sh64 ready for split ptlock  (Hugh Dickins, 2005-10-30)
  Use pte_offset_map_lock, instead of pte_offset_map (or inappropriate pte_offset_kernel)
  and mm-wide page_table_lock, in sundry arch places.

  The i386 vm86 mark_screen_rdonly: yes, there was and is an assumption that the screen
  fits inside the one page table, as indeed it does.

  The sh __do_page_fault: which handles both kernel faults (without lock) and user mm
  faults (locked - though it set_pte without locking before).

  The sh64 flush_cache_range and helpers: which wrongly thought callers held
  page_table_lock before (only its tlb_start_vma did, and no longer does so); moved the
  flush loop down, and adjusted the large versus small range decision to consider a range
  which spans page tables as large.
  Signed-off-by: Hugh Dickins <hugh@veritas.com>
  Acked-by: Paul Mundt <lethal@linux-sh.org>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] mm: follow_page with inner ptlock  (Hugh Dickins, 2005-10-30)
  Final step in pushing down common core's page_table_lock. follow_page no longer wants
  caller to hold page_table_lock, uses pte_offset_map_lock itself; and so no
  page_table_lock is taken in get_user_pages itself.

  But get_user_pages (and get_futex_key) do then need follow_page to pin the page for
  them: take Daniel's suggestion of bitflags to follow_page. Need one for WRITE, another
  for TOUCH (it was the accessed flag before: vanished along with
  check_user_page_readable, but surely get_numa_maps is wrong to mark every page it finds
  as accessed), another for GET.

  And another, ANON to dispose of untouched_anonymous_page: it seems silly for that to
  descend a second time, let follow_page observe if there was no page table and return
  ZERO_PAGE if so. Fix minor bug in that: check VM_LOCKED - make_pages_present ought to
  make readonly anonymous present.

  Give get_numa_maps a cond_resched while we're there.
  Signed-off-by: Hugh Dickins <hugh@veritas.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
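  The flag bits described above, as a sketch; the numeric values and the exact
  follow_page() signature are assumptions:

      /* follow_page() request bits, one per behaviour a caller may need */
      #define FOLL_WRITE      0x01    /* require a writable pte */
      #define FOLL_TOUCH      0x02    /* mark the page accessed */
      #define FOLL_GET        0x04    /* take a reference with get_page */
      #define FOLL_ANON       0x08    /* give ZERO_PAGE if no page table */

      /* e.g. what get_user_pages would ask for on behalf of a writer */
      page = follow_page(mm, address,
                         FOLL_GET | FOLL_TOUCH | (write ? FOLL_WRITE : 0));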
* [PATCH] mm: kill check_user_page_readable  (Hugh Dickins, 2005-10-30)
  check_user_page_readable is a problematic variant of follow_page. It's used only by
  oprofile's i386 and arm backtrace code, at interrupt time, to establish whether a
  userspace stackframe is currently readable.

  This is problematic, because we want to push the page_table_lock down inside
  follow_page, and later split it; whereas oprofile is doing a spin_trylock on it (in the
  i386 case, forgotten in the arm case), and needs that to pin perhaps two pages spanned
  by the stackframe (which might be covered by different locks when we split).

  I think oprofile is going about this in the wrong way: it doesn't need to know the area
  is readable (neither i386 nor arm uses read protection of user pages), it doesn't need
  to pin the memory, it should simply __copy_from_user_inatomic, and see if that succeeds
  or not. Sorry, but I've not got around to devising the sparse __user annotations for
  this.

  Then we can eliminate check_user_page_readable, and return to a single follow_page
  without the __follow_page variants.
  Signed-off-by: Hugh Dickins <hugh@veritas.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
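  The suggested approach, sketched; the frame structure and helper name stand in for
  whatever the oprofile backtrace code actually uses:

      /* copy one userspace stackframe into a kernel buffer; if any bytes were
       * left uncopied the frame wasn't readable, so the backtrace stops here */
      static int dump_user_frame(struct frame_head __user *head,
                                 struct frame_head *bufhead)
      {
              unsigned long left;

              left = __copy_from_user_inatomic(bufhead, head, sizeof(*bufhead));
              return left == 0;
      }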
* [PATCH] mm: rmap with inner ptlock  (Hugh Dickins, 2005-10-30)
  rmap's page_check_address descend without page_table_lock. First just pte_offset_map in
  case there's no pte present worth locking for, then take page_table_lock for the full
  check, and pass ptl back to caller in the same style as pte_offset_map_lock.
  __xip_unmap, page_referenced_one and try_to_unmap_one use pte_unmap_unlock.
  try_to_unmap_cluster also.

  page_check_address reformatted to avoid progressive indentation. No use is made of its
  one error code, return NULL when it fails.
  Signed-off-by: Hugh Dickins <hugh@veritas.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
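  The caller-side pattern this gives rmap, sketched from the description above (the
  surrounding variables are assumed context):

      /* page_check_address now takes the pte lock itself and hands it back */
      spinlock_t *ptl;
      pte_t *pte;

      pte = page_check_address(page, mm, address, &ptl);
      if (!pte)
              return 0;               /* page not mapped at this address */

      /* ... examine or update *pte while ptl is held ... */

      pte_unmap_unlock(pte, ptl);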
* [PATCH] mm: xip_unmap ZERO_PAGE fix  (Hugh Dickins, 2005-10-30)
  Small fix to the PageReserved patch: the mips ZERO_PAGE(address) depends on address, so
  __xip_unmap is wrong to initialize page with that before address is initialized; and in
  fact must re-evaluate it each iteration.
  Signed-off-by: Hugh Dickins <hugh@veritas.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] mm: unmap_vmas with inner ptlock  (Hugh Dickins, 2005-10-30)
  Remove the page_table_lock from around the calls to unmap_vmas, and replace the
  pte_offset_map in zap_pte_range by pte_offset_map_lock: all callers are now safe to
  descend without page_table_lock.

  Don't attempt fancy locking for hugepages, just take page_table_lock in
  unmap_hugepage_range. Which makes zap_hugepage_range, and the hugetlb test in
  zap_page_range, redundant: unmap_vmas calls unmap_hugepage_range anyway. Nor does
  unmap_vmas have much use for its mm arg now.

  The tlb_start_vma and tlb_end_vma in unmap_page_range are now called without
  page_table_lock: if they're implemented at all, they typically come down to
  flush_cache_range (usually done outside page_table_lock) and flush_tlb_range (which we
  already audited for the mprotect case).
  Signed-off-by: Hugh Dickins <hugh@veritas.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] mm: unlink vma before pagetables  (Hugh Dickins, 2005-10-30)
  In most places the descent from pgd to pud to pmd to pte holds mmap_sem (exclusively or
  not), which ensures that free_pgtables cannot be freeing page tables from any level at
  the same time. But truncation and reverse mapping descend without mmap_sem.

  No problem: just make sure that a vma is unlinked from its prio_tree (or nonlinear
  list) and from its anon_vma list, after zapping the vma, but before freeing its page
  tables. Then neither vmtruncate nor rmap can reach that vma whose page tables are now
  volatile (nor do they need to reach it, since all its page entries have been zapped by
  this stage).

  The i_mmap_lock and anon_vma->lock already serialize this correctly; but the locking
  hierarchy is such that we cannot take them while holding page_table_lock. Well, we're
  trying to push that down anyway. So in this patch, move anon_vma_unlink and
  unlink_file_vma into free_pgtables, at the same time as moving page_table_lock around
  calls to unmap_vmas.

  tlb_gather_mmu and tlb_finish_mmu then fall outside the page_table_lock, but we made
  them preempt_disable and preempt_enable earlier; and a long source audit of all the
  architectures has shown no problem with removing page_table_lock from them.
  free_pgtables doesn't need page_table_lock for itself, nor for what it calls;
  tlb->mm->nr_ptes is usually protected by page_table_lock, but partly by non-exclusive
  mmap_sem - here it's decremented with exclusive mmap_sem, or mm_users 0.
  update_hiwater_rss and vm_unacct_memory don't need page_table_lock either.
  Signed-off-by: Hugh Dickins <hugh@veritas.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] mm: flush_tlb_range outside ptlock  (Hugh Dickins, 2005-10-30)
  There was one small but very significant change in the previous patch: mprotect's
  flush_tlb_range fell outside the page_table_lock: as it is in 2.4, but that doesn't
  prove it safe in 2.6.

  On some architectures flush_tlb_range comes to the same as flush_tlb_mm, which has
  always been called from outside page_table_lock in dup_mmap, and is so proved safe.
  Others required a deeper audit: I could find no reliance on page_table_lock in any; but
  in ia64 and parisc found some code which looks a bit as if it might want preemption
  disabled. That won't do any actual harm, so pending a decision from the maintainers,
  disable preemption there.

  Remove comments on page_table_lock from flush_tlb_mm, flush_tlb_range and
  flush_tlb_page entries in cachetlb.txt: they were rather misleading (what generic code
  does is different from what usually happens), the rules are now changing, and it's not
  yet clear where we'll end up (will the generic tlb_flush_mmu happen always under lock?
  never under lock? or sometimes under and sometimes not?).
  Signed-off-by: Hugh Dickins <hugh@veritas.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] mm: pte_offset_map_lock loops  (Hugh Dickins, 2005-10-30)
  Convert those common loops using page_table_lock on the outside and pte_offset_map
  within to use just pte_offset_map_lock within instead.

  These all hold mmap_sem (some exclusively, some not), so at no level can a page table
  be whipped away from beneath them. But whereas pte_alloc loops tested with the "atomic"
  pmd_present, these loops are testing with pmd_none, which on i386 PAE tests both lower
  and upper halves.

  That's now unsafe, so add a cast into pmd_none to test only the vital lower half: we
  lose a little sensitivity to a corrupt middle directory, but not enough to worry about.
  It appears that i386 and UML were the only architectures vulnerable in this way, and
  pgd and pud no problem.
  Signed-off-by: Hugh Dickins <hugh@veritas.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
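  The shape of the conversion, as a generic sketch rather than any specific converted
  site:

      /* before: the whole walk sat under the mm-wide page_table_lock */
      spin_lock(&mm->page_table_lock);
      pte = pte_offset_map(pmd, addr);
      do {
              /* ... operate on *pte ... */
      } while (pte++, addr += PAGE_SIZE, addr != end);
      pte_unmap(pte - 1);
      spin_unlock(&mm->page_table_lock);

      /* after: the per-page-table lock is taken and dropped inside the walk */
      pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
      do {
              /* ... operate on *pte ... */
      } while (pte++, addr += PAGE_SIZE, addr != end);
      pte_unmap_unlock(pte - 1, ptl);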