author | Benjamin Herrenschmidt <benh@kernel.crashing.org> | 2011-07-25 20:12:32 -0400
---|---|---
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2011-07-25 23:57:11 -0400
commit | 2efaca927f5cd7ecd0f1554b8f9b6a9a2c329c03 (patch) |
tree | 1bea042a7c712e861d7734db59b3311375c439c3 /mm/memory.c |
parent | 72c4783210f77fd743f0a316858d33f27db51e7c (diff) |
mm/futex: fix futex writes on archs with SW tracking of dirty & young
I haven't reproduced it myself, but the failure scenario is that on such
machines (notably ARM and some embedded powerpc), if you manage to hit
that futex path on a writable page whose dirty bit has gone from the PTE,
you'll livelock inside the kernel as far as I can tell.
It will go into a loop of trying the atomic access, failing, trying gup to
"fix it up", getting success from gup, going back to the atomic access,
failing again because dirty wasn't fixed, etc.
So I think you essentially hang in the kernel.
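Roughly, that loop looks like the sketch below. This is an illustration only, not the literal kernel/futex.c code: futex_atomic_op_inuser() and the pre-patch get_user_pages() call are real, but the wrapper function is hypothetical and condensed.

```c
#include <linux/mm.h>
#include <linux/sched.h>
#include <asm/futex.h>

/*
 * Condensed illustration of the livelock described above; not the literal
 * futex code.  futex_write_livelock_sketch() is a hypothetical wrapper.
 */
static int futex_write_livelock_sketch(int encoded_op, u32 __user *uaddr)
{
	struct mm_struct *mm = current->mm;
	int ret;

	for (;;) {
		/*
		 * Atomic user write with pagefaults disabled: on an arch
		 * with SW dirty/young tracking, a clean-but-writable PTE
		 * makes this fail with -EFAULT.
		 */
		ret = futex_atomic_op_inuser(encoded_op, uaddr);
		if (ret != -EFAULT)
			return ret;

		/*
		 * Pre-patch "fixup": gup() succeeds because the VMA is
		 * writable, but it never calls handle_mm_fault(), so the
		 * dirty bit in the PTE stays clear and the next attempt
		 * faults again.
		 */
		down_read(&mm->mmap_sem);
		ret = get_user_pages(current, mm, (unsigned long)uaddr,
				     1, 1, 0, NULL, NULL);
		up_read(&mm->mmap_sem);
		if (ret < 0)
			return ret;
	}
}
```

Nothing in that loop ever changes the PTE, which is why it never terminates on the affected architectures.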
The scenario is probably rare-ish because the affected architectures are
embedded and tend not to swap much (if at all), so we probably rarely hit
the case where dirty or young is missing, but I think Shan has a piece of
SW that can reliably reproduce it using a shared writable mapping & fork
or something like that.
On archs that use SW tracking of dirty & young, a page without dirty is
effectively mapped read-only, and a page without young is inaccessible in
the PTE.
Additionally, some architectures might lazily flush the TLB when relaxing
write protection (by doing only a local flush), and expect a fault to
invalidate the stale entry if it's still present on another processor.
The futex code assumes that if the "in_atomic()" access -EFAULTs, it can
"fix it up" by calling get_user_pages(), which would then be equivalent to
taking the fault.
However, that isn't the case: get_user_pages() will not call
handle_mm_fault() when the PTE seems to have the right permissions,
regardless of the dirty and young state. It will eventually update those
bits ... in the struct page, but not in the PTE.
Additionally, it will not handle the lazy TLB flushing that can be
required by some architectures in the fault case.
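A heavily condensed sketch of why gup takes a shortcut here (this is not the real follow_page() code, just the shape of the check; vm_normal_page() and the pte accessors are real, the wrapper function is hypothetical):

```c
#include <linux/mm.h>

/*
 * Hypothetical condensation of gup's lookup: a present PTE that already
 * looks writable is simply returned, so handle_mm_fault() is never called
 * and the software dirty/young bits (and any stale TLB entry) stay as
 * they were.
 */
static struct page *gup_lookup_sketch(struct vm_area_struct *vma,
				      unsigned long address, pte_t pte,
				      bool write)
{
	if (!pte_present(pte))
		return NULL;		/* would fall back to a real fault */
	if (write && !pte_write(pte))
		return NULL;		/* would fall back to a real fault */

	/* "Permissions look fine": no fault taken, no PTE or TLB fixup. */
	return vm_normal_page(vma, address, pte);
}
```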
Basically, gup is the wrong interface for the job. The patch provides a
more appropriate one, which boils down to just calling handle_mm_fault(),
since what we are trying to do is simulate a real page fault.
The futex code currently attempts to write to user memory within a
pagefault disabled section, and if that fails, tries to fix it up using
get_user_pages().
This doesn't work on archs where the dirty and young bits are maintained
by software, since they will gate access permission in the TLB, and will
not be updated by gup().
In addition, there's an expectation on some archs that a spurious write
fault triggers a local TLB flush, and that is missing from the picture as
well.
I decided that adding those "features" to gup() would be too much for this
already too complex function, and instead added a new, simpler
fixup_user_fault(), which is essentially a wrapper around handle_mm_fault()
that the futex code can call.
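The futex side of that is not visible in this mm/memory.c-only view, but it amounts to replacing the get_user_pages() call in fault_in_user_writeable() with the new helper, roughly like this (a sketch of the kernel/futex.c change, using the mmap_sem naming of that era):

```c
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/futex.h>

/*
 * Sketch of kernel/futex.c's fault_in_user_writeable() after this change:
 * take mmap_sem for read, then let fixup_user_fault() run
 * handle_mm_fault() with FAULT_FLAG_WRITE, updating the PTE for real.
 */
static int fault_in_user_writeable(u32 __user *uaddr)
{
	struct mm_struct *mm = current->mm;
	int ret;

	down_read(&mm->mmap_sem);
	ret = fixup_user_fault(current, mm, (unsigned long)uaddr,
			       FAULT_FLAG_WRITE);
	up_read(&mm->mmap_sem);

	return ret < 0 ? ret : 0;
}
```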
[akpm@linux-foundation.org: coding-style fixes]
[akpm@linux-foundation.org: fix some nits Darren saw, fiddle comment layout]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reported-by: Shan Hai <haishan.bai@gmail.com>
Tested-by: Shan Hai <haishan.bai@gmail.com>
Cc: David Laight <David.Laight@ACULAB.COM>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Darren Hart <darren.hart@intel.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/memory.c')
-rw-r--r-- | mm/memory.c | 58
1 files changed, 57 insertions, 1 deletions
```diff
diff --git a/mm/memory.c b/mm/memory.c
index 3c9f3aa8332e..a56e3ba816b2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1805,7 +1805,63 @@ next_page:
 }
 EXPORT_SYMBOL(__get_user_pages);
 
-/**
+/*
+ * fixup_user_fault() - manually resolve a user page fault
+ * @tsk:	the task_struct to use for page fault accounting, or
+ *		NULL if faults are not to be recorded.
+ * @mm:		mm_struct of target mm
+ * @address:	user address
+ * @fault_flags:flags to pass down to handle_mm_fault()
+ *
+ * This is meant to be called in the specific scenario where for locking reasons
+ * we try to access user memory in atomic context (within a pagefault_disable()
+ * section), this returns -EFAULT, and we want to resolve the user fault before
+ * trying again.
+ *
+ * Typically this is meant to be used by the futex code.
+ *
+ * The main difference with get_user_pages() is that this function will
+ * unconditionally call handle_mm_fault() which will in turn perform all the
+ * necessary SW fixup of the dirty and young bits in the PTE, while
+ * handle_mm_fault() only guarantees to update these in the struct page.
+ *
+ * This is important for some architectures where those bits also gate the
+ * access permission to the page because they are maintained in software.  On
+ * such architectures, gup() will not be enough to make a subsequent access
+ * succeed.
+ *
+ * This should be called with the mm_sem held for read.
+ */
+int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
+		     unsigned long address, unsigned int fault_flags)
+{
+	struct vm_area_struct *vma;
+	int ret;
+
+	vma = find_extend_vma(mm, address);
+	if (!vma || address < vma->vm_start)
+		return -EFAULT;
+
+	ret = handle_mm_fault(mm, vma, address, fault_flags);
+	if (ret & VM_FAULT_ERROR) {
+		if (ret & VM_FAULT_OOM)
+			return -ENOMEM;
+		if (ret & (VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE))
+			return -EHWPOISON;
+		if (ret & VM_FAULT_SIGBUS)
+			return -EFAULT;
+		BUG();
+	}
+	if (tsk) {
+		if (ret & VM_FAULT_MAJOR)
+			tsk->maj_flt++;
+		else
+			tsk->min_flt++;
+	}
+	return 0;
+}
+
+/*
  * get_user_pages() - pin user pages in memory
  * @tsk:	the task_struct to use for page fault accounting, or
  *		NULL if faults are not to be recorded.
```