path: root/fs/proc/vmcore.c
* vmcore: fix PT_NOTE n_namesz, n_descsz overflow issue (WANG Chao, 2015-02-17)

  When updating the PT_NOTE header size (i.e. p_memsz), an overflow happens with the following bogus note entry:

      n_namesz = 0xFFFFFFFF
      n_descsz = 0x0
      n_type   = 0x0

  This kind of note entry should be dropped while updating p_memsz. But because n_namesz is 32-bit, after (n_namesz + 3) & (~3) it overflows to 0x0, so the note entry size looks sane and the entry is kept. When user space (e.g. the crash utility) then tries to access such a bogus note, it can lead to unexpected behavior (e.g. the crash utility segfaulting because it reads a bogus address).

  The source of the bogus note hasn't been identified yet. At least we can drop the bogus note so user space isn't surprised.

  Signed-off-by: WANG Chao <chaowang@redhat.com>
  Cc: Dave Anderson <anderson@redhat.com>
  Cc: Baoquan He <bhe@redhat.com>
  Cc: Randy Wright <rwright@hp.com>
  Cc: Vivek Goyal <vgoyal@redhat.com>
  Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
  Cc: Fabian Frederick <fabf@skynet.be>
  Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
  Cc: Rashika Kheria <rashika.kheria@gmail.com>
  Cc: Greg Pearson <greg.pearson@hp.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
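  A minimal user-space sketch of the rounding overflow described above (illustrative only; the actual fix drops such an entry instead of counting it):

      #include <stdio.h>
      #include <stdint.h>

      int main(void)
      {
              uint32_t n_namesz = 0xFFFFFFFFu;

              /* 32-bit round-up: 0xFFFFFFFF + 3 wraps to 2, & ~3 gives 0. */
              uint32_t padded32 = (n_namesz + 3) & ~(uint32_t)3;

              /* The same round-up done in 64 bits keeps the real (huge) size visible. */
              uint64_t padded64 = ((uint64_t)n_namesz + 3) & ~(uint64_t)3;

              printf("32-bit round-up: 0x%x\n", padded32);             /* 0x0 */
              printf("64-bit round-up: 0x%llx\n",
                     (unsigned long long)padded64);                    /* 0x100000000 */
              return 0;
      }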
* fs/proc/vmcore.c:mmap_vmcore: skip non-ram pages reported by hypervisors (Vitaly Kuznetsov, 2014-08-08)

  We have a special check in the read_vmcore() handler to check whether a page was reported as ram or not by the hypervisor (pfn_is_ram()). However, when vmcore is read with mmap() no such check is performed. That can lead to unpredictable results, e.g. when running a Xen PVHVM guest, memcpy() after mmap() on /proc/vmcore will hang processing HVMMEM_mmio_dm pages, creating enormous load in both DomU and Dom0.

  Fix the issue by mapping each non-ram page to the zero page. Keep the direct path with remap_oldmem_pfn_range() to avoid looping through all pages on bare metal.

  The issue could also be solved by overriding remap_oldmem_pfn_range() in xen-specific code, as remap_oldmem_pfn_range() was designed for. That, however, would involve a non-obvious xen code path for all x86 builds with CONFIG_XEN_PVHVM=y and would prevent all other hypervisor-specific code on the x86 arch from doing the same override.

  [fengguang.wu@intel.com: remap_oldmem_pfn_checked() can be static]
  [akpm@linux-foundation.org: clean up layout]
  Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
  Reviewed-by: Andrew Jones <drjones@redhat.com>
  Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
  Acked-by: Vivek Goyal <vgoyal@redhat.com>
  Cc: David Vrabel <david.vrabel@citrix.com>
  Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
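  A rough user-space model of the split described above (stubbed helpers; the real code does this inside mmap_vmcore() with remap_oldmem_pfn_range() and the zero page):

      #include <stdio.h>

      /* Stub: pretend pfns 0x100-0x103 are hypervisor mmio, everything else ram. */
      static int pfn_is_ram(unsigned long pfn)
      {
              return !(pfn >= 0x100 && pfn <= 0x103);
      }

      /* Stubs standing in for remap_oldmem_pfn_range() / zero-page mapping. */
      static void remap_ram(unsigned long from, unsigned long count)
      {
              printf("remap ram  pfn %#lx + %lu pages\n", from, count);
      }
      static void map_zero_page(unsigned long pfn)
      {
              printf("zero page  pfn %#lx\n", pfn);
      }

      int main(void)
      {
              unsigned long start = 0xfe, end = 0x106, pos = start;

              /* pfn == end acts as a sentinel that flushes the pending ram run. */
              for (unsigned long pfn = start; pfn <= end; pfn++) {
                      if (pfn == end || !pfn_is_ram(pfn)) {
                              if (pfn > pos)
                                      remap_ram(pos, pfn - pos);
                              if (pfn < end)
                                      map_zero_page(pfn);
                              pos = pfn + 1;
                      }
              }
              return 0;
      }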
* fs/proc/vmcore.c: remove NULL assignment to static (Fabian Frederick, 2014-06-06)

  Static values are automatically initialized to NULL.

  Signed-off-by: Fabian Frederick <fabf@skynet.be>
  Cc: Vivek Goyal <vgoyal@redhat.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* vmcore: continue vmcore initialization if PT_NOTE is found empty (WANG Chao, 2014-04-07)

  Currently, when an empty PT_NOTE is detected, vmcore initialization fails. That is too harsh, because PT_NOTE can legitimately be empty: for example, one offlined a cpu but never restarted the kdump service, so after the crash the PT_NOTE program header is there but contains no data.

  It's better to warn about the empty PT_NOTE and continue to initialize vmcore. Ultimately the multiple PT_NOTE headers are merged into a single one and all empty PT_NOTE entries are naturally discarded during the merge, so an empty PT_NOTE is not visible to user space and vmcore is as good as expected.

  Signed-off-by: WANG Chao <chaowang@redhat.com>
  Cc: Vivek Goyal <vgoyal@redhat.com>
  Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
  Cc: Greg Pearson <greg.pearson@hp.com>
  Cc: Baoquan He <bhe@redhat.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* include/linux/crash_dump.h: add vmcore_cleanup() prototype (Rashika Kheria, 2014-04-07)

  Eliminate the following warning in proc/vmcore.c:

      fs/proc/vmcore.c:1088:6: warning: no previous prototype for `vmcore_cleanup' [-Wmissing-prototypes]

  [akpm@linux-foundation.org: clean up powerpc, remove unneeded EXPORT_SYMBOL]
  Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>
  Reviewed-by: Josh Triplett <josh@joshtriplett.org>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* vmcore: prevent PT_NOTE p_memsz overflow during header update (Greg Pearson, 2014-02-10)

  Currently, update_note_header_size_elf64() and update_note_header_size_elf32() will add the size of a PT_NOTE entry to real_sz even if that causes real_sz to exceed max_sz. This patch corrects the while loop logic in those routines to ensure that does not happen, and prints a warning if a PT_NOTE entry is dropped. If zero PT_NOTE entries are found, or this condition is encountered because the only entry was dropped, a warning is printed and an error is returned.

  One possible negative side effect of exceeding the max_sz limit is an allocation failure in merge_note_headers_elf64() or merge_note_headers_elf32(), which would produce console output such as the following while booting the crash kernel:

      vmalloc: allocation failure: 14076997632 bytes
      swapper/0: page allocation failure: order:0, mode:0x80d2
      CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.10.0-gbp1 #7
      Call Trace:
        dump_stack+0x19/0x1b
        warn_alloc_failed+0xf0/0x160
        __vmalloc_node_range+0x19e/0x250
        vmalloc_user+0x4c/0x70
        merge_note_headers_elf64.constprop.9+0x116/0x24a
        vmcore_init+0x2d4/0x76c
        do_one_initcall+0xe2/0x190
        kernel_init_freeable+0x17c/0x207
        kernel_init+0xe/0x180
        ret_from_fork+0x7c/0xb0

      Kdump: vmcore not initialized

      kdump: dump target is /dev/sda4
      kdump: saving to /sysroot//var/crash/127.0.0.1-2014.01.28-13:58:52/
      kdump: saving vmcore-dmesg.txt
      Cannot open /proc/vmcore: No such file or directory
      kdump: saving vmcore-dmesg.txt failed
      kdump: saving vmcore
      kdump: saving vmcore failed

  This type of failure has been seen on a four socket prototype system with certain memory configurations. Most PT_NOTE sections have a single entry similar to:

      n_namesz = 0x5
      n_descsz = 0x150
      n_type   = 0x1

  Occasionally, a second entry is encountered with very large n_namesz and n_descsz sizes:

      n_namesz = 0x80000008
      n_descsz = 0x510ae163
      n_type   = 0x80000008

  The source of these extra entries is not yet known; they seem bogus, but they shouldn't cause the crash dump to fail.

  Signed-off-by: Greg Pearson <greg.pearson@hp.com>
  Acked-by: Vivek Goyal <vgoyal@redhat.com>
  Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
  Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
  Cc: "Eric W. Biederman" <ebiederm@xmission.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
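  A small user-space sketch of the corrected loop bound (names mirror the description above; the real routines walk the actual ELF note buffer):

      #include <stdio.h>
      #include <stdint.h>

      struct note { uint64_t sz; };   /* already-computed size of one PT_NOTE entry */

      /* Accumulate entry sizes, but never let real_sz grow past max_sz. */
      static uint64_t update_note_size(const struct note *notes, int n, uint64_t max_sz)
      {
              uint64_t real_sz = 0;
              for (int i = 0; i < n; i++) {
                      if (real_sz + notes[i].sz > max_sz) {
                              fprintf(stderr, "warning: dropping PT_NOTE entry %d\n", i);
                              break;
                      }
                      real_sz += notes[i].sz;
              }
              return real_sz;
      }

      int main(void)
      {
              struct note notes[] = { { 0x160 }, { 0xd10ae170ULL } };
              uint64_t real_sz = update_note_size(notes, 2, 4096);

              if (real_sz == 0) {     /* nothing usable: fail initialization */
                      fprintf(stderr, "error: no usable PT_NOTE entries\n");
                      return 1;
              }
              printf("real_sz = %#llx\n", (unsigned long long)real_sz);
              return 0;
      }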
* fs/proc: don't use module_init for non-modular core code (Paul Gortmaker, 2014-01-23)

  PROC_FS is a bool, so this code is either present or absent. It will never be modular, so using module_init as an alias for __initcall is rather misleading.

  Fix this up now, so that we can relocate module_init from init.h into module.h in the future. If we don't do this, we'd have to add module.h to obviously non-modular code, and that would be ugly at best.

  Note that direct use of __initcall is discouraged, vs. one of the priority categorized subgroups. As __initcall gets mapped onto device_initcall, our use of fs_initcall (which makes sense for fs code) will thus change these registrations from level 6-device to level 5-fs (i.e. slightly earlier). However no observable impact of that small difference has been observed during testing, or is expected.

  Also note that this change uncovers a missing semicolon bug in the registration of vmcore_init as an initcall.

  Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* vmcore: enable /proc/vmcore mmap for s390 (Michael Holzheu, 2013-09-11)

  The patch "s390/vmcore: Implement remap_oldmem_pfn_range for s390" now allows mmap to be used on s390 as well, so enable mmap for s390 again.

  Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
  Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
  Cc: Jan Willeke <willeke@de.ibm.com>
  Cc: Vivek Goyal <vgoyal@redhat.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* vmcore: introduce remap_oldmem_pfn_range() (Michael Holzheu, 2013-09-11)

  For zfcpdump we can't map the HSA storage because it is only available via a read interface. Therefore, for the new vmcore mmap feature we have to introduce a new mechanism to create mappings on demand.

  This patch introduces a new architecture function remap_oldmem_pfn_range() that should be used to create mappings with remap_pfn_range() for oldmem areas that can be directly mapped. For zfcpdump this is everything besides the HSA memory. For the areas that are not mapped by remap_oldmem_pfn_range(), a new generic vmcore fault handler, mmap_vmcore_fault(), is called. This handler works as follows:

  * Get an already available or new page from the page cache (find_or_create_page)
  * Check if the /proc/vmcore page is filled with data (PageUptodate)
  * If yes: return that page
  * If no: fill the page using __vmcore_read(), set PageUptodate, and return the page

  Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
  Acked-by: Vivek Goyal <vgoyal@redhat.com>
  Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
  Cc: Jan Willeke <willeke@de.ibm.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
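  A structural sketch of that fault handler, following the steps listed above (kernel-style; __vmcore_read() is the internal read helper referred to in the entry, and the exact error handling here is illustrative):

      static int mmap_vmcore_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
      {
              struct address_space *mapping = vma->vm_file->f_mapping;
              struct page *page;
              loff_t offset;
              int rc;

              /* Step 1: get an existing or freshly allocated page-cache page. */
              page = find_or_create_page(mapping, vmf->pgoff, GFP_KERNEL);
              if (!page)
                      return VM_FAULT_OOM;

              /* Steps 2 and 4: if the page holds no vmcore data yet, read it in. */
              if (!PageUptodate(page)) {
                      offset = (loff_t)vmf->pgoff << PAGE_SHIFT;
                      rc = __vmcore_read(page_address(page), PAGE_SIZE, &offset, 0);
                      if (rc < 0) {
                              unlock_page(page);
                              page_cache_release(page);
                              return VM_FAULT_SIGBUS;
                      }
                      SetPageUptodate(page);
              }

              /* Step 3: hand the up-to-date page back to the fault path. */
              unlock_page(page);
              vmf->page = page;
              return 0;
      }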
* vmcore: introduce ELF header in new memory feature (Michael Holzheu, 2013-09-11)

  For s390 we want to use /proc/vmcore for our SCSI stand-alone dump (zfcpdump). We have support where the first HSA_SIZE bytes are saved into a hypervisor owned memory area (HSA) before the kdump kernel is booted. When the kdump kernel starts, it is restricted to use only HSA_SIZE bytes.

  The advantages of this mechanism are:

  * No crashkernel memory has to be defined in the old kernel.
  * Early boot problems (before kexec_load has been done) can be dumped.
  * Non-Linux systems can be dumped.

  We modify the s390 copy_oldmem_page() function to read from the HSA memory if memory below HSA_SIZE bytes is requested.

  Since we cannot use the kexec tool to load the kernel in this scenario, we have to build the ELF header in the 2nd (kdump/new) kernel. So with the following patch set we would like to introduce a new mechanism so that the ELF header for /proc/vmcore can be created in 2nd kernel memory.

  The following steps are done during zfcpdump execution:

  1.  Production system crashes
  2.  User boots a SCSI disk that has been prepared with the zfcpdump tool
  3.  Hypervisor saves CPU state of boot CPU and HSA_SIZE bytes of memory into HSA
  4.  Boot loader loads kernel into low memory area
  5.  Kernel boots and uses only HSA_SIZE bytes of memory
  6.  Kernel saves registers of non-boot CPUs
  7.  Kernel does memory detection for dump memory map
  8.  Kernel creates ELF header for /proc/vmcore
  9.  /proc/vmcore uses this header for initialization
  10. The zfcpdump user space reads /proc/vmcore to write the dump to SCSI disk
      - copy_oldmem_page() copies from HSA for memory below HSA_SIZE
      - copy_oldmem_page() copies from real memory for memory above HSA_SIZE

  Currently for s390 we create the ELF core header in the 2nd kernel with a small trick: we relocate the addresses in the ELF header in a way that for the /proc/vmcore code it seems to be in the 1st kernel (old) memory, and read_from_oldmem() returns the correct data. This allows the /proc/vmcore code to use the ELF header in the 2nd kernel.

  This patch: exchange the old mechanism with the new and much cleaner function call override feature that now officially allows creating the ELF core header in the 2nd kernel. To use the new feature, the following functions have to be defined by the architecture backend code to read from new memory:

  * elfcorehdr_alloc: Allocate ELF header
  * elfcorehdr_free: Free the memory of the ELF header
  * elfcorehdr_read: Read from ELF header
  * elfcorehdr_read_notes: Read from ELF notes

  Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
  Acked-by: Vivek Goyal <vgoyal@redhat.com>
  Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
  Cc: Jan Willeke <willeke@de.ibm.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
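  The override hooks follow the usual weak-symbol pattern; a hedged sketch of how a generic default and an architecture override fit together (bodies are illustrative, not quoted from mainline):

      /* Generic default in fs/proc/vmcore.c: read the header from old memory. */
      ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
      {
              return read_from_oldmem(buf, count, ppos, 0);
      }

      /* s390 (or any arch) overrides the weak symbol in its own crash-dump
       * code and serves the ELF header from a buffer it built in the 2nd
       * kernel instead, e.g.:
       *
       *     ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
       *     {
       *             memcpy(buf, hdr_buf + *ppos, count);   // hdr_buf: arch-private buffer
       *             *ppos += count;
       *             return count;
       *     }
       */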
* s390/kdump: Disable mmap for s390 (Michael Holzheu, 2013-07-18)

  The kdump mmap patch series (git commit 83086978c63afd7c73e1c) directly maps the PT_LOADs to memory. On s390 this does not work because the copy_from_oldmem() function swaps [0, crashkernel size] with [crashkernel base, crashkernel base + crashkernel size]. The swap in copy_from_oldmem() was done in order to correctly implement /dev/oldmem.

  See: http://marc.info/?l=kexec&m=136940802511603&w=2

  Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* vmcore: support mmap() on /proc/vmcore (HATAYAMA Daisuke, 2013-07-03)

  This patch introduces mmap_vmcore(). Neither writable nor executable mappings are permitted, even via mprotect(), because this mmap() is aimed at reading crash dump memory. A non-writable mapping is also a requirement of remap_pfn_range() when mapping linear pages on non-consecutive physical pages; see is_cow_mapping().

  Set the VM_MIXEDMAP flag to remap memory by remap_pfn_range and by remap_vmalloc_range_partial at the same time for a single vma. do_munmap() can correctly clean a partially remapped vma with the two functions in the abnormal case; see zap_pte_range(), vm_normal_page() and their comments for details.

  On x86-32 PAE kernels, mmap() supports at most 16TB memory only. This limitation comes from the fact that the third argument of remap_pfn_range(), pfn, is of 32-bit length on x86-32: unsigned long.

  [akpm@linux-foundation.org: use min(), switch to conventional error-unwinding approach]
  Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
  Acked-by: Vivek Goyal <vgoyal@redhat.com>
  Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
  Cc: Lisa Mitchell <lisa.mitchell@hp.com>
  Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
  Tested-by: Maxim Uvarov <muvarov@gmail.com>
  Cc: Arnd Bergmann <arnd@arndb.de>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
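  A hedged sketch of the entry checks this implies (flag handling only; the actual PT_LOAD/note remapping is omitted, and vmcore_size is assumed to be the total dump size computed at init):

      static int mmap_vmcore(struct file *file, struct vm_area_struct *vma)
      {
              size_t size = vma->vm_end - vma->vm_start;
              u64 start = (u64)vma->vm_pgoff << PAGE_SHIFT;

              if (start + size > vmcore_size)         /* stay inside the dump */
                      return -EINVAL;

              /* Read-only dump: refuse writable or executable mappings ... */
              if (vma->vm_flags & (VM_WRITE | VM_EXEC))
                      return -EPERM;
              /* ... and make sure mprotect() cannot add them back later. */
              vma->vm_flags &= ~(VM_MAYWRITE | VM_MAYEXEC);

              /* Pages come from both remap_pfn_range() and vmalloc remaps. */
              vma->vm_flags |= VM_MIXEDMAP;

              /* ... remap ELF headers, note buffer and PT_LOAD chunks here ... */
              return 0;
      }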
* vmcore: calculate vmcore file size from buffer size and total size of vmcore objects (HATAYAMA Daisuke, 2013-07-03)

  The previous patches newly added holes before each chunk of memory and the holes need to be counted in the vmcore file size. There are two ways to count the file size:

  1) Suppose m is a pointer to the last vmcore object in vmcore_list. Then the file size is (m->offset + m->size), or
  2) Calculate the sum of the sizes of the buffers for the ELF header, program headers, ELF note segments and the objects in vmcore_list.

  Although 1) is more direct and simpler than 2), 2) seems better in that it reflects the internal object structure of /proc/vmcore. Thus, this patch changes get_vmcore_size_elf{64,32} so that it calculates the size in the way of 2). As a result, both get_vmcore_size_elf{64,32} have the same definition; merge them as get_vmcore_size.

  Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
  Acked-by: Vivek Goyal <vgoyal@redhat.com>
  Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
  Cc: Lisa Mitchell <lisa.mitchell@hp.com>
  Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
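  Way 2) amounts to a simple accumulation; a sketch, with parameter and field names patterned on the description above rather than copied from the final code:

      static u64 get_vmcore_size(size_t elfsz, size_t elfnotesegsz,
                                 struct list_head *vc_list)
      {
              struct vmcore *m;
              u64 size = elfsz + elfnotesegsz;   /* ELF headers + merged note segment */

              list_for_each_entry(m, vc_list, list)
                      size += m->size;           /* each page-aligned memory chunk */
              return size;
      }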
* vmcore: allow user process to remap ELF note segment buffer (HATAYAMA Daisuke, 2013-07-03)

  Now the ELF note segment has been copied into a buffer in vmalloc memory. To allow a user process to remap the ELF note segment buffer with remap_vmalloc_page, the corresponding VM area object has to have the VM_USERMAP flag set.

  [akpm@linux-foundation.org: use the conventional comment layout]
  Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
  Acked-by: Vivek Goyal <vgoyal@redhat.com>
  Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
  Cc: Lisa Mitchell <lisa.mitchell@hp.com>
  Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* vmcore: allocate ELF note segment in the 2nd kernel vmalloc memory (HATAYAMA Daisuke, 2013-07-03)

  The reasons why we don't allocate the ELF note segment in the 1st kernel (old memory) on a page boundary are to keep backward compatibility for old kernels, and that doing so would waste quite a bit of memory due to the round-up to page boundary, since most of the buffers are in per-cpu areas.

  ELF notes are per-cpu, so the total size of the ELF note segments depends on the number of CPUs. The current maximum number of CPUs on x86_64 is 5192, and there's already a system with 4192 CPUs in SGI, where the total size amounts to 1MB. This can be larger in the near future, or possibly even now on another architecture with a larger per-cpu note size. Thus, to avoid the case where memory allocation for a large block fails, we allocate vmcore objects on vmalloc memory.

  This patch adds the elfnotes_buf and elfnotes_sz variables to keep a pointer to the ELF note segment buffer and its size. There's no longer a vmcore object that corresponds to the ELF note segment in vmcore_list. Accordingly, read_vmcore() has a new case for the ELF note segment, and set_vmcore_list_offsets_elf{64,32}() and other helper functions start calculating offsets from the sum of the size of the ELF headers and the size of the ELF note segment.

  [akpm@linux-foundation.org: use min(), fix error-path vzalloc() leaks]
  Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
  Acked-by: Vivek Goyal <vgoyal@redhat.com>
  Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
  Cc: Lisa Mitchell <lisa.mitchell@hp.com>
  Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* vmcore: treat memory chunks referenced by PT_LOAD program header entries in page-size boundary in vmcore_list (HATAYAMA Daisuke, 2013-07-03)

  Treat memory chunks referenced by PT_LOAD program header entries in page-size boundary in vmcore_list. Formally, for each range [start, end], we set up the corresponding vmcore object in vmcore_list to [rounddown(start, PAGE_SIZE), roundup(end, PAGE_SIZE)].

  This change affects the layout of /proc/vmcore. The gaps generated by the rearrangement are newly made visible to applications as holes. Concretely, they are the two ranges [rounddown(start, PAGE_SIZE), start] and [end, roundup(end, PAGE_SIZE)].

  Suppose variable m points at a vmcore object in vmcore_list, and variable phdr points at the program header of PT_LOAD type the variable m corresponds to. Then, pictorially:

      m->offset                     +---------------+
                                    |     hole      |
      phdr->p_offset =              +---------------+
      m->offset + (paddr - start)   |               |\
                                    | kernel memory | phdr->p_memsz
                                    |               |/
                                    +---------------+
                                    |     hole      |
      m->offset + m->size           +---------------+

  where m->offset and m->offset + m->size are always page-size aligned.

  Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
  Acked-by: Vivek Goyal <vgoyal@redhat.com>
  Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
  Cc: Lisa Mitchell <lisa.mitchell@hp.com>
  Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
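  A small arithmetic sketch of the rounding and the resulting p_offset (user-space model with hypothetical addresses):

      #include <stdio.h>
      #include <stdint.h>

      #define PAGE_SIZE    4096ULL
      #define ROUNDDOWN(x) ((x) & ~(PAGE_SIZE - 1))
      #define ROUNDUP(x)   (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

      int main(void)
      {
              /* One PT_LOAD chunk of old memory (hypothetical, not page aligned). */
              uint64_t start = 0x100000a00ULL, end = 0x100042b80ULL;
              uint64_t m_offset = 0x10000ULL;   /* page-aligned file offset of this object */

              uint64_t obj_start = ROUNDDOWN(start);
              uint64_t obj_end   = ROUNDUP(end);
              uint64_t m_size    = obj_end - obj_start;

              /* The PT_LOAD data itself starts after the leading hole. */
              uint64_t p_offset  = m_offset + (start - obj_start);

              printf("object: [%#llx, %#llx), size %#llx\n",
                     (unsigned long long)obj_start, (unsigned long long)obj_end,
                     (unsigned long long)m_size);
              printf("phdr->p_offset = %#llx\n", (unsigned long long)p_offset);
              return 0;
      }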
* vmcore: allocate buffer for ELF headers on page-size alignment (HATAYAMA Daisuke, 2013-07-03)

  Allocate ELF headers on a page-size boundary using __get_free_pages() instead of kmalloc().

  A later patch will merge PT_NOTE entries into a single unique one and decrease the buffer size actually used. Keep the original buffer size in the variable elfcorebuf_sz_orig in order to free the buffer later, and keep the actually used buffer size, rounded up to a page-size boundary, in the variable elfcorebuf_sz. The size of the part of the ELF buffer exported from /proc/vmcore is elfcorebuf_sz. The range left by the merged and removed PT_NOTE entries, i.e. [elfcorebuf_sz, elfcorebuf_sz_orig], is filled with 0.

  Use the size of the ELF headers as the initial offset value in set_vmcore_list_offsets_elf{64,32} and process_ptload_program_headers_elf{64,32} in order to indicate that the offset includes the holes towards the page boundary. As a result, both set_vmcore_list_offsets_elf{64,32} have the same definition; merge them as set_vmcore_list_offsets.

  [akpm@linux-foundation.org: add free_elfcorebuf(), cleanups]
  Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
  Acked-by: Vivek Goyal <vgoyal@redhat.com>
  Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
  Cc: Lisa Mitchell <lisa.mitchell@hp.com>
  Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* vmcore: clean up read_vmcore() (HATAYAMA Daisuke, 2013-07-03)

  Rewrite the part of read_vmcore() that reads objects in vmcore_list in the same way as the part that reads the ELF headers, which removes some duplicated and redundant code.

  Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
  Acked-by: Vivek Goyal <vgoyal@redhat.com>
  Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
  Cc: Lisa Mitchell <lisa.mitchell@hp.com>
  Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* proc: Supply a function to remove a proc entry by PDE (David Howells, 2013-05-01)

  Supply a function (proc_remove()) to remove a proc entry (and any subtree rooted there) by proc_dir_entry pointer rather than by name and (optionally) root dir entry pointer. This allows us to eliminate all remaining pde->name accesses outside of procfs.

  Signed-off-by: David Howells <dhowells@redhat.com>
  Acked-by: Grant Likely <grant.likely@linaro.org>
  cc: linux-acpi@vger.kernel.org
  cc: openipmi-developer@lists.sourceforge.net
  cc: devicetree-discuss@lists.ozlabs.org
  cc: linux-pci@vger.kernel.org
  cc: netdev@vger.kernel.org
  cc: netfilter-devel@vger.kernel.org
  cc: alsa-devel@alsa-project.org
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* proc: Split kcore bits from linux/procfs.h into linux/kcore.h (David Howells, 2013-04-29)

  Split kcore bits from linux/procfs.h into linux/kcore.h.

  Signed-off-by: David Howells <dhowells@redhat.com>
  Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Acked-by: Ralf Baechle <ralf@linux-mips.org>
  cc: linux-mips@linux-mips.org
  cc: sparclinux@vger.kernel.org
  cc: x86@kernel.org
  cc: linux-mm@kvack.org
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* fs/proc/vmcore.c: put if tests in the top of the while loop to reduce duplication (Zhang Yanfei, 2013-02-27)

  In read_vmcore() two `if' tests are duplicated. Changing their position reduces the duplication. This change does not affect the behaviour of the function.

  [akpm@linux-foundation.org: avoid `if (foo = bar)' thing, use min_t()]
  [akpm@linux-foundation.org: s/max_t/min_t/]
  Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
  Cc: "Eric W. Biederman" <ebiederm@xmission.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* fs/proc: clean up printks (Andrew Morton, 2013-02-27)

  - use pr_foo() throughout
  - remove a couple of duplicated KERN_WARNINGs, via WARN(KERN_WARNING "...")
  - nuke a few warnings which I've never seen happen, ever.

  Cc: Joe Perches <joe@perches.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* fadump: Introduce cleanup routine to invalidate /proc/vmcore. (Mahesh Salgaonkar, 2012-02-22)

  With firmware-assisted dump support we don't require a reboot when we are in the second kernel after a crash. The second kernel after the crash is a normal kernel boot and has knowledge about the entire system RAM, with the page tables initialized for the entire system RAM. Hence once the dump is saved to disk, we can just release the reserved memory area for general use and continue with the second kernel as the production kernel.

  However, once we release the reserved memory that contains the dump data, '/proc/vmcore' will not be valid anymore. Hence this patch introduces a cleanup routine that invalidates and removes the /proc/vmcore file. This routine will be invoked before we release the reserved dump memory area.

  Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* fs: add export.h to files using EXPORT_SYMBOL/THIS_MODULE macros (Paul Gortmaker, 2011-10-31)

  These files were getting <linux/module.h> via an implicit include path, but we want to crush those out of existence since they cost time during compiles of processing thousands of lines of headers for no reason. Give them the lightweight header that just contains the EXPORT_SYMBOL infrastructure.

  Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
* fs/proc/vmcore.c: add hook to read_from_oldmem() to check for non-ram pages (Olaf Hering, 2011-05-26)

  The balloon driver in a Xen guest frees guest pages and marks them as mmio. When the kernel crashes and the crash kernel attempts to read the oldmem via /proc/vmcore, a read from ballooned pages will generate 100% load in dom0 because Xen asks qemu-dm for the page content. Since the reads come in as 8-byte requests, each ballooned page is tried 512 times.

  With this change a hook can be registered which checks whether the given pfn is really ram. The hook has to return a value > 0 for ram pages, a value < 0 on error (because the hypercall is not known) and 0 for non-ram pages.

  This will reduce the time to read /proc/vmcore. Without this change a 512M guest with a 128M crashkernel region needs 200 seconds to read it, with this change it takes just 2 seconds.

  Signed-off-by: Olaf Hering <olaf@aepfle.de>
  Cc: Alexey Dobriyan <adobriyan@gmail.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
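  A hedged sketch of the hook contract this describes (registration details and the typedef are illustrative, patterned on the entry above):

      /* Hook: > 0 => ram, 0 => not ram, < 0 => cannot tell (e.g. no such hypercall). */
      typedef int (*pfn_is_ram_fn)(unsigned long pfn);

      static pfn_is_ram_fn oldmem_pfn_is_ram;

      int register_oldmem_pfn_is_ram(pfn_is_ram_fn fn)
      {
              if (oldmem_pfn_is_ram)
                      return -EBUSY;
              oldmem_pfn_is_ram = fn;
              return 0;
      }

      static int pfn_is_ram(unsigned long pfn)
      {
              /* No hook registered: assume ram, as on bare metal. */
              if (!oldmem_pfn_is_ram)
                      return 1;
              /* Both "ram" and "cannot tell" fall back to actually reading the page. */
              return oldmem_pfn_is_ram(pfn) != 0;
      }

      /* read_from_oldmem() then zero-fills the buffer instead of copying
       * whenever pfn_is_ram() returns 0 for the page in question. */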
* ARM: 6485/5: proc/vmcore - allow archs to override vmcore_elf_check_arch() (Mika Westerberg, 2010-11-30)

  Allow architectures to redefine this macro if needed. This is useful for example on architectures where 64-bit ELF vmcores are not supported. Specifying zero for vmcore_elf64_check_arch() allows the compiler to optimize away unnecessary parts of parse_crash_elf64_headers().

  We also rename the macro to vmcore_elf64_check_arch() to reflect that it is used for 64-bit vmcores only.

  Signed-off-by: Mika Westerberg <mika.westerberg@iki.fi>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
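  A hedged sketch of the override pattern (the default shown is approximate, not quoted from crash_dump.h):

      /* Generic default: accept a 64-bit vmcore if the arch's ELF checker does. */
      #ifndef vmcore_elf64_check_arch
      #define vmcore_elf64_check_arch(x) \
              (elf_check_arch(x) || vmcore_elf_check_arch_cross(x))
      #endif

      /* An arch that cannot handle 64-bit vmcores overrides it with a constant 0
       * in its own headers, letting the compiler drop the ELF64 parsing paths:
       *
       *     #define vmcore_elf64_check_arch(x) (0)
       */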
* /proc/vmcore: fix seeking (Arnd Bergmann, 2010-09-22)

  Commit 73296bc611 ("procfs: Use generic_file_llseek in /proc/vmcore") broke seeking on /proc/vmcore. This changes it back to use default_llseek in order to restore the original behaviour.

  The problem with generic_file_llseek is that it only allows seeks up to inode->i_sb->s_maxbytes, which is zero on procfs and some other virtual file systems. We should merge generic_file_llseek and default_llseek some day and clean this up in a proper way, but for 2.6.35/36, reverting vmcore is the safer solution.

  Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  Cc: Frederic Weisbecker <fweisbec@gmail.com>
  Reported-by: CAI Qian <caiqian@redhat.com>
  Tested-by: CAI Qian <caiqian@redhat.com>
  Cc: <stable@kernel.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge branch 'bkl/procfs' of git://git.kernel.org/pub/scm/linux/kernel/git/frederic/random-tracing (Linus Torvalds, 2010-05-19)

  * 'bkl/procfs' of git://git.kernel.org/pub/scm/linux/kernel/git/frederic/random-tracing:
    sunrpc: Include missing smp_lock.h
    procfs: Kill the bkl in ioctl
    procfs: Push down the bkl from ioctl
    procfs: Use generic_file_llseek in /proc/vmcore
    procfs: Use generic_file_llseek in /proc/kmsg
    procfs: Use generic_file_llseek in /proc/kcore
    procfs: Kill BKL in llseek on proc base
* procfs: Use generic_file_llseek in /proc/vmcore (Frederic Weisbecker, 2010-04-09)

  /proc/vmcore has no llseek and thus falls back to default_llseek. This is racy against read_vmcore(), which directly manipulates fpos, but read_vmcore() doesn't hold the bkl there, so using it in llseek doesn't protect anything. Let's use generic_file_llseek() instead.

  Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
  Acked-by: Arnd Bergmann <arnd@arndb.de>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Andrew Morton <akpm@linux-foundation.org>
  Cc: Ingo Molnar <mingo@elte.hu>
  Cc: John Kacur <jkacur@redhat.com>
  Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
  Cc: Al Viro <viro@ZenIV.linux.org.uk>
* include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h (Tejun Heo, 2010-03-30)

  percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h which in turn includes gfp.h making everything defined by the two files universally available and complicating inclusion dependencies.

  percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script is used as the basis of conversion.

      http://userweb.kernel.org/~tj/misc/slabh-sweep.py

  The script does the following:

  * Scan files for gfp and slab usages and update includes such that only the necessary includes are there. i.e. if only gfp is used, gfp.h, if slab is used, slab.h.

  * When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered - alphabetical, Christmas tree, rev-Xmas-tree or at the end if there doesn't seem to be any matching order.

  * If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.

  The conversion was done in the following steps.

  1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.

  2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition while adding it to implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.

  3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.

  4. Several build tests were done and a couple of problems were fixed. e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs requiring slab.h to be added manually.

  5. The script was run on all .h files but without automatically editing them as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored as stuff from gfp.h was usually wildly available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.

  6. percpu.h was updated not to include slab.h.

  7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64 which failed due to missing writeq).

     * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
     * powerpc and powerpc64 SMP allmodconfig
     * sparc and sparc64 SMP allmodconfig
     * ia64 SMP allmodconfig
     * s390 SMP allmodconfig
     * alpha SMP allmodconfig
     * um on x86_64 SMP allmodconfig

  8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as bisection point.

  Given the fact that I had only a couple of failures from tests on step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers which should be easily discoverable on most builds of the specific arch.

  Signed-off-by: Tejun Heo <tj@kernel.org>
  Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
* proc: vmcore - use kzalloc in get_new_element() (Cyrill Gorcunov, 2009-06-18)

  Instead of kmalloc+memset, use a straight kzalloc.

  Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
  Reviewed-by: WANG Cong <xiyou.wangcong@gmail.com>
  Acked-by: Vivek Goyal <vgoyal@redhat.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* vmcore: remove saved_max_pfn check (Magnus Damm, 2009-01-08)

  Remove the saved_max_pfn check from the /proc/vmcore function read_from_oldmem(). No need to verify; we should be able to just trust that "elfcorehdr=" is correctly passed to the crash kernel on the kernel command line, like we do with other parameters.

  The read_from_oldmem() function in fs/proc/vmcore.c is quite similar to read_from_oldmem() in drivers/char/mem.c, but only in the latter does it make sense to use saved_max_pfn. For oldmem it is used to determine when to stop reading. For vmcore we already have the elf header info pointing out the physical memory regions, so there is no need to pass the end-of-old-memory twice.

  Removing the saved_max_pfn check from vmcore makes it possible for architectures to skip oldmem but still support crash dump through vmcore - without the need for the old saved_max_pfn cruft.

  Architectures that want to play safe can do the saved_max_pfn check in copy_oldmem_page(). Not sure why anyone would want to do that, but that's even safer than today - the saved_max_pfn check in vmcore removed by this patch only checks the first page.

  Signed-off-by: Magnus Damm <damm@igel.co.jp>
  Acked-by: Vivek Goyal <vgoyal@redhat.com>
  Acked-by: Simon Horman <horms@verge.net.au>
  Cc: "Eric W. Biederman" <ebiederm@xmission.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* proc: move /proc/vmcore creation to fs/proc/vmcore.c (Alexey Dobriyan, 2008-10-23)

  Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
* kdump: add is_vmcore_usable() and vmcore_unusable() (Simon Horman, 2008-10-20)

  The usage of elfcorehdr_addr has changed recently such that being set to ELFCORE_ADDR_MAX is used by is_kdump_kernel() to indicate if the code is executing in a kernel executed as a crash kernel. However, arch/ia64/kernel/setup.c:reserve_elfcorehdr will reset elfcorehdr_addr to ELFCORE_ADDR_MAX on error, which means any subsequent calls to is_kdump_kernel() will return 0, even though they should return 1.

  Ok, at this point in time there are no subsequent calls, but I think it's fair to say that there is ample scope for error, or at the very least confusion.

  This patch adds an extra state, ELFCORE_ADDR_ERR, which indicates that elfcorehdr_addr was passed on the command line, and thus execution is taking place in a crashdump kernel, but vmcore can't be used for some reason. This is tested for using is_vmcore_usable() and set using vmcore_unusable(). A subsequent patch makes use of this new code.

  To summarise, the states that elfcorehdr_addr can now be in are as follows:

      ELFCORE_ADDR_MAX: not a crashdump kernel
      ELFCORE_ADDR_ERR: crashdump kernel but vmcore is unusable
      any other value:  crash dump kernel and vmcore is usable

  Signed-off-by: Simon Horman <horms@verge.net.au>
  Cc: Vivek Goyal <vgoyal@redhat.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
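  The helpers boil down to simple predicates over elfcorehdr_addr; a hedged sketch (not a verbatim copy of include/linux/crash_dump.h):

      /* Kernel was entered via kexec-on-panic if an ELF core header was passed. */
      static inline int is_kdump_kernel(void)
      {
              return elfcorehdr_addr != ELFCORE_ADDR_MAX;
      }

      /* Usable only if we are a crash kernel and the header isn't marked bad. */
      static inline int is_vmcore_usable(void)
      {
              return is_kdump_kernel() && elfcorehdr_addr != ELFCORE_ADDR_ERR;
      }

      /* Called when the passed header turns out to be unusable. */
      static inline void vmcore_unusable(void)
      {
              if (is_kdump_kernel())
                      elfcorehdr_addr = ELFCORE_ADDR_ERR;
      }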
* kdump: make elfcorehdr_addr independent of CONFIG_PROC_VMCORE (Vivek Goyal, 2008-10-20)

  o elfcorehdr_addr is used not only by the code under CONFIG_PROC_VMCORE but also by code which is not inside CONFIG_PROC_VMCORE. For example, is_kdump_kernel() is used by powerpc code to determine if the kernel is booting after a panic and should then use the previous kernel's TCE table. So even if CONFIG_PROC_VMCORE is not set in the second kernel, one should be able to correctly determine that we are booting after a panic and set up the calgary iommu accordingly.

  o So remove the assumption that elfcorehdr_addr is under CONFIG_PROC_VMCORE.

  o Move the definition of elfcorehdr_addr to arch dependent crash files. (Unfortunately crash dump does not have an arch independent file, otherwise that would have been the best place.)

  o kexec.c is not the right place as one can have CRASH_DUMP enabled in the second kernel without KEXEC being enabled.

  o I don't see sh setup code parsing the command line for elfcorehdr_addr. I am wondering how the vmcore interface works on sh. Anyway, I am at least defining elfcorehdr_addr so that compilation is not broken on sh.

  Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
  Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
  Acked-by: Simon Horman <horms@verge.net.au>
  Acked-by: Paul Mundt <lethal@linux-sh.org>
  Cc: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* proc: remove dummy vmcore_open() (Alexey Dobriyan, 2008-10-09)

  Empty ->open is equivalent to always succeeding ->open.

  Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
* aout: remove unnecessary inclusions of {asm, linux}/a.out.h (David Howells, 2008-02-08)

  Remove now unnecessary inclusions of {asm,linux}/a.out.h.

  [akpm@linux-foundation.org: fix alpha build]
  Signed-off-by: David Howells <dhowells@redhat.com>
  Cc: <linux-arch@vger.kernel.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* [PATCH] i386: Allow i386 crash kernels to handle x86_64 dumps (Ian Campbell, 2007-05-02)

  The specific case I am encountering is kdump under Xen with a 64 bit hypervisor and 32 bit kernel/userspace. The dump created is 64 bit due to the hypervisor but the dump kernel is 32 bit for maximum compatibility. It's possibly less likely to be useful in a purely native scenario but I see no reason to disallow it.

  [akpm@linux-foundation.org: build fix]
  Signed-off-by: Ian Campbell <ian.campbell@xensource.com>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Acked-by: Vivek Goyal <vgoyal@in.ibm.com>
  Cc: Horms <horms@verge.net.au>
  Cc: Magnus Damm <magnus.damm@gmail.com>
  Cc: "Eric W. Biederman" <ebiederm@xmission.com>
  Cc: Andi Kleen <ak@suse.de>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
* Remove obsolete #include <linux/config.h> (Jörn Engel, 2006-06-30)

  Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
  Signed-off-by: Adrian Bunk <bunk@stusta.de>
* [PATCH] kdump proc vmcore size oveflow fix (Vivek Goyal, 2006-04-11)

  A couple of /proc/vmcore data structures overflow on 32-bit systems having more than 4G of memory. This patch fixes those.

  Signed-off-by: Ken'ichi Ohmichi <oomichi@mxs.nes.nec.co.jp>
  Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
  Cc: <stable@kernel.org>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
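  A tiny illustration of the class of bug (user-space sketch; the log does not spell out which fields were widened):

      #include <stdio.h>
      #include <stdint.h>

      int main(void)
      {
              /* A dump of a 6 GiB machine ... */
              uint64_t vmcore_bytes = 6ULL << 30;

              /* ... silently truncates if the size is kept in a 32-bit type. */
              uint32_t size32 = (uint32_t)vmcore_bytes;   /* 2 GiB: wrong   */
              uint64_t size64 = vmcore_bytes;             /* 6 GiB: correct */

              printf("32-bit size field: %u\n", size32);
              printf("64-bit size field: %llu\n", (unsigned long long)size64);
              return 0;
      }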
* [PATCH] Make most file operations structs in fs/ const (Arjan van de Ven, 2006-03-28)

  This is a conversion to make the various file_operations structs in fs/ const. Basically a regexp job, with a few manual fixups.

  The goal is both to increase correctness (harder to accidentally write to shared data structures) and to reduce the false sharing of cachelines with things that get dirty in .data (while .rodata is nicely read-only and thus cache clean).

  Signed-off-by: Arjan van de Ven <arjan@infradead.org>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] kdump: vmcore compilation warning fix (Vivek Goyal, 2006-01-11)

  o fs/proc/vmcore.c compilation gives warnings on ppc64. The reason is that u64 is defined as unsigned long there, hence u64* is not the same as loff_t* and the compiler cribs.

  o Changed the parameter type to u64* instead of loff_t* to resolve the conflict.

  Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* fs/proc/vmcore.c: header included twice (Nicolas Kaiser, 2006-01-10)

  Header included twice.

  Signed-off-by: Nicolas Kaiser <nikai@nikai.net>
  Signed-off-by: Adrian Bunk <bunk@stusta.de>
* [PATCH] kdump: read previous kernel's memory (Vivek Goyal, 2006-01-10)

  - Moving the crash_dump.c file to the arch dependent part as kmap_atomic_pfn is specific to i386 and highmem may not exist in other archs.

  - Use ioremap for x86_64 to map the previous kernel memory.

  - In copy_oldmem_page(), we now directly copy to the user/kernel buffer and avoid the unnecessary copy to a kmalloc'd page.

  Signed-off-by: Rachita Kothiyal <rachita@in.ibm.com>
  Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
  Cc: Andi Kleen <ak@muc.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] kdump: Parse elf32 headers and export through /proc/vmcore (Vivek Goyal, 2005-06-25)

  o Adds support for parsing core ELF32 headers.

  o I am expecting ELF32 support to go away down the line. This patch has been introduced for testing purposes as gdb can not parse ELF64 headers for i386. When a decent user space solution is available, ELF32 support can go away.

  Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] kdump: Access dump file in elf format (/proc/vmcore) (Vivek Goyal, 2005-06-25)

  From: "Vivek Goyal" <vgoyal@in.ibm.com>

  o Support for the /proc/vmcore interface. This interface exports the elf core image either in ELF32 or ELF64 format, depending on the format in which the elf headers have been stored by the crashed kernel.
  o Added support for the CONFIG_VMCORE config option.
  o Removed the dependency on /proc/kcore.

  From: "Eric W. Biederman" <ebiederm@xmission.com>

  This patch has been refactored to more closely match the prevailing style in the affected files, and to clearly indicate the dependency between /proc/kcore and proc/vmcore.c.

  From: Hariprasad Nellitheertha <hari@in.ibm.com>

  This patch contains the code that provides an ELF format interface to the previous kernel's memory post kexec reboot.

  Signed-off-by: Hariprasad Nellitheertha <hari@in.ibm.com>
  Signed-off-by: Eric Biederman <ebiederm@xmission.com>
  Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>