==================================
Cache and TLB Flushing Under Linux
==================================

:Author: David S. Miller

This document describes the cache/tlb flushing interfaces called
by the Linux VM subsystem.  It enumerates each interface,
describes its intended purpose, and states what side effect is
expected after the interface is invoked.

The side effects described below are stated for a uniprocessor
implementation, and what is to happen on that single processor.
The SMP cases are a simple extension, in that you just extend the
definition such that the side effect for a particular interface
occurs on all processors in the system.  Don't let this scare you
into thinking SMP cache/tlb flushing must be so inefficient; this
is in fact an area where many optimizations are possible.  For
example, if it can be proven that a user address space has never
executed on a cpu (see mm_cpumask()), one need not perform a
flush for this address space on that cpu.

First, the TLB flushing interfaces, since they are the simplest.
The "TLB" is abstracted under Linux as something the cpu uses to
cache virtual-->physical address translations obtained from the
software page tables.  This means that if the software page tables
change, it is possible for stale translations to exist in this
"TLB" cache.  Therefore, when software page table changes occur,
the kernel will invoke one of the following flush methods _after_
the page table changes occur:

1) ``void flush_tlb_all(void)``

        The most severe flush of all.  After this interface runs,
        any previous page table modification whatsoever will be
        visible to the cpu.

        This is usually invoked when the kernel page tables are
        changed, since such translations are "global" in nature.

2) ``void flush_tlb_mm(struct mm_struct *mm)``

        This interface flushes an entire user address space from
        the TLB.  After running, this interface must make sure that
        any previous page table modifications for the address space
        'mm' will be visible to the cpu.  That is, after running,
        there will be no entries in the TLB for 'mm'.

        This interface is used to handle whole address space
        page table operations such as what happens during
        fork and exec.

3) ``void flush_tlb_range(struct vm_area_struct *vma,
   unsigned long start, unsigned long end)``

        Here we are flushing a specific range of (user) virtual
        address translations from the TLB.  After running, this
        interface must make sure that any previous page table
        modifications for the address space 'vma->vm_mm' in the range
        'start' to 'end-1' will be visible to the cpu.  That is, after
        running, there will be no entries in the TLB for 'mm' for
        virtual addresses in the range 'start' to 'end-1'.

        The "vma" is the backing store being used for the region.
        Primarily, this is used for munmap() type operations.

        The interface is provided in hopes that the port can find
        a suitably efficient method for removing multiple page
        sized translations from the TLB, instead of having the kernel
        call flush_tlb_page (see below) for each entry which may be
        modified; a naive fallback of that form is sketched below.
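        As an illustration only (this is a hypothetical fallback, not
        code taken from any real port), an architecture that has no
        ranged TLB-invalidate operation could implement this interface
        as a simple loop over flush_tlb_page(), one page at a time::

            #include <linux/mm.h>
            #include <asm/tlbflush.h>   /* the port's flush_tlb_page() */

            void flush_tlb_range(struct vm_area_struct *vma,
                                 unsigned long start, unsigned long end)
            {
                    unsigned long addr;

                    /*
                     * Invalidate each page in the range individually.
                     * A real port would normally batch this, or fall
                     * back to flush_tlb_mm() once the range is large
                     * enough that a full context flush is cheaper.
                     */
                    for (addr = start & PAGE_MASK; addr < end; addr += PAGE_SIZE)
                            flush_tlb_page(vma, addr);
            }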
4) ``void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)``

        This time we need to remove the PAGE_SIZE sized translation
        from the TLB.  The 'vma' is the backing structure used by
        Linux to keep track of mmap'd regions for a process; the
        address space is available via vma->vm_mm.  Also, one may
        test (vma->vm_flags & VM_EXEC) to see if this region is
        executable (and thus could be in the 'instruction TLB' in
        split-tlb type setups).

        After running, this interface must make sure that any
        previous page table modification for address space
        'vma->vm_mm' for user virtual address 'addr' will be visible
        to the cpu.  That is, after running, there will be no entries
        in the TLB for 'vma->vm_mm' for virtual address 'addr'.

        This is used primarily during fault processing.

5) ``void update_mmu_cache(struct vm_area_struct *vma,
   unsigned long address, pte_t *ptep)``

        At the end of every page fault, this routine is invoked to
        tell the architecture specific code that a translation
        now exists at virtual address "address" for address space
        "vma->vm_mm", in the software page tables.

        A port may use this information in any way it so chooses.
        For example, it could use this event to pre-load TLB
        translations for software managed TLB configurations.
        The sparc64 port currently does this.

6) ``void tlb_migrate_finish(struct mm_struct *mm)``

        This interface is called at the end of an explicit process
        migration and provides a hook to allow a platform to update
        TLB or context-specific information for the address space.

        The ia64 sn2 platform is one example of a platform that uses
        this interface.

Next, we have the cache flushing interfaces.  In general, when Linux
is changing an existing virtual-->physical mapping to a new value,
the sequence will be in one of the following forms::

        1) flush_cache_mm(mm);
           change_all_page_tables_of(mm);
           flush_tlb_mm(mm);

        2) flush_cache_range(vma, start, end);
           change_range_of_page_tables(mm, start, end);
           flush_tlb_range(vma, start, end);

        3) flush_cache_page(vma, addr, pfn);
           set_pte(pte_pointer, new_pte_val);
           flush_tlb_page(vma, addr);

The cache level flush will always be first, because this allows us to
properly handle systems whose caches are strict and require a
virtual-->physical translation to exist for a virtual address when
that virtual address is flushed from the cache.  The HyperSparc cpu
is one such cpu with this attribute.

The cache flushing routines below need only deal with cache flushing
to the extent that it is necessary for a particular cpu.  Mostly,
these routines must be implemented for cpus which have virtually
indexed caches which must be flushed when virtual-->physical
translations are changed or removed.  So, for example, IA32
processors, whose caches are physically indexed and physically
tagged, have no need to implement these interfaces since their
caches are fully synchronized and have no dependency on translation
information.
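As a sketch of that situation (illustrative only; assume a
hypothetical port whose caches are fully coherent and physically
indexed/tagged, rather than any particular architecture's actual
header), every one of these hooks can simply compile away to
nothing::

        /*
         * No virtually indexed caches, so there is no maintenance
         * work to do; each hook expands to an empty statement.
         */
        #define flush_cache_mm(mm)                      do { } while (0)
        #define flush_cache_dup_mm(mm)                  do { } while (0)
        #define flush_cache_range(vma, start, end)      do { } while (0)
        #define flush_cache_page(vma, vmaddr, pfn)      do { } while (0)
        #define flush_cache_vmap(start, end)            do { } while (0)
        #define flush_cache_vunmap(start, end)          do { } while (0)

Ports with virtually indexed caches, by contrast, must supply real
implementations of the routines that follow.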
The "vma" is the backing store being used for the region. Primarily, this is used for munmap() type operations. The interface is provided in hopes that the port can find a suitably efficient method for removing multiple page sized regions from the cache, instead of having the kernel call flush_cache_page (see below) for each entry which may be modified. 4) ``void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)`` This time we need to remove a PAGE_SIZE sized range from the cache. The 'vma' is the backing structure used by Linux to keep track of mmap'd regions for a process, the address space is available via vma->vm_mm. Also, one may test (vma->vm_flags & VM_EXEC) to see if this region is executable (and thus could be in the 'instruction cache' in "Harvard" type cache layouts). The 'pfn' indicates the physical page frame (shift this value left by PAGE_SHIFT to get the physical address) that 'addr' translates to. It is this mapping which should be removed from the cache. After running, there will be no entries in the cache for 'vma->vm_mm' for virtual address 'addr' which translates to 'pfn'. This is used primarily during fault processing. 5) ``void flush_cache_kmaps(void)`` This routine need only be implemented if the platform utilizes highmem. It will be called right before all of the kmaps are invalidated. After running, there will be no entries in the cache for the kernel virtual address range PKMAP_ADDR(0) to PKMAP_ADDR(LAST_PKMAP). This routing should be implemented in asm/highmem.h 6) ``void flush_cache_vmap(unsigned long start, unsigned long end)`` ``void flush_cache_vunmap(unsigned long start, unsigned long end)`` Here in these two interfaces we are flushing a specific range of (kernel) virtual addresses from the cache. After running, there will be no entries in the cache for the kernel address space for virtual addresses in the range 'start' to 'end-1'. The first of these two routines is invoked after map_vm_area() has installed the page table entries. T/* * Copyright (c) 2017, Linaro Ltd * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 and * only version 2 as published by the Free Software Foundation. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. 