This article is shared from the Huawei Cloud community, author: zhushy.
Virtual-to-physical mapping means that the system maps the virtual addresses (VA) of a process's address space to actual physical addresses (PA) through the Memory Management Unit (MMU), specifying the corresponding access permissions, cache attributes, and so on. During program execution, the CPU accesses virtual memory, and the MMU locates the mapped physical memory to execute code or read and write data. The MMU's mappings are described by page tables, which store the virtual-to-physical address mappings and access permissions. Each process creates a page table when it is created. A page table is composed of page table entries (PTEs), each describing the mapping between a virtual address range and a physical address range. The starting address of the memory area where the page table is stored is called the translation table base (TTB). The MMU has a page table cache called the translation lookaside buffer (TLB). During address translation, the MMU first looks in the TLB; if it finds the matching page table entry, it performs the translation directly, which improves lookup efficiency.
The source code in this article is taken from the OpenHarmony LiteOS-A kernel, available at https://gitee.com/openharmony/kernel_liteos_a. Where a development board is involved, the hispark_taurus board is used as the example. The MMU-related functions are mainly in the file arch/arm/arm/src/los_arch_mmu.c.
Virtual-to-physical mapping is essentially the process of building page tables. The MMU supports multi-level page tables, and the LiteOS-A kernel uses two-level page tables to describe the process address space. The first-level (L1) and second-level (L2) page tables are introduced below.
1. Primary page table L1 and secondary page table L2
The L1 page table divides the entire 4GiB address space into 4096 pieces of 1MiB each. Each piece corresponds to a 32-bit page table entry, which contains either the base address of an L2 page table or the base address of a 1MiB physical memory section. The upper 12 bits of the virtual address record the page number and are used to locate the page table entry, i.e., the index into the 4096 entries; the lower 20 bits record the in-page offset, which is identical for the virtual and physical addresses. Translation uses the virtual page number to look up the page table and obtain the corresponding physical page number, which is then combined with the in-page offset from the virtual address to form the physical address.
For user processes, each L1 page table entry occupies 4 bytes and describes the mapping of 1MiB of memory, so the 1GiB user address space of the LiteOS-A kernel requires 1024 entries, i.e., 4KiB per process. When the system creates a user process, it allocates a 4KiB memory block as the storage area for its L1 page table; the system then dynamically allocates memory for L2 page tables as the process needs them. This explains why, in the virtual memory chapter, the user process address space initialization function OsCreateUserVmSpace allocated 4KiB of memory as the page table storage area. For the kernel process, the page table storage area is fixed: UINT8 g_firstPageTable[0x4000], 16KiB in size.
The lower 2 bits of an L1 page table entry define its type. There are three L1 descriptor types:
- Invalid: an invalid page table entry. The virtual address is not mapped to a physical address, and accessing it generates a page fault;
- Page Table: the entry points to an L2 page table;
- Section: the entry maps a 1MiB section directly. The physical address is formed from the upper 12 bits of the page table entry combined with the lower 20 bits of the virtual address.
The L2 page table further divides a 1MiB address range into 256 small pages of 4KiB each, matching the memory page size. Bits 19:12 of the virtual address record the page number within the 1MiB range and are used to locate the L2 page table entry; the lower 12 bits record the in-page offset, which is identical for the virtual and physical addresses. Translation uses the virtual page number to look up the page table and obtain the corresponding physical page number, which is then combined with the in-page offset to form the physical address. Each L2 page table entry translates 4KiB of virtual memory.
There are four types of L2 page table descriptors:
- Invalid: an invalid page table entry. The virtual address is not mapped to a physical address, and accessing it generates a page fault;
- Large Page: a large page table entry supporting 64KiB pages; not currently supported;
- Small Page: a small page table entry supporting 4KiB pages mapped via the second-level page table;
- Small Page XN: a small page entry with the eXecute-Never extension.
The page table descriptor types are defined in the file arch/arm/arm/include/los_mmu_descriptor_v6.h, as follows:
/* L1 descriptor type */
#define MMU_DESCRIPTOR_L1_TYPE_INVALID            (0x0 << 0)
#define MMU_DESCRIPTOR_L1_TYPE_PAGE_TABLE         (0x1 << 0)
#define MMU_DESCRIPTOR_L1_TYPE_SECTION            (0x2 << 0)
#define MMU_DESCRIPTOR_L1_TYPE_MASK               (0x3 << 0)

/* L2 descriptor type */
#define MMU_DESCRIPTOR_L2_TYPE_INVALID            (0x0 << 0)
#define MMU_DESCRIPTOR_L2_TYPE_LARGE_PAGE         (0x1 << 0)
#define MMU_DESCRIPTOR_L2_TYPE_SMALL_PAGE         (0x2 << 0)
#define MMU_DESCRIPTOR_L2_TYPE_SMALL_PAGE_XN      (0x3 << 0)
#define MMU_DESCRIPTOR_L2_TYPE_MASK               (0x3 << 0)
1.2 Page table entry operations
The file arch/arm/arm/include/los_pte_ops.h defines the operations related to page table entries.
1.2.1 function OsGetPte1
The function OsGetPte1 is used to obtain the L1 page table entry for a specified virtual address. The address of the L1 entry is the page table base address plus the entry index, where the index is the upper 12 bits of the virtual address (va >> 20).
STATIC INLINE UINT32 OsGetPte1Index(vaddr_t va)
{
    return va >> MMU_DESCRIPTOR_L1_SMALL_SHIFT;
}

STATIC INLINE PTE_T *OsGetPte1Ptr(PTE_T *pte1BasePtr, vaddr_t va)
{
    return (pte1BasePtr + OsGetPte1Index(va));
}

STATIC INLINE PTE_T OsGetPte1(PTE_T *pte1BasePtr, vaddr_t va)
{
    return *OsGetPte1Ptr(pte1BasePtr, va);
}
1.2.2 function OsGetPte2
The function OsGetPte2 is used to obtain the L2 page table entry for a specified virtual address. The address of the L2 entry is the L2 table base address plus the entry index. Taking va % MMU_DESCRIPTOR_L1_SMALL_SIZE isolates the offset of the virtual address within its 1MiB section; shifting that right by MMU_DESCRIPTOR_L2_SMALL_SHIFT (12) then yields bits 19:12 of the virtual address, i.e., the index among the 256 L2 entries.
STATIC INLINE UINT32 OsGetPte2Index(vaddr_t va)
{
    return (va % MMU_DESCRIPTOR_L1_SMALL_SIZE) >> MMU_DESCRIPTOR_L2_SMALL_SHIFT;
}

STATIC INLINE PTE_T OsGetPte2(PTE_T *pte2BasePtr, vaddr_t va)
{
    return *(pte2BasePtr + OsGetPte2Index(va));
}
2. Virtual-to-physical mapping initialization
The system memory initialization function OsSysMemInit() in the file kernel/base/vm/los_vm_boot.c calls the virtual-to-physical mapping initialization function OsInitMappingStartUp(), defined in arch/arm/arm/src/los_arch_mmu.c as follows. At ⑴ the function invalidates the TLB, which involves CP15 registers and assembly, analyzed later. At ⑵ the function switches to a temporary TTB. At ⑶ the mapping attributes of the kernel address space are set. These functions are detailed below.
VOID OsInitMappingStartUp(VOID)
{
⑴  OsArmInvalidateTlbBarrier();

⑵  OsSwitchTmpTTB();

⑶  OsSetKSectionAttr(KERNEL_VMM_BASE, FALSE);
    OsSetKSectionAttr(UNCACHED_VMM_BASE, TRUE);
    OsKSectionNewAttrEnable();
}
2.1 function OsSwitchTmpTTB
At ⑴ the kernel address space is obtained. The L1 page table consists of 4096 entries of 4 bytes each, 16KiB in total, so the code at ⑵ allocates a 16KiB-aligned block of 16KiB to hold the L1 page table entries. At ⑶ the translation table base (TTB) of the kernel virtual address space is set. At ⑷ the g_firstPageTable data is copied into the kernel address space's translation table; if the copy fails, g_firstPageTable is used directly. At ⑸ the physical base address of the kernel translation table is set and then written to the MMU's TTBR0 register.
STATIC VOID OsSwitchTmpTTB(VOID)
{
    PTE_T *tmpTtbase = NULL;
    errno_t err;
⑴  LosVmSpace *kSpace = LOS_GetKVmSpace();

    /* ttbr address should be 16KByte align */
⑵  tmpTtbase = LOS_MemAllocAlign(m_aucSysMem0, MMU_DESCRIPTOR_L1_SMALL_ENTRY_NUMBERS,
                                  MMU_DESCRIPTOR_L1_SMALL_ENTRY_NUMBERS);
    if (tmpTtbase == NULL) {
        VM_ERR("memory alloc failed");
        return;
    }

⑶  kSpace->archMmu.virtTtb = tmpTtbase;
⑷  err = memcpy_s(kSpace->archMmu.virtTtb, MMU_DESCRIPTOR_L1_SMALL_ENTRY_NUMBERS,
                   g_firstPageTable, MMU_DESCRIPTOR_L1_SMALL_ENTRY_NUMBERS);
    if (err != EOK) {
        (VOID)LOS_MemFree(m_aucSysMem0, tmpTtbase);
        kSpace->archMmu.virtTtb = (VADDR_T *)g_firstPageTable;
        VM_ERR("memcpy failed, errno: %d", err);
        return;
    }
⑸  kSpace->archMmu.physTtb = LOS_PaddrQuery(kSpace->archMmu.virtTtb);
    OsArmWriteTtbr0(kSpace->archMmu.physTtb | MMU_TTBRx_FLAGS);
    ISB;
}
2.2 function OsSetKSectionAttr
The internal function OsSetKSectionAttr sets the attributes of the kernel virtual address space sections, for the ranges [KERNEL_ASPACE_BASE, KERNEL_ASPACE_BASE + KERNEL_ASPACE_SIZE] and [UNCACHED_VMM_BASE, UNCACHED_VMM_BASE + UNCACHED_VMM_SIZE] respectively. The kernel virtual address space has a fixed linear mapping to physical memory.
At ⑴ the offset relative to the base of the kernel virtual address space is calculated. At ⑵ the virtual addresses of the text, rodata, and data/bss sections are computed from that offset, and the mapping descriptors for these sections are prepared. At ⑶ the virtual and physical translation table base addresses of the kernel address space are set, and the existing mapping of the address range is released. At ⑷ the memory range before the text section is mapped with the specified flags. At ⑸ the text, rodata, and data/bss sections are mapped, and the function LOS_VmSpaceReserve is called to reserve each mapped range in the process space so that later region allocations will not hand it out again. At ⑹ the heap area after the bss section is mapped, covering the memory heap range of the virtual address space.
STATIC VOID OsSetKSectionAttr(UINTPTR virtAddr, BOOL uncached)
{
⑴  UINT32 offset = virtAddr - KERNEL_VMM_BASE;
    /* every section should be page aligned */
⑵  UINTPTR textStart = (UINTPTR)&__text_start + offset;
    UINTPTR textEnd = (UINTPTR)&__text_end + offset;
    UINTPTR rodataStart = (UINTPTR)&__rodata_start + offset;
    UINTPTR rodataEnd = (UINTPTR)&__rodata_end + offset;
    UINTPTR ramDataStart = (UINTPTR)&__ram_data_start + offset;
    UINTPTR bssEnd = (UINTPTR)&__bss_end + offset;
    UINT32 bssEndBoundary = ROUNDUP(bssEnd, MB);
    LosArchMmuInitMapping mmuKernelMappings[] = {
        {
            .phys = SYS_MEM_BASE + textStart - virtAddr,
            .virt = textStart,
            .size = ROUNDUP(textEnd - textStart, MMU_DESCRIPTOR_L2_SMALL_SIZE),
            .flags = VM_MAP_REGION_FLAG_PERM_READ | VM_MAP_REGION_FLAG_PERM_EXECUTE,
            .name = "kernel_text"
        },
        {
            .phys = SYS_MEM_BASE + rodataStart - virtAddr,
            .virt = rodataStart,
            .size = ROUNDUP(rodataEnd - rodataStart, MMU_DESCRIPTOR_L2_SMALL_SIZE),
            .flags = VM_MAP_REGION_FLAG_PERM_READ,
            .name = "kernel_rodata"
        },
        {
            .phys = SYS_MEM_BASE + ramDataStart - virtAddr,
            .virt = ramDataStart,
            .size = ROUNDUP(bssEndBoundary - ramDataStart, MMU_DESCRIPTOR_L2_SMALL_SIZE),
            .flags = VM_MAP_REGION_FLAG_PERM_READ | VM_MAP_REGION_FLAG_PERM_WRITE,
            .name = "kernel_data_bss"
        }
    };
    LosVmSpace *kSpace = LOS_GetKVmSpace();
    status_t status;
    UINT32 length;
    int i;
    LosArchMmuInitMapping *kernelMap = NULL;
    UINT32 kmallocLength;
    UINT32 flags;

    /* use second-level mapping of default READ and WRITE */
⑶  kSpace->archMmu.virtTtb = (PTE_T *)g_firstPageTable;
    kSpace->archMmu.physTtb = LOS_PaddrQuery(kSpace->archMmu.virtTtb);
    status = LOS_ArchMmuUnmap(&kSpace->archMmu, virtAddr,
                              (bssEndBoundary - virtAddr) >> MMU_DESCRIPTOR_L2_SMALL_SHIFT);
    if (status != ((bssEndBoundary - virtAddr) >> MMU_DESCRIPTOR_L2_SMALL_SHIFT)) {
        VM_ERR("unmap failed, status: %d", status);
        return;
    }

    flags = VM_MAP_REGION_FLAG_PERM_READ | VM_MAP_REGION_FLAG_PERM_WRITE | VM_MAP_REGION_FLAG_PERM_EXECUTE;
    if (uncached) {
        flags |= VM_MAP_REGION_FLAG_UNCACHED;
    }
⑷  status = LOS_ArchMmuMap(&kSpace->archMmu, virtAddr, SYS_MEM_BASE,
                            (textStart - virtAddr) >> MMU_DESCRIPTOR_L2_SMALL_SHIFT, flags);
    if (status != ((textStart - virtAddr) >> MMU_DESCRIPTOR_L2_SMALL_SHIFT)) {
        VM_ERR("mmap failed, status: %d", status);
        return;
    }

⑸  length = sizeof(mmuKernelMappings) / sizeof(LosArchMmuInitMapping);
    for (i = 0; i < length; i++) {
        kernelMap = &mmuKernelMappings[i];
        if (uncached) {
            kernelMap->flags |= VM_MAP_REGION_FLAG_UNCACHED;
        }
        status = LOS_ArchMmuMap(&kSpace->archMmu, kernelMap->virt, kernelMap->phys,
                                kernelMap->size >> MMU_DESCRIPTOR_L2_SMALL_SHIFT, kernelMap->flags);
        if (status != (kernelMap->size >> MMU_DESCRIPTOR_L2_SMALL_SHIFT)) {
            VM_ERR("mmap failed, status: %d", status);
            return;
        }
        LOS_VmSpaceReserve(kSpace, kernelMap->size, kernelMap->virt);
    }

⑹  kmallocLength = virtAddr + SYS_MEM_SIZE_DEFAULT - bssEndBoundary;
    flags = VM_MAP_REGION_FLAG_PERM_READ | VM_MAP_REGION_FLAG_PERM_WRITE;
    if (uncached) {
        flags |= VM_MAP_REGION_FLAG_UNCACHED;
    }
    status = LOS_ArchMmuMap(&kSpace->archMmu, bssEndBoundary,
                            SYS_MEM_BASE + bssEndBoundary - virtAddr,
                            kmallocLength >> MMU_DESCRIPTOR_L2_SMALL_SHIFT, flags);
    if (status != (kmallocLength >> MMU_DESCRIPTOR_L2_SMALL_SHIFT)) {
        VM_ERR("mmap failed, status: %d", status);
        return;
    }
    LOS_VmSpaceReserve(kSpace, kmallocLength, bssEndBoundary);
}
2.3 function OsKSectionNewAttrEnable
The function OsKSectionNewAttrEnable switches to the final page table and releases the temporary TTB. At ⑴ the kernel virtual address space is obtained. At ⑵ the virtual and physical translation table base addresses of the process space MMU are set to g_firstPageTable. At ⑶ the old TTB address is read from the CP15 c2 register, and ANDing it with MMU_DESCRIPTOR_L2_SMALL_FRAME masks off the low bits to recover the 4KiB-aligned base address of the temporary table. At ⑷ the physical base of the kernel page table, combined with the MMU_TTBRx_FLAGS attribute bits, is written to the CP15 c2 TTB register. At ⑸ the TLB is cleaned, and finally the temporary TTB memory is freed.
STATIC VOID OsKSectionNewAttrEnable(VOID)
{
⑴  LosVmSpace *kSpace = LOS_GetKVmSpace();
    paddr_t oldTtPhyBase;

⑵  kSpace->archMmu.virtTtb = (PTE_T *)g_firstPageTable;
    kSpace->archMmu.physTtb = LOS_PaddrQuery(kSpace->archMmu.virtTtb);

    /* we need free tmp ttbase */
⑶  oldTtPhyBase = OsArmReadTtbr0();
    oldTtPhyBase = oldTtPhyBase & MMU_DESCRIPTOR_L2_SMALL_FRAME;
⑷  OsArmWriteTtbr0(kSpace->archMmu.physTtb | MMU_TTBRx_FLAGS);
    ISB;

    /* we changed page table entry, so we need to clean TLB here */
⑸  OsCleanTLB();

    (VOID)LOS_MemFree(m_aucSysMem0, (VOID *)(UINTPTR)(oldTtPhyBase - SYS_MEM_BASE + KERNEL_VMM_BASE));
}
3. Virtual-to-physical mapping function LOS_ArchMmuMap
This section analyzes the functions that establish mappings between virtual and physical address ranges.
3.1 function LOS_ArchMmuMap
The function LOS_ArchMmuMap maps a virtual address range of the process space to a physical address range. The parameter archMmu is the MMU configuration structure; vaddr and paddr are the starting virtual and physical addresses; count is the number of pages to map; flags is the set of mapping flags. At ⑴ the parameters are validated: the NON-SECURE flag is not supported, and the virtual and physical addresses must be aligned to the 4KiB memory page size. At ⑵, when the virtual and physical addresses are both 1MiB-aligned and count is at least 256, the Section entry format is used, and ⑶ generates and saves an L1 section-type page table entry (the function is analyzed below). Otherwise an L2 mapping is required: ⑷ obtains the L1 page table entry for the virtual address, and ⑸ checks whether it is already mapped. If not, the function OsMapL1PTE at ⑹ generates and saves the L1 page table entry, after which OsMapL2PageContinous generates and saves the L2 entries. If the L1 entry is already of page-table type, OsMapL2PageContinous is called directly. If it is an unsupported entry type, LOS_Panic() triggers an exception. At ⑺ the number of pages mapped in this iteration is accumulated, and finally the total number of successfully mapped pages is returned.
status_t LOS_ArchMmuMap(LosArchMmu *archMmu, VADDR_T vaddr, PADDR_T paddr, size_t count, UINT32 flags)
{
    PTE_T l1Entry;
    UINT32 saveCounts = 0;
    INT32 mapped = 0;
    INT32 checkRst;

⑴  checkRst = OsMapParamCheck(flags, vaddr, paddr);
    if (checkRst < 0) {
        return checkRst;
    }

    /* see what kind of mapping we can use */
    while (count > 0) {
⑵      if (MMU_DESCRIPTOR_IS_L1_SIZE_ALIGNED(vaddr) &&
            MMU_DESCRIPTOR_IS_L1_SIZE_ALIGNED(paddr) &&
            count >= MMU_DESCRIPTOR_L2_NUMBERS_PER_L1) {
            /* compute the arch flags for L1 sections cache, r ,w ,x, domain and type */
⑶          saveCounts = OsMapSection(archMmu, flags, &vaddr, &paddr, &count);
        } else {
            /* have to use a L2 mapping, we only allocate 4KB for L1, support 0 ~ 1GB */
⑷          l1Entry = OsGetPte1(archMmu->virtTtb, vaddr);
⑸          if (OsIsPte1Invalid(l1Entry)) {
⑹              OsMapL1PTE(archMmu, &l1Entry, vaddr, flags);
                saveCounts = OsMapL2PageContinous(l1Entry, flags, &vaddr, &paddr, &count);
            } else if (OsIsPte1PageTable(l1Entry)) {
                saveCounts = OsMapL2PageContinous(l1Entry, flags, &vaddr, &paddr, &count);
            } else {
                LOS_Panic("%s %d, unimplemented tt_entry %x\n", __FUNCTION__, __LINE__, l1Entry);
            }
        }
⑺      mapped += saveCounts;
    }

    return mapped;
}
3.2 function OsMapSection
The function OsMapSection generates and saves an L1 section-type page table entry. At ⑴ the mapping flags are converted to MMU attribute bits. At ⑵ the inline function OsGetPte1Ptr(archMmu->virtTtb, *vaddr) obtains the address of the page table entry for the virtual address, i.e., the table base plus the index formed by the upper 12 bits of the virtual address; OsTruncPte1(*paddr) | mmuFlags | MMU_DESCRIPTOR_L1_TYPE_SECTION combines the upper 12 bits of the physical address with the MMU attributes and the section type value. This statement maps the virtual address to the physical address and records the mapping in the page table entry. At ⑶ the virtual and physical addresses are advanced by 1MiB and the mapping count is reduced by 256.
STATIC UINT32 OsMapSection(const LosArchMmu *archMmu, UINT32 flags, VADDR_T *vaddr,
                           PADDR_T *paddr, UINT32 *count)
{
    UINT32 mmuFlags = 0;

⑴  mmuFlags |= OsCvtSecFlagsToAttrs(flags);
⑵  OsSavePte1(OsGetPte1Ptr(archMmu->virtTtb, *vaddr),
               OsTruncPte1(*paddr) | mmuFlags | MMU_DESCRIPTOR_L1_TYPE_SECTION);
⑶  *count -= MMU_DESCRIPTOR_L2_NUMBERS_PER_L1;
    *vaddr += MMU_DESCRIPTOR_L1_SMALL_SIZE;
    *paddr += MMU_DESCRIPTOR_L1_SMALL_SIZE;

    return MMU_DESCRIPTOR_L2_NUMBERS_PER_L1;
}
3.3 function OsGetL2Table
The function OsGetL2Table is used to find or create an L2 page table. Among the parameters, archMmu is the MMU structure, l1Index is the L1 page table index, and ppa is an output parameter receiving the base address of the L2 page table. At ⑴ the offset of the L2 table within its backing page is calculated: each L2 table holds 256 entries of 4 bytes (1KiB), so four tables share one 4KiB page, and (l1Index & 3) multiplied by 1KiB selects the slot. At ⑵ the code scans the four L1 entries that share that page for an existing L2 table; ⑶ reads each entry and, if it is of page-table type, returns the base address of the existing L2 table plus the slot offset.
If no L2 table exists, memory is allocated for one. If the kernel virtual memory module (LOSCFG_KERNEL_VM) is enabled, ⑷ allocates a physical page with LOS_PhysPageAlloc; otherwise ⑸ allocates memory with LOS_MemAlloc. At ⑹ the address is converted to a physical address and the base address of the L2 page table is returned.
STATIC STATUS_T OsGetL2Table(LosArchMmu *archMmu, UINT32 l1Index, paddr_t *ppa)
{
    UINT32 index;
    PTE_T ttEntry;
    VADDR_T *kvaddr = NULL;
⑴  UINT32 l2Offset = (MMU_DESCRIPTOR_L2_SMALL_SIZE / MMU_DESCRIPTOR_L1_SMALL_L2_TABLES_PER_PAGE) *
                      (l1Index & (MMU_DESCRIPTOR_L1_SMALL_L2_TABLES_PER_PAGE - 1));

    /* lookup an existing l2 page table */
⑵  for (index = 0; index < MMU_DESCRIPTOR_L1_SMALL_L2_TABLES_PER_PAGE; index++) {
⑶      ttEntry = archMmu->virtTtb[ROUNDDOWN(l1Index, MMU_DESCRIPTOR_L1_SMALL_L2_TABLES_PER_PAGE) + index];
        if ((ttEntry & MMU_DESCRIPTOR_L1_TYPE_MASK) == MMU_DESCRIPTOR_L1_TYPE_PAGE_TABLE) {
            *ppa = (PADDR_T)ROUNDDOWN(MMU_DESCRIPTOR_L1_PAGE_TABLE_ADDR(ttEntry), MMU_DESCRIPTOR_L2_SMALL_SIZE) +
                   l2Offset;
            return LOS_OK;
        }
    }

#ifdef LOSCFG_KERNEL_VM
    /* not found: allocate one (paddr) */
⑷  LosVmPage *vmPage = LOS_PhysPageAlloc();
    if (vmPage == NULL) {
        VM_ERR("have no memory to save l2 page");
        return LOS_ERRNO_VM_NO_MEMORY;
    }
    LOS_ListAdd(&archMmu->ptList, &vmPage->node);
    kvaddr = OsVmPageToVaddr(vmPage);
#else
⑸  kvaddr = LOS_MemAlloc(OS_SYS_MEM_ADDR, MMU_DESCRIPTOR_L2_SMALL_SIZE);
    if (kvaddr == NULL) {
        VM_ERR("have no memory to save l2 page");
        return LOS_ERRNO_VM_NO_MEMORY;
    }
#endif
    (VOID)memset_s(kvaddr, MMU_DESCRIPTOR_L2_SMALL_SIZE, 0, MMU_DESCRIPTOR_L2_SMALL_SIZE);

    /* get physical address */
⑹  *ppa = LOS_PaddrQuery(kvaddr) + l2Offset;
    return LOS_OK;
}
3.4 function OsMapL1PTE
The function OsMapL1PTE generates and saves a page-table-type L1 entry; the parameter pte1Ptr points to the L1 page table entry. At ⑴ the base address of the L2 page table is obtained (allocating one if necessary), and at ⑵ that base address combined with the page-table descriptor type is assigned to the L1 entry. At ⑶ the flags are set (the non-secure bit and the client domain), and at ⑷ the entry is saved into the page table.
STATIC VOID OsMapL1PTE(LosArchMmu *archMmu, PTE_T *pte1Ptr, vaddr_t vaddr, UINT32 flags)
{
    paddr_t pte2Base = 0;

⑴  if (OsGetL2Table(archMmu, OsGetPte1Index(vaddr), &pte2Base) != LOS_OK) {
        LOS_Panic("%s %d, failed to allocate pagetable\n", __FUNCTION__, __LINE__);
    }

⑵  *pte1Ptr = pte2Base | MMU_DESCRIPTOR_L1_TYPE_PAGE_TABLE;
⑶  if (flags & VM_MAP_REGION_FLAG_NS) {
        *pte1Ptr |= MMU_DESCRIPTOR_L1_PAGETABLE_NON_SECURE;
    }
    *pte1Ptr &= MMU_DESCRIPTOR_L1_SMALL_DOMAIN_MASK;
    *pte1Ptr |= MMU_DESCRIPTOR_L1_SMALL_DOMAIN_CLIENT; // use client AP
⑷  OsSavePte1(OsGetPte1Ptr(archMmu->virtTtb, vaddr), *pte1Ptr);
}
4. Virtual-to-physical mapping query function LOS_ArchMmuQuery
4.1 function LOS_ArchMmuQuery
The function LOS_ArchMmuQuery obtains the physical address and mapping attributes corresponding to a virtual address in the process space. The input parameter is the virtual address vaddr; the output parameters are the physical address *paddr and the flags *flags. At ⑴ the L1 page table entry for the virtual address is obtained. At ⑵, if the entry type is Invalid, an error code is returned. At ⑶, if the entry type is Section, ⑷ computes the mapped physical address, where MMU_DESCRIPTOR_L1_SECTION_ADDR(l1Entry) extracts the upper 12 bits of the page table entry and (vaddr & (MMU_DESCRIPTOR_L1_SMALL_SIZE - 1)) is the lower 20 bits of the virtual address, i.e., the in-section offset. At ⑸ the mapping flags are recovered.
If the L1 entry type is Page Table, ⑹ calls the inline function OsGetPte2BasePtr() to compute the base address of the L2 page table: the upper 22 bits of the page table entry are kept, the low 10 bits are cleared, and the result is converted to a virtual address. At ⑺ the L2 entry corresponding to the virtual address is read. If the L2 entry type is a small page, ⑻ computes the physical address and then the corresponding flag value. ⑼ shows that the lightweight kernel does not currently support large pages.
STATUS_T LOS_ArchMmuQuery(const LosArchMmu *archMmu, VADDR_T vaddr, PADDR_T *paddr, UINT32 *flags)
{
⑴  PTE_T l1Entry = OsGetPte1(archMmu->virtTtb, vaddr);
    PTE_T l2Entry;
    PTE_T *l2Base = NULL;

⑵  if (OsIsPte1Invalid(l1Entry)) {
        return LOS_ERRNO_VM_NOT_FOUND;
⑶  } else if (OsIsPte1Section(l1Entry)) {
        if (paddr != NULL) {
⑷          *paddr = MMU_DESCRIPTOR_L1_SECTION_ADDR(l1Entry) + (vaddr & (MMU_DESCRIPTOR_L1_SMALL_SIZE - 1));
        }

        if (flags != NULL) {
⑸          OsCvtSecAttsToFlags(l1Entry, flags);
        }
    } else if (OsIsPte1PageTable(l1Entry)) {
⑹      l2Base = OsGetPte2BasePtr(l1Entry);
        if (l2Base == NULL) {
            return LOS_ERRNO_VM_NOT_FOUND;
        }

⑺      l2Entry = OsGetPte2(l2Base, vaddr);
        if (OsIsPte2SmallPage(l2Entry) || OsIsPte2SmallPageXN(l2Entry)) {
            if (paddr != NULL) {
⑻              *paddr = MMU_DESCRIPTOR_L2_SMALL_PAGE_ADDR(l2Entry) + (vaddr & (MMU_DESCRIPTOR_L2_SMALL_SIZE - 1));
            }

            if (flags != NULL) {
                OsCvtPte2AttsToFlags(l1Entry, l2Entry, flags);
            }
⑼      } else if (OsIsPte2LargePage(l2Entry)) {
            LOS_Panic("%s %d, large page unimplemented\n", __FUNCTION__, __LINE__);
        } else {
            return LOS_ERRNO_VM_NOT_FOUND;
        }
    }

    return LOS_OK;
}
5. Unmapping function LOS_ArchMmuUnmap
The unmapping function LOS_ArchMmuUnmap releases the mapping between a virtual address range and its physical address range in the process space. At ⑴ the function OsGetPte1 obtains the L1 page table entry for the virtual address. If the entry is invalid, ⑵ computes how many unmapped pages to skip. If the entry type is Section and the remaining count covers a full, aligned section, ⑶ unmaps the section. If the entry type is Page Table, ⑷ first releases the second-level mappings and then tries to release the first-level entry, and ⑸ advances the virtual address by the number of pages unmapped; the two functions involved are analyzed below. At ⑹ the TLB is invalidated, which involves CP15 registers and assembly, analyzed later.
STATUS_T LOS_ArchMmuUnmap(LosArchMmu *archMmu, VADDR_T vaddr, size_t count)
{
    PTE_T l1Entry;
    INT32 unmapped = 0;
    UINT32 unmapCount = 0;

    while (count > 0) {
⑴      l1Entry = OsGetPte1(archMmu->virtTtb, vaddr);
        if (OsIsPte1Invalid(l1Entry)) {
⑵          unmapCount = OsUnmapL1Invalid(&vaddr, &count);
        } else if (OsIsPte1Section(l1Entry)) {
            if (MMU_DESCRIPTOR_IS_L1_SIZE_ALIGNED(vaddr) && count >= MMU_DESCRIPTOR_L2_NUMBERS_PER_L1) {
⑶              unmapCount = OsUnmapSection(archMmu, &vaddr, &count);
            } else {
                LOS_Panic("%s %d, unimplemented\n", __FUNCTION__, __LINE__);
            }
        } else if (OsIsPte1PageTable(l1Entry)) {
⑷          unmapCount = OsUnmapL2PTE(archMmu, vaddr, &count);
            OsTryUnmapL1PTE(archMmu, vaddr, OsGetPte2Index(vaddr) + unmapCount,
                            MMU_DESCRIPTOR_L2_NUMBERS_PER_L1 - unmapCount);
⑸          vaddr += unmapCount << MMU_DESCRIPTOR_L2_SMALL_SHIFT;
        } else {
            LOS_Panic("%s %d, unimplemented\n", __FUNCTION__, __LINE__);
        }
        unmapped += unmapCount;
    }
⑹  OsArmInvalidateTlbBarrier();
    return unmapped;
}
5.1 function OsUnmapL1Invalid
The function OsUnmapL1Invalid skips over an unmapped (invalid) range, advancing the virtual address and reducing the remaining count. At ⑴, MMU_DESCRIPTOR_L1_SMALL_SIZE is 1MiB; *vaddr % MMU_DESCRIPTOR_L1_SMALL_SIZE is the offset of the virtual address within its 1MiB section, so subtracting it from 1MiB gives the number of bytes up to the next section boundary, and shifting right by MMU_DESCRIPTOR_L2_SMALL_SHIFT (12) converts that to a number of 4KiB pages; MIN2 caps the result at the remaining count. At ⑵ the skipped page count, shifted left by 12 bits, is converted back to an address length and added to the virtual address. At ⑶ the skipped count is subtracted from the total.
STATIC INLINE UINT32 OsUnmapL1Invalid(vaddr_t *vaddr, UINT32 *count)
{
    UINT32 unmapCount;

⑴  unmapCount = MIN2((MMU_DESCRIPTOR_L1_SMALL_SIZE - (*vaddr % MMU_DESCRIPTOR_L1_SMALL_SIZE)) >>
                      MMU_DESCRIPTOR_L2_SMALL_SHIFT, *count);
⑵  *vaddr += unmapCount << MMU_DESCRIPTOR_L2_SMALL_SHIFT;
⑶  *count -= unmapCount;

    return unmapCount;
}
5.2 function OsUnmapSection
The function OsUnmapSection removes a Section mapping from the first-level page table. At ⑴ the page table entry for the virtual address is cleared to 0. At ⑵ the TLB entry for the address is invalidated. At ⑶ the virtual address and mapping count are updated by one section (1MiB, i.e., 256 pages).
STATIC UINT32 OsUnmapSection(LosArchMmu *archMmu, vaddr_t *vaddr, UINT32 *count)
{
⑴  OsClearPte1(OsGetPte1Ptr((PTE_T *)archMmu->virtTtb, *vaddr));
⑵  OsArmInvalidateTlbMvaNoBarrier(*vaddr);

⑶  *vaddr += MMU_DESCRIPTOR_L1_SMALL_SIZE;
    *count -= MMU_DESCRIPTOR_L2_NUMBERS_PER_L1;

    return MMU_DESCRIPTOR_L2_NUMBERS_PER_L1;
}
5.3 function OsUnmapL2PTE
The function OsUnmapL2PTE releases second-level page table mappings. At ⑴ the function OsGetPte1 obtains the L1 entry for the virtual address, and OsGetPte2BasePtr computes the second-level page table base address from it. At ⑵ the L2 index of the virtual address is obtained. At ⑶ the number of pages to unmap is computed; the minimum is taken because one L2 table covers only the 256 pages up to the next 1MiB boundary, so at most (256 - pte2Index) entries can be cleared in this pass. At ⑷ the L2 entries are cleared one after another, and at ⑸ the corresponding TLB entries are invalidated.
STATIC UINT32 OsUnmapL2PTE(const LosArchMmu *archMmu, vaddr_t vaddr, UINT32 *count)
{
    UINT32 unmapCount;
    UINT32 pte2Index;
    PTE_T *pte2BasePtr = NULL;

⑴  pte2BasePtr = OsGetPte2BasePtr(OsGetPte1((PTE_T *)archMmu->virtTtb, vaddr));
    if (pte2BasePtr == NULL) {
        LOS_Panic("%s %d, pte2 base ptr is NULL\n", __FUNCTION__, __LINE__);
    }

⑵  pte2Index = OsGetPte2Index(vaddr);
⑶  unmapCount = MIN2(MMU_DESCRIPTOR_L2_NUMBERS_PER_L1 - pte2Index, *count);

    /* unmap page run */
⑷  OsClearPte2Continuous(&pte2BasePtr[pte2Index], unmapCount);

    /* invalidate tlb */
⑸  OsArmInvalidateTlbMvaRangeNoBarrier(vaddr, unmapCount);

    *count -= unmapCount;
    return unmapCount;
}
6. Other functions
6.1 mapping attribute modification function LOS_ArchMmuChangeProt
The function LOS_ArchMmuChangeProt modifies the mapping attributes of a virtual address range in the process space, where archMmu is the process space MMU information, vaddr is the virtual address, count is the number of mapped pages, and flags is the new attribute set to apply. At ⑴ the parameters are checked. At ⑵ the physical address mapped by the virtual address is queried; if there is no mapping, ⑶ advances the virtual address by one page and continues with the next page. At ⑷ the current page is unmapped, then ⑸ remaps it with the new attributes, and ⑹ advances the virtual address by one page to process the next one.
STATUS_T LOS_ArchMmuChangeProt(LosArchMmu *archMmu, VADDR_T vaddr, size_t count, UINT32 flags)
{
    STATUS_T status;
    PADDR_T paddr = 0;

⑴  if ((archMmu == NULL) || (vaddr == 0) || (count == 0)) {
        VM_ERR("invalid args: archMmu %p, vaddr %p, count %d", archMmu, vaddr, count);
        return LOS_NOK;
    }

    while (count > 0) {
⑵      count--;
        status = LOS_ArchMmuQuery(archMmu, vaddr, &paddr, NULL);
        if (status != LOS_OK) {
⑶          vaddr += MMU_DESCRIPTOR_L2_SMALL_SIZE;
            continue;
        }

⑷      status = LOS_ArchMmuUnmap(archMmu, vaddr, 1);
        if (status < 0) {
            VM_ERR("invalid args:aspace %p, vaddr %p, count %d", archMmu, vaddr, count);
            return LOS_NOK;
        }

⑸      status = LOS_ArchMmuMap(archMmu, vaddr, paddr, 1, flags);
        if (status < 0) {
            VM_ERR("invalid args:aspace %p, vaddr %p, count %d", archMmu, vaddr, count);
            return LOS_NOK;
        }
⑹      vaddr += MMU_DESCRIPTOR_L2_SMALL_SIZE;
    }

    return LOS_OK;
}
6.2 mapping transfer function LOS_ArchMmuMove
The function LOS_ArchMmuMove transfers the mappings of one virtual address range in the process space to another, unused virtual address range and remaps them there. The parameter oldVaddr is the old virtual address, newVaddr is the new virtual address, and flags can change the mapping attributes during remapping. At ⑴ the physical memory mapped by the old virtual address is queried; if there is no mapping, both the old and new addresses are advanced by one page and the loop continues. At ⑵ the old virtual address is unmapped, and at ⑶ the new virtual address is mapped to the queried physical address. At ⑷ both addresses are advanced by one page to process the next one.
STATUS_T LOS_ArchMmuMove(LosArchMmu *archMmu, VADDR_T oldVaddr, VADDR_T newVaddr, size_t count, UINT32 flags)
{
    STATUS_T status;
    PADDR_T paddr = 0;

    if ((archMmu == NULL) || (oldVaddr == 0) || (newVaddr == 0) || (count == 0)) {
        VM_ERR("invalid args: archMmu %p, oldVaddr %p, newVddr %p, count %d",
               archMmu, oldVaddr, newVaddr, count);
        return LOS_NOK;
    }

    while (count > 0) {
        count--;
⑴      status = LOS_ArchMmuQuery(archMmu, oldVaddr, &paddr, NULL);
        if (status != LOS_OK) {
            oldVaddr += MMU_DESCRIPTOR_L2_SMALL_SIZE;
            newVaddr += MMU_DESCRIPTOR_L2_SMALL_SIZE;
            continue;
        }

        // we need to clear the mapping here and remain the phy page.
⑵      status = LOS_ArchMmuUnmap(archMmu, oldVaddr, 1);
        if (status < 0) {
            VM_ERR("invalid args: archMmu %p, vaddr %p, count %d", archMmu, oldVaddr, count);
            return LOS_NOK;
        }

⑶      status = LOS_ArchMmuMap(archMmu, newVaddr, paddr, 1, flags);
        if (status < 0) {
            VM_ERR("invalid args:archMmu %p, old_vaddr %p, new_addr %p, count %d",
                   archMmu, oldVaddr, newVaddr, count);
            return LOS_NOK;
        }
⑷      oldVaddr += MMU_DESCRIPTOR_L2_SMALL_SIZE;
        newVaddr += MMU_DESCRIPTOR_L2_SMALL_SIZE;
    }

    return LOS_OK;
}
Summary
This article introduced the basic concepts and operating mechanism of MMU virtual-to-physical mapping and analyzed the code of common interfaces such as mapping initialization, mapping query, mapping virtual memory to physical memory, releasing mappings, changing mapping attributes, and remapping. Thank you for reading. If you have any questions, please leave a comment.