Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/khugepaged.c    Create Date: 2022-07-28 16:06:37
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name: __collapse_huge_page_swapin — bring missing pages in from swap to complete THP collapse. Only done if khugepaged_scan_pmd believes it is worthwhile. Called and returns without pte mapped or spinlocks held, but with mmap_sem held to protect against vma changes.

Proto:static bool __collapse_huge_page_swapin(struct mm_struct *mm, struct vm_area_struct *vma, unsigned long address, pmd_t *pmd, int referenced)

Type:bool

Parameter:

Type                      Parameter Name
struct mm_struct *        mm
struct vm_area_struct *   vma
unsigned long             address
pmd_t *                   pmd
int                       referenced
895  swapped_in = 0
896  ret = 0
897  struct vm_fault vmf = { .vma = vma, .address = address (faulting virtual address), .flags = FAULT_FLAG_ALLOW_RETRY (retry fault if blocking), .pmd = pmd, .pgoff = linear_page_index(vma, address) }
906  If referenced < HPAGE_PMD_NR / 2 Then (only swap in if there are enough young ptes)
907  trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0)
908  Return false
910  vmf.pte = pte_offset_map(pmd, address) (pte entry matching the address; NULL if the page table hasn't been allocated)
911  While vmf.address < address + HPAGE_PMD_NR * PAGE_SIZE, advancing vmf.pte and vmf.address each iteration, cycle
913  vmf.orig_pte = *vmf.pte (value of the pte at the time of fault)
914  If Not is_swap_pte(vmf.orig_pte) (check whether the pte points to a swap entry) Then Continue
916  swapped_in++
917  ret = do_swap_page(&vmf) (entered with non-exclusive mmap_sem held, to exclude vma changes but allow concurrent faults, and pte mapped but not locked; returns with pte unmapped and unlocked, and with mmap_sem either still held or released)
920  If ret & VM_FAULT_RETRY Then (do_swap_page returned with mmap_sem released; re-take it and revalidate the vma before continuing)
928  If mm_find_pmd(mm, address) != pmd Then (the pmd is no longer valid) Return false
933  If ret & VM_FAULT_ERROR Then
935  Return false
938  vmf.pte = pte_offset_map(pmd, vmf.address) (the pte was unmapped by do_swap_page, so map it again)
940  vmf.pte-- (step back to the last entry visited by the loop)
941  pte_unmap(vmf.pte)
942  trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 1)
943  Return true
Caller
Name                  Describe
collapse_huge_page