Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/page_vma_mapped.c    Create Date: 2022-07-28 14:54:23
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name: page_vma_mapped_walk - check if @pvmw->page is mapped in @pvmw->vma at @pvmw->address
@pvmw: pointer to struct page_vma_mapped_walk. page, vma, address and flags must be set; pmd, pte and ptl must be NULL.
Returns true if the page is mapped in the vma.

Proto:bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)

Type:bool

Parameter:

Type                             Name
struct page_vma_mapped_walk *    pvmw
140  mm = pvmw->vma->vm_mm (the address space we belong to)
141  page = pvmw->page
148  If pmd && Not pte Then Return not_found(pvmw)
151  If pte Then Go to next_pte
154  If unlikely(PageHuge(page)) Then
156  pte = huge_pte_offset(mm, address, page_size(page)), where page_size() returns the number of bytes in this potentially compound page
157  If Not pte Then Return false
160  ptl = huge_pte_lockptr(page_hstate(page), mm, pte)
161  spin_lock(ptl)
162  If Not check_pte(pvmw) (check if @pvmw->page is mapped at @pvmw->pte) Then Return not_found(pvmw)
164  Return true
166  restart :
167  pgd = pgd_offset(mm, address) (a shortcut to get a pgd_t in a given mm)
168  If Not pgd_present( * pgd) Then Return false
170  p4d = p4d_offset(pgd, address)
171  If Not p4d_present( * p4d) Then Return false
173  pud = pud_offset(p4d, address)
174  If Not pud_present( * pud) Then Return false
176  pmd = pmd_offset(pud, address)
182  pmde = READ_ONCE( * pmd)
183  If pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde) Then
184  ptl = pmd_lock(mm, pmd)
190  Return true
191  Else if Not pmd_present( * pmd) Then
192  If thp_migration_supported() Then
196  entry = pmd_to_swp_entry( * pmd)
198  If migration_entry_to_page(entry) != page Then Return not_found(pvmw)
200  Return true
203  Return not_found(pvmw)
204  Else
206  spin_unlock(ptl)
207  ptl = NULL
209  Else if Not pmd_present(pmde) Then
210  Return false
212  If Not map_pte(pvmw) Then Go to next_pte
214  While true cycle
215  If check_pte(pvmw) (check if @pvmw->page is mapped at @pvmw->pte) Then Return true
217  next_pte :
219  If Not PageTransHuge(page) || PageHuge(page) Then Return not_found(pvmw) (PageHuge() only returns true for hugetlbfs pages; PageTransHuge() returns true for both transparent huge and hugetlbfs pages, but not normal pages)
221  Do
222  address += PAGE_SIZE
229  If address % PMD_SIZE == 0 Then
230  pte_unmap(pte)
231  If ptl Then
232  spin_unlock(ptl)
233  ptl = NULL
235  Go to restart
236  Else
237  pte++
239  While pte_none( * pte) cycle (closes the Do loop)
241  If Not ptl Then
242  ptl = pte_lockptr(mm, pmd)
243  spin_lock(ptl)
Caller
Name                 Description
page_mapped_in_vma    check whether a page is really mapped in a VMA. @page: the page to test; @vma: the VMA to test. Returns 1 if the page is mapped into the page tables of the VMA, 0 if the page is not mapped into the page tables of this VMA.
page_referenced_one
page_mkclean_one
try_to_unmap_one    @arg: enum ttu_flags will be passed to this argument
write_protect_page
remove_migration_pte    Restore a potential migration pte to a working pte entry
page_idle_clear_pte_refs_one
__replace_page    replace page in vma by new page