Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/gup.c    Create Date: 2022-07-28 14:34:21
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name:follow_page_pte

Proto:static struct page *follow_page_pte(struct vm_area_struct *vma, unsigned long address, pmd_t *pmd, unsigned int flags, struct dev_pagemap **pgmap)

Type:struct page *

Parameter:

Type                       Name
struct vm_area_struct *    vma
unsigned long              address
pmd_t *                    pmd
unsigned int               flags
struct dev_pagemap **      pgmap
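
For reference, the prototype with each parameter's role noted (the per-parameter comments are explanatory additions, not part of the source):

    static struct page *follow_page_pte(struct vm_area_struct *vma,  /* VMA containing the address */
                    unsigned long address,        /* user virtual address to look up */
                    pmd_t *pmd,                   /* PMD entry already resolved by the caller */
                    unsigned int flags,           /* FOLL_* flags controlling the walk */
                    struct dev_pagemap **pgmap);  /* out: pgmap reference taken for device pages */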
177  mm = vma->vm_mm (the address space this VMA belongs to)
182  retry :
183  If unlikely(pmd_bad(*pmd)) Then Return no_page_table(vma, flags)
186  ptep = pte_offset_map_lock(mm, pmd, address, &ptl)
187  pte = *ptep
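
Lines 177-187 set up the walk and take the PTE lock. A sketch of that entry/retry path, reconstructed from mm/gup.c in v5.5 (variable names as in that source):

    struct mm_struct *mm = vma->vm_mm;   /* the address space we belong to */
    struct page *page;
    spinlock_t *ptl;
    pte_t *ptep, pte;

    retry:
    if (unlikely(pmd_bad(*pmd)))                          /* bad PMD: nothing to walk */
            return no_page_table(vma, flags);

    ptep = pte_offset_map_lock(mm, pmd, address, &ptl);   /* map the PTE page and take its lock */
    pte = *ptep;                                          /* snapshot the PTE under the lock */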
188  If Not pte_present(pte) Then
195  If likely(!(flags & FOLL_MIGRATION)) Then Go to no_page (the caller did not ask to wait for migration entries)
197  If pte_none(pte) Then Go to no_page
199  entry = pte_to_swp_entry(pte) (convert the arch-dependent PTE into an arch-independent swp_entry_t)
200  If Not is_migration_entry(entry) Then Go to no_page
202  pte_unmap_unlock(ptep, ptl)
203  migration_entry_wait(mm, pmd, address)
204  Go to retry
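
Lines 188-204 handle a PTE that is not present: unless the caller passed FOLL_MIGRATION, the walk gives up; if the entry is a migration entry, the lock is dropped, the code waits for migration to finish, and the lookup is retried. A sketch of that branch:

    if (!pte_present(pte)) {
            swp_entry_t entry;

            if (likely(!(flags & FOLL_MIGRATION)))   /* caller does not want to wait for migration */
                    goto no_page;
            if (pte_none(pte))                       /* nothing mapped at all */
                    goto no_page;
            entry = pte_to_swp_entry(pte);           /* arch PTE -> arch-independent swap entry */
            if (!is_migration_entry(entry))
                    goto no_page;
            pte_unmap_unlock(ptep, ptl);             /* drop the PTE lock before sleeping */
            migration_entry_wait(mm, pmd, address);  /* sleep until migration completes */
            goto retry;
    }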
206  If (flags & FOLL_NUMA) && pte_protnone(pte) Then Go to no_page (PROT_NONE PTEs only matter here when they are NUMA-hinting entries)
208  If (flags & FOLL_WRITE) && Not can_follow_write_pte(pte, flags) Then (FOLL_FORCE may write through an unwritable PTE only after a COW cycle has made it dirty)
209  pte_unmap_unlock(ptep, ptl)
210  Return NULL
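
Lines 206-210 filter out NUMA-hinting (PROT_NONE) PTEs and refuse write access through an unwritable PTE unless can_follow_write_pte() allows it. Roughly:

    if ((flags & FOLL_NUMA) && pte_protnone(pte))   /* NUMA-hinting PTE: let the fault path handle it */
            goto no_page;
    if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) {
            /* FOLL_FORCE may write via an unwritable PTE only after a COW cycle made it dirty */
            pte_unmap_unlock(ptep, ptl);
            return NULL;
    }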
213  page = vm_normal_page(vma, address, pte)
214  If Not page && pte_devmap(pte) && (flags & FOLL_GET) Then (device mappings are only returned for FOLL_GET, while the pgmap reference is held)
219  *pgmap = get_dev_pagemap(pte_pfn(pte), *pgmap)
220  If *pgmap Then page = pte_page(pte)
222  Else Go to no_page
224  Else if unlikely(!page) Then
227  page = ERR_PTR(-EFAULT) (with FOLL_DUMP, avoid special pages such as the zero page in core dumps)
228  Go to out
231  If is_zero_pfn(pte_pfn(pte)) Then
232  page = pte_page(pte)
233  Else
237  page = ERR_PTR(ret) (ret is the return value of follow_pfn_pte())
238  Go to out
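
Lines 213-238 translate the PTE into a struct page. Device (pte_devmap) mappings are returned only when FOLL_GET is set, because they are valid only while the pgmap reference is held; mappings without a struct page fall back to the zero-page or error paths. A sketch:

    page = vm_normal_page(vma, address, pte);
    if (!page && pte_devmap(pte) && (flags & FOLL_GET)) {
            *pgmap = get_dev_pagemap(pte_pfn(pte), *pgmap);  /* pin the device pagemap */
            if (*pgmap)
                    page = pte_page(pte);
            else
                    goto no_page;
    } else if (unlikely(!page)) {
            if (flags & FOLL_DUMP) {
                    page = ERR_PTR(-EFAULT);  /* keep special (e.g. zero) pages out of core dumps */
                    goto out;
            }
            if (is_zero_pfn(pte_pfn(pte))) {
                    page = pte_page(pte);     /* the shared zero page */
            } else {
                    int ret = follow_pfn_pte(vma, address, ptep, flags);
                    page = ERR_PTR(ret);
                    goto out;
            }
    }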
242  If (flags & FOLL_SPLIT) && PageTransCompound(page) Then (do not return a transparent huge page; split it instead)
244  get_page(page)
245  pte_unmap_unlock(ptep, ptl)
246  lock_page(page) (split_huge_page() requires the page lock)
247  ret = split_huge_page(page)
248  unlock_page(page)
249  put_page(page)
250  If ret Then Return ERR_PTR(ret)
252  Go to retry
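
Lines 242-252: with FOLL_SPLIT, a transparent huge page is not returned as-is; it is pinned, the PTE lock is dropped, the compound page is split, and the lookup restarts. A sketch:

    if (flags & FOLL_SPLIT && PageTransCompound(page)) {
            int ret;

            get_page(page);              /* keep the page alive while the lock is dropped */
            pte_unmap_unlock(ptep, ptl);
            lock_page(page);             /* split_huge_page() requires the page lock */
            ret = split_huge_page(page);
            unlock_page(page);
            put_page(page);
            if (ret)
                    return ERR_PTR(ret);
            goto retry;                  /* the PTE may have changed; start over */
    }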
255  If flags & FOLL_GET Then (take a reference with try_get_page(); if that fails:)
257  page = ERR_PTR(-ENOMEM)
258  Go to out
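
Lines 255-258: when FOLL_GET is set, a reference is taken on the page; if the refcount cannot be raised, the walk fails with -ENOMEM. A sketch:

    if (flags & FOLL_GET) {
            if (unlikely(!try_get_page(page))) {  /* fails if the refcount has already dropped to zero */
                    page = ERR_PTR(-ENOMEM);
                    goto out;
            }
    }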
261  If flags & FOLL_TOUCH Then
262  If (flags & FOLL_WRITE) && Not pte_dirty(pte) && Not PageDirty(page) Then set_page_dirty(page)
270  mark_page_accessed(page) (pte_mkyoung() would be more precise, but it needs atomic care to avoid losing the dirty bit)
272  If (flags & FOLL_MLOCK) && (vma->vm_flags & VM_LOCKED) Then
274  If PageTransCompound(page) Then Go to out (do not mlock a pte-mapped THP)
287  lru_add_drain() (push cached pages to the LRU before mlocking the page)
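
Lines 261-270: FOLL_TOUCH marks the page dirty when it was looked up for writing and records the access; mark_page_accessed() is used rather than pte_mkyoung() because the latter would need atomic care to avoid losing the dirty bit. Roughly:

    if (flags & FOLL_TOUCH) {
            if ((flags & FOLL_WRITE) &&
                !pte_dirty(pte) && !PageDirty(page))
                    set_page_dirty(page);
            /* mark_page_accessed() ages the page without touching the PTE's dirty bit */
            mark_page_accessed(page);
    }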
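
Lines 272-287: with FOLL_MLOCK on a VM_LOCKED VMA, the page is mlocked, skipping pte-mapped transparent huge pages and avoiding lock_page() on the zero page. A sketch of that block (the mapping/trylock details are reconstructed from the v5.5 source and may differ slightly):

    if ((flags & FOLL_MLOCK) && (vma->vm_flags & VM_LOCKED)) {
            if (PageTransCompound(page))          /* do not mlock a pte-mapped THP */
                    goto out;

            if (page->mapping && trylock_page(page)) {
                    lru_add_drain();              /* push cached pages to the LRU */
                    mlock_vma_page(page);         /* the page is mapped and pinned, so this is safe */
                    unlock_page(page);
            }
    }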
298  out :
299  pte_unmap_unlock(ptep, ptl)
300  Return page
301  no_page :
302  pte_unmap_unlock(ptep, ptl)
303  If Not pte_none(pte) Then Return NULL
305  Return no_page_table(vma, flags)
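
Lines 298-305: both exit labels drop the PTE lock. From no_page, a non-none PTE means the page cannot be returned under these flags, so NULL is returned; a none PTE is passed to no_page_table() so core dumps can skip the hole. A sketch:

    out:
            pte_unmap_unlock(ptep, ptl);
            return page;                      /* a page pointer or an ERR_PTR() value */
    no_page:
            pte_unmap_unlock(ptep, ptl);
            if (!pte_none(pte))
                    return NULL;              /* mapped, but not returnable with these flags */
            return no_page_table(vma, flags); /* NULL, or -EFAULT for FOLL_DUMP over a hole */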
Caller
Name    Description
follow_pmd_mask
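
For context, follow_pmd_mask() falls through to this function on the non-huge path roughly as follows (a sketch based on the v5.5 source; the follow_page_context plumbing is simplified and the field name ctx->pgmap is taken from that source):

    /* in follow_pmd_mask(), once the PMD is known to map a normal PTE page table: */
    return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);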