Function report

Linux Kernel

v5.5.9


Source Code: arch/x86/include/asm/pgtable.h

Name: pte_pfn

Proto: static inline unsigned long pte_pfn(pte_t pte)

Type: unsigned long

Parameter:

Type     Parameter Name
pte_t    pte
phys_addr_t pfn = pte_val(pte);
pfn ^= protnone_mask(pfn);                    /* entries that were set to PROT_NONE are inverted */
return (pfn & PTE_PFN_MASK) >> PAGE_SHIFT;    /* PTE_PFN_MASK extracts the PFN from a (pte|pmd|pud|pgd)val_t of a 4KB page; PAGE_SHIFT determines the page size */
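
The XOR with protnone_mask() undoes the PTE inversion that x86 applies to PROT_NONE entries (an L1TF mitigation), so the true PFN bits can then be masked out and shifted down. Below is a minimal user-space sketch of this bit logic, not the kernel implementation: PAGE_SHIFT = 12 and a PFN field in bits 12..51 match x86-64 4KB pages, but the explicit "inverted" flag is an assumption for illustration; the real protnone_mask() derives it from the entry's own flag bits.

    /* Minimal sketch of the pte_pfn() bit logic (user space). */
    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT   12                      /* 4KB pages */
    #define PTE_PFN_MASK 0x000ffffffffff000ULL   /* PFN field, bits 12..51 as on x86-64 */

    /* Stand-in for the kernel's protnone_mask(): all-ones when the entry
     * was stored inverted for PROT_NONE, zero otherwise. The explicit
     * "inverted" parameter is a simplification for this sketch. */
    static uint64_t protnone_mask(uint64_t val, int inverted)
    {
        (void)val;               /* the real helper inspects val's flag bits */
        return inverted ? ~0ULL : 0ULL;
    }

    static unsigned long pte_pfn_sketch(uint64_t pteval, int inverted)
    {
        uint64_t pfn = pteval;

        pfn ^= protnone_mask(pfn, inverted);        /* undo the PROT_NONE inversion */
        return (pfn & PTE_PFN_MASK) >> PAGE_SHIFT;  /* isolate and shift the PFN field */
    }

    int main(void)
    {
        uint64_t pte = (0x1234ULL << PAGE_SHIFT) | 0x1;  /* PFN 0x1234, present bit set */

        printf("pfn = %#lx\n", pte_pfn_sketch(pte, 0));  /* prints pfn = 0x1234 */
        return 0;
    }
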
Caller
Name - Description
is_crashed_pfn_valid
__replace_page - replace page in vma by new page
follow_page_pte
get_gate_page
vm_normal_page - this function gets the "struct page" associated with a pte. "Special" mappings do not wish to be associated with a "struct page" (either it doesn't exist, or it exists but they don't want to touch it); in this case, NULL is returned.
insert_pfn
wp_page_reuse - handle write page faults for pages that can be reused in the current vma. This can happen either due to the mapping being with the VM_SHARED flag, or due to us being the last reference standing to the page.
wp_page_copy - handle the case of a page which we actually need to copy to a new page. Called with mmap_sem locked and the old page referenced, but without the ptl held. High level logic flow: allocate a page, copy the content of the old page to the new one.
follow_pfn - look up PFN at a user virtual address. @vma: memory mapping; @address: user virtual address; @pfn: location to store found PFN. Only IO mappings and raw PFN mappings are allowed. Return: zero and the pfn at @pfn on success, -ve otherwise.
prot_none_pte_entry
prot_none_hugetlb_entry
check_pte - check if @pvmw->page is mapped at the @pvmw->pte. page_vma_mapped_walk() found a place where @pvmw->page is *potentially* mapped.
page_mkclean_one
try_to_unmap_one - @arg: enum ttu_flags will be passed to this argument.
vmemmap_verify
replace_page - replace page in vma by new ksm page. @vma: vma that holds the pte pointing to page; @page: the page we are replacing by kpage; @kpage: the ksm page we replace page by; @orig_pte: the original value of the pte.
release_pte_pages
__collapse_huge_page_isolate
__collapse_huge_page_copy
khugepaged_scan_pmd
hmm_vma_handle_pte
hmm_vma_walk_hugetlb_entry
dax_entry_mkclean - walk all mappings of a given index of a file and write-protect them.
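
Most of the callers above follow the same pattern: translate the pte to a PFN with pte_pfn(), validate it, and convert it to a struct page. A hedged sketch of that pattern follows; the helper name page_of_pte() is hypothetical, while pfn_valid() and pfn_to_page() are the standard kernel helpers (locking and the pte_present() checks real callers perform are omitted for brevity).

    #include <linux/mm.h>      /* pfn_valid(), pfn_to_page() */

    /* page_of_pte - hypothetical helper showing the usual caller pattern:
     * pte -> PFN via pte_pfn(), then PFN -> struct page via pfn_to_page(),
     * guarded by pfn_valid() since not every PFN is backed by a struct page. */
    static struct page *page_of_pte(pte_t pte)
    {
        unsigned long pfn = pte_pfn(pte);

        if (!pfn_valid(pfn))
            return NULL;
        return pfn_to_page(pfn);
    }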