Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/rmap.c Create Date: 2022-07-28 14:56:35
Last Modify: 2020-03-12 14:18:49 Copyright © Brick

Name: page_add_new_anon_rmap - add pte mapping to a new anonymous page
@page: the page to add the mapping to
@vma: the vm area in which the mapping is added
@address: the user virtual address mapped
@compound: charge the page as compound or small page

Proto: void page_add_new_anon_rmap(struct page *page, struct vm_area_struct *vma, unsigned long address, bool compound)

Type: void

Parameter:

Type                    Parameter Name
struct page *           page
struct vm_area_struct * vma
unsigned long           address
bool                    compound
1173  nr = If compound Then hpage_nr_pages(page) Else 1
1175  VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma)
1176  __SetPageSwapBacked(page)
1177  If compound Then
1178  VM_BUG_ON_PAGE(!PageTransHuge(page), page)  (PageTransHuge() returns true for both transparent huge and hugetlbfs pages, but not for normal pages)
1180  atomic_set(compound_mapcount_ptr(page), 0)  (increment count; it starts at -1)
1181  __inc_node_page_state(page, NR_ANON_THPS)
1182  Else
1184  VM_BUG_ON_PAGE(PageTransCompound(page), page)  (an anon THP is always mapped with a PMD first)
1186  atomic_set(&page->_mapcount, 0)  (_mapcount encodes the number of times this page is referenced by a page table; increment count, which starts at -1)
1188  __mod_node_page_state(page_pgdat(page), NR_ANON_MAPPED, nr)  (accounts mapped anonymous pages)
1189  __page_set_anon_rmap(page, vma, address, 1)  (set up a new anonymous rmap; the page is exclusively owned by the current process)
Caller
Name                Describe
do_swap_page        We enter with non-exclusive mmap_sem (to exclude vma changes but allow concurrent faults), with the pte mapped but not yet locked. We return with the pte unmapped and unlocked, and with mmap_sem locked or unlocked in the same cases.
do_anonymous_page   We enter with non-exclusive mmap_sem (to exclude vma changes but allow concurrent faults), with the pte mapped but not yet locked. We return with mmap_sem still held, but with the pte unmapped and unlocked.
alloc_set_pte       Set up a new PTE entry for the given page and add a reverse page mapping.
unuse_pte           No need to decide whether this PTE shares the swap entry with others; just let do_wp_page work it out if a write is requested later. To force COW, vm_page_prot omits write permission from any private vma.
__do_huge_pmd_anonymous_page
do_huge_pmd_wp_page_fallback
do_huge_pmd_wp_page
collapse_huge_page
mcopy_atomic_pte