Function report

Linux Kernel

v5.5.9


Source Code: mm/ksm.c    Create Date: 2022-07-28 15:40:43
Last Modify: 2020-03-12 14:18:49

Name: replace_page - replace page in vma by new ksm page
@vma: vma that holds the pte pointing to page
@page: the page we are replacing by kpage
@kpage: the ksm page we replace page by
@orig_pte: the original value of the pte

Proto:static int replace_page(struct vm_area_struct *vma, struct page *page, struct page *kpage, pte_t orig_pte)

Type:int

Parameter:

Type                       Name
struct vm_area_struct *    vma
struct page *              page
struct page *              kpage
pte_t                      orig_pte
1122  mm = vma->vm_mm - the address space we belong to
1128  err = -EFAULT
1131  addr = page_address_in_vma(page, vma) - at what user virtual address is page expected in vma? The caller should check the page is actually part of the vma.
1132  If addr == -EFAULT Then Go to out
1135  pmd = mm_find_pmd(mm, addr)
1136  If Not pmd Then Go to out
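Read together, lines 1122-1136 above set up the locals and make sure page really lives in this vma before any page-table work is done. A rough C sketch of this prologue, reconstructed from the walkthrough (local declarations added here for context, using the identifiers named above):

    struct mm_struct *mm = vma->vm_mm;      /* the address space we belong to */
    struct mmu_notifier_range range;
    unsigned long addr;
    pmd_t *pmd;
    pte_t *ptep, newpte;
    spinlock_t *ptl;
    int err = -EFAULT;

    addr = page_address_in_vma(page, vma);  /* user VA where page should be mapped */
    if (addr == -EFAULT)
        goto out;                           /* page is not part of this vma */

    pmd = mm_find_pmd(mm, addr);            /* walk pgd/p4d/pud down to the pmd */
    if (!pmd)
        goto out;                           /* nothing mapped at this address */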
1139  mmu_notifier_range_init( & range, MMU_NOTIFY_CLEAR, 0, vma, mm, addr, addr + PAGE_SIZE)
1141  mmu_notifier_invalidate_range_start( & range)
1143  ptep = pte_offset_map_lock(mm, pmd, addr, & ptl)
1144  If Not pte_same( * ptep, orig_pte) Then
1145  pte_unmap_unlock(ptep, ptl)
1146  Go to out_mn
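Lines 1139-1146 bracket the update with an MMU-notifier range and then revalidate the pte under its lock; if the pte no longer matches orig_pte, someone changed the mapping and the merge is abandoned. A sketch of that step:

    mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm, addr,
                            addr + PAGE_SIZE);
    mmu_notifier_invalidate_range_start(&range);

    ptep = pte_offset_map_lock(mm, pmd, addr, &ptl);
    if (!pte_same(*ptep, orig_pte)) {       /* pte changed since it was captured */
        pte_unmap_unlock(ptep, ptl);
        goto out_mn;                        /* still must end the notifier range */
    }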
1153  If Not is_zero_pfn(page_to_pfn(kpage)) Then
1154  get_page(kpage)
1155  page_add_anon_rmap(kpage, vma, addr, false) - add the pte mapping to the anonymous ksm page kpage at address addr in vma
1156  newpte = mk_pte(kpage, vma->vm_page_prot) - convert kpage and the access permissions of this VMA into a page table entry
1157  Else
1158  newpte = pte_mkspecial(pfn_pte(page_to_pfn(kpage), vma->vm_page_prot)) - map the zero page as a special pte with this VMA's access permissions
1166  dec_mm_counter(mm, MM_ANONPAGES)
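The branch in lines 1153-1166 builds the new pte for the two possible cases, a real KSM page or the shared zero page. A sketch, with vma->vm_page_prot standing in for "access permissions of this VMA":

    if (!is_zero_pfn(page_to_pfn(kpage))) {
        /* real KSM page: take a reference and add the anon rmap */
        get_page(kpage);
        page_add_anon_rmap(kpage, vma, addr, false);
        newpte = mk_pte(kpage, vma->vm_page_prot);
    } else {
        /*
         * kpage is the zero page, which is not anonymous: map it as a
         * special pte and fix the MM_ANONPAGES accounting so the counters
         * stay consistent when the mm is torn down.
         */
        newpte = pte_mkspecial(pfn_pte(page_to_pfn(kpage),
                                       vma->vm_page_prot));
        dec_mm_counter(mm, MM_ANONPAGES);
    }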
1169  flush_cache_page(vma, addr, pte_pfn( * ptep))
1176  ptep_clear_flush(vma, addr, ptep)
1177  set_pte_at_notify(mm, addr, ptep, newpte) - sets the pte _after_ running the notifier. Starting with the secondary MMUs is safe because the primary-MMU pte invalidate has already happened via ptep_clear_flush() before set_pte_at_notify() is invoked.
1179  page_remove_rmap(page, false) - take down the pte mapping from page (the caller needs to hold the pte lock)
1180  If Not page_mapped(page) Then try_to_free_swap(page) - once the page is no longer mapped into any page tables, try to free its swap space
1182  put_page(page) - drop our reference to the old page, freeing it (and any swap cache it holds) if we were the last user
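Lines 1169-1182 perform the actual replacement: flush, clear the old pte, install newpte, then drop the old page's rmap and reference. A sketch:

    flush_cache_page(vma, addr, pte_pfn(*ptep));
    /*
     * No extra notify is needed: a read-only page is being replaced by
     * another read-only page with the same content.
     */
    ptep_clear_flush(vma, addr, ptep);      /* invalidate the primary-MMU pte */
    set_pte_at_notify(mm, addr, ptep, newpte);

    page_remove_rmap(page, false);          /* pte lock is still held here */
    if (!page_mapped(page))
        try_to_free_swap(page);             /* no mappings left: reclaim swap */
    put_page(page);                         /* drop our reference to the old page */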
1184  pte_unmap_unlock(ptep, ptl)
1185  err = 0
1186  out_mn :
1187  mmu_notifier_invalidate_range_end( & range)
1188  out :
1189  Return err
Caller
Name                   Description
try_to_merge_one_page  take two pages and merge them into one
                       @vma: the vma that holds the pte pointing to page
                       @page: the PageAnon page that we want to replace with kpage
                       @kpage: the PageKsm page that we want to map instead of page
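For context, a simplified sketch of how try_to_merge_one_page reaches replace_page (error handling and the !kpage path are omitted here): the caller first write-protects the pte, capturing its original value, and only calls replace_page() when the two pages are byte-for-byte identical.

    pte_t orig_pte = __pte(0);
    int err = -EFAULT;

    if (write_protect_page(vma, page, &orig_pte) == 0 &&
        pages_identical(page, kpage))
        err = replace_page(vma, page, kpage, orig_pte);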