Function report

Linux Kernel

v5.5.9

Source Code: mm/ksm.c

Name:write_protect_page

Proto:static int write_protect_page(struct vm_area_struct *vma, struct page *page, pte_t *orig_pte)

Type:int

Parameter:

Type                        Name        Description
struct vm_area_struct *     vma         the vma that holds the pte mapping page
struct page *               page        the anonymous page to write-protect
pte_t *                     orig_pte    on success, receives the page's resulting pte value for the caller to check later
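Before the line-by-line walk-through, a minimal sketch of the calling convention: on success (return 0) the page has been mapped read-only and clean in vma, and *orig_pte holds the resulting pte value. The snippet is illustrative only, not an excerpt from mm/ksm.c (the function is static, so real callers live in that file; see Caller below).

	pte_t orig_pte = __pte(0);
	int err = write_protect_page(vma, page, &orig_pte);

	if (err == 0) {
		/* page is now mapped read-only and clean in vma; orig_pte
		 * records the pte value that must still be in place when
		 * the page is later merged by replace_page(). */
	}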
1035  mm = vma->vm_mm (the address space this vma belongs to)
1036  struct page_vma_mapped_walk pvmw = {.page = page, .vma = vma}
1041  err = -EFAULT
1044  address = page_address_in_vma(page, vma) (the user virtual address at which page is expected in vma; the caller must ensure page really is part of vma)
1045  If address == -EFAULT Then Go to out
1048  BUG_ON(PageTransCompound(page)) (KSM never operates on transparent huge pages or hugetlbfs pages, so a compound page here is a bug)
1050  mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm, address, address + PAGE_SIZE)
1053  mmu_notifier_invalidate_range_start(&range)
1055  If Not page_vma_mapped_walk(&pvmw) (checks whether pvmw.page is mapped in pvmw.vma at pvmw.address) Then Go to out_mn
1057  If WARN_ONCE(!pte, "Unexpected PMD mapping?") Then Go to out_unlock
1060  If pte_write(*pte) || pte_dirty(*pte) || (pte_protnone(*pte) && pte_savedwrite(*pte)) || mm_tlb_flush_pending(mm) Then (the pte must be torn down and rewritten if it is writable, dirty, NUMA-protnone with the saved-write bit set, or if a TLB flush is still pending for this mm)
1065  swapped = PageSwapCache(page)
1066  flush_cache_page(vma, address, page_to_pfn(page))
1081  entry = ptep_clear_flush(vma, address, pte) (the pte is cleared and the TLB flushed before the reference-count check below, so a racing get_user_pages_fast() (e.g. O_DIRECT) cannot take a new reference in the middle of the check; no notify is needed since the mapping is only downgraded to read-only, see Documentation/vm/mmu_notifier.rst)
1086  If page_mapcount(page) + 1 + swapped != page_count(page) Then (every pte mapping accounts for one reference, the caller holds one more from follow_page(), and the swap cache holds one if the page is swapped; any surplus means someone like O_DIRECT still holds the page)
1087  set_pte_at(mm, address, pte, entry) (restore the cleared pte)
1088  Go to out_unlock
1090  If pte_dirty(entry) Then set_page_dirty(page)
1093  If pte_protnone(entry) Then entry = pte_mkclean(pte_clear_savedwrite(entry))
1095  Else entry = pte_mkclean(pte_wrprotect(entry))
1097  set_pte_at_notify(mm, address, pte, entry) (sets the pte after notifying secondary MMUs; this order is safe because the primary pte was already invalidated by ptep_clear_flush() above)
1099  *orig_pte = *pte
1100  err = 0
1102  out_unlock:
1103  page_vma_mapped_walk_done(&pvmw)
1104  out_mn:
1105  mmu_notifier_invalidate_range_end(&range)
1106  out:
1107  Return err
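Read end to end, the annotated lines above correspond to the following function body. This is a sketch reassembled from the walk-through and the v5.5 mm/ksm.c source; the comments paraphrase the original ones, including the elided block at lines 1067-1080.

	static int write_protect_page(struct vm_area_struct *vma, struct page *page,
				      pte_t *orig_pte)
	{
		struct mm_struct *mm = vma->vm_mm;
		struct page_vma_mapped_walk pvmw = {
			.page = page,
			.vma = vma,
		};
		int swapped;
		int err = -EFAULT;
		struct mmu_notifier_range range;

		pvmw.address = page_address_in_vma(page, vma);
		if (pvmw.address == -EFAULT)
			goto out;

		/* KSM never works on compound (huge) pages. */
		BUG_ON(PageTransCompound(page));

		mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm,
					pvmw.address, pvmw.address + PAGE_SIZE);
		mmu_notifier_invalidate_range_start(&range);

		if (!page_vma_mapped_walk(&pvmw))
			goto out_mn;
		if (WARN_ONCE(!pvmw.pte, "Unexpected PMD mapping?"))
			goto out_unlock;

		if (pte_write(*pvmw.pte) || pte_dirty(*pvmw.pte) ||
		    (pte_protnone(*pvmw.pte) && pte_savedwrite(*pvmw.pte)) ||
		    mm_tlb_flush_pending(mm)) {
			pte_t entry;

			swapped = PageSwapCache(page);
			flush_cache_page(vma, pvmw.address, page_to_pfn(page));
			/*
			 * Clear the pte and flush the TLB before the reference
			 * check, so a racing get_user_pages_fast() (e.g.
			 * O_DIRECT) cannot grab the page in the middle of the
			 * check. No notify is needed: the mapping is only
			 * downgraded to read-only, never pointed elsewhere.
			 */
			entry = ptep_clear_flush(vma, pvmw.address, pvmw.pte);
			/* Abort if any other user still holds a reference. */
			if (page_mapcount(page) + 1 + swapped != page_count(page)) {
				set_pte_at(mm, pvmw.address, pvmw.pte, entry);
				goto out_unlock;
			}
			if (pte_dirty(entry))
				set_page_dirty(page);

			if (pte_protnone(entry))
				entry = pte_mkclean(pte_clear_savedwrite(entry));
			else
				entry = pte_mkclean(pte_wrprotect(entry));
			set_pte_at_notify(mm, pvmw.address, pvmw.pte, entry);
		}
		*orig_pte = *pvmw.pte;
		err = 0;

	out_unlock:
		page_vma_mapped_walk_done(&pvmw);
	out_mn:
		mmu_notifier_invalidate_range_end(&range);
	out:
		return err;
	}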
Caller
Name                      Description
try_to_merge_one_page     take two pages and merge them into one; @vma: the vma that holds the pte pointing to page; @page: the PageAnon page that we want to replace with kpage; @kpage: the PageKsm page that we want to map instead of page
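A condensed view of how try_to_merge_one_page() uses this function, following the v5.5 source with locking and the unstable-tree bookkeeping elided (a sketch, not a verbatim excerpt):

	pte_t orig_pte = __pte(0);
	int err = -EFAULT;

	/* Write-protect page first; only if its contents then still match
	 * kpage is it safe to map kpage in its place. The orig_pte recorded
	 * by write_protect_page() lets replace_page() verify the pte has
	 * not changed underneath us. */
	if (write_protect_page(vma, page, &orig_pte) == 0 &&
	    pages_identical(page, kpage))
		err = replace_page(vma, page, kpage, orig_pte);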