Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/migrate.c    Create Date: 2022-07-28 15:58:15
Last Modify: 2022-05-20 09:53:13    Copyright © Brick

Name: remove_migration_pte - Restore a potential migration pte to a working pte entry

Proto: static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma, unsigned long addr, void *old)

Type: bool

Parameter:

Type                      Parameter Name
struct page *             page
struct vm_area_struct *   vma
unsigned long             addr
void *                    old
207  struct page_vma_mapped_walk pvmw = {page = old, vma = vma, address = addr, flags = PVMW_SYNC (avoid racy checks) | PVMW_MIGRATION (look for migration entries rather than present PTEs), }
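The walk descriptor ties the old page to this VMA and address. A minimal sketch of the initializer, assuming the struct page_vma_mapped_walk field names and the PVMW_SYNC/PVMW_MIGRATION flags from include/linux/rmap.h in v5.5:

    struct page_vma_mapped_walk pvmw = {
            .page    = old,                        /* match entries that still point at the old page */
            .vma     = vma,
            .address = addr,
            .flags   = PVMW_SYNC | PVMW_MIGRATION, /* take the PTL; look for migration entries, not present PTEs */
    };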
217  VM_BUG_ON_PAGE(PageTail(page), page)
218  When page_vma_mapped_walk(&pvmw) cycle - checks whether pvmw.page is mapped in pvmw.vma at pvmw.address; page, vma, address and flags must be set, pmd, pte and ptl must be NULL; returns true for each mapping found
219  If PageKsm(page) (a KSM page is one of those write-protected "shared pages" or "merged pages" which KSM maps into multiple mms wherever identical anonymous page content is found in VM_MERGEABLE vmas) Then new = page
221  Else new = page - pvmw.page->index (our offset within the mapping) + linear_page_index(vma, pvmw.address)
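For non-KSM pages the subpage that this address actually maps is recovered by pointer arithmetic on the compound page. A sketch of the walk loop and the selection of new, following a reading of the v5.5 source rather than the listing above:

    struct page *new;

    while (page_vma_mapped_walk(&pvmw)) {
            if (PageKsm(page))
                    new = page;     /* a KSM page is shared identically everywhere */
            else
                    /* subpage of the (possibly compound) new page mapped at this address */
                    new = page - pvmw.page->index +
                            linear_page_index(vma, pvmw.address);

            /* ... rebuild and install the pte (source lines 234-279) ... */
    }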
234  get_page(new)
235  pte = pte_mkold(mk_pte(new, READ_ONCE(vma->vm_page_prot))) - mk_pte converts a page and its protection bits into a page table entry; vm_page_prot holds the access permissions of this VMA
236  If pte_swp_soft_dirty(*pvmw.pte) Then pte = pte_mksoft_dirty(pte)
242  entry = pte_to_swp_entry(*pvmw.pte) - convert the arch-dependent pte representation into an arch-independent swp_entry_t
243  If is_write_migration_entry(entry) Then pte = maybe_mkwrite(pte, vma) - do pte_mkwrite, but only if the vma says VM_WRITE
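Lines 234-243 rebuild a working pte from the new page and carry over the soft-dirty and writable state that was recorded in the migration entry. A sketch under the v5.5 helper names (mk_pte, pte_to_swp_entry, maybe_mkwrite):

    pte_t pte;
    swp_entry_t entry;

    get_page(new);
    /* fresh pte for the new page, marked old so reference bits start clean */
    pte = pte_mkold(mk_pte(new, READ_ONCE(vma->vm_page_prot)));
    if (pte_swp_soft_dirty(*pvmw.pte))
            pte = pte_mksoft_dirty(pte);       /* keep soft-dirty tracking intact */

    entry = pte_to_swp_entry(*pvmw.pte);
    if (is_write_migration_entry(entry))
            pte = maybe_mkwrite(pte, vma);     /* restore write access only if the vma allows VM_WRITE */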
247  If is_device_private_page(new) Then
255  pte = pte_mkhuge(pte)
256  pte = arch_make_huge_pte(pte, vma, new, 0)
260  Else page_dup_rmap(new, true)
262  Else
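The report omits the lines that actually install the pte and restore the reverse mapping. For context, a sketch of the two branches around source lines 254-271 in v5.5, where hugetlbfs pages go through the huge-pte helpers and everything else through set_pte_at() plus the anon/file rmap functions (the CONFIG_HUGETLB_PAGE guard is dropped here for brevity):

    if (PageHuge(new)) {
            pte = pte_mkhuge(pte);
            pte = arch_make_huge_pte(pte, vma, new, 0);
            set_huge_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte);
            if (PageAnon(new))
                    hugepage_add_anon_rmap(new, vma, pvmw.address);
            else
                    page_dup_rmap(new, true);
    } else {
            set_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte);
            if (PageAnon(new))
                    page_add_anon_rmap(new, vma, pvmw.address, false);
            else
                    page_add_file_rmap(new, false);
    }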
272  If vma->vm_flags & VM_LOCKED && Not PageTransCompound(new) (PageTransCompound returns true for both transparent huge pages and hugetlbfs pages) Then mlock_vma_page(new) - mark the page as mlocked if it is not already; if the page is on the LRU, isolate it and put it back so it moves to the unevictable list
275  If PageTransHuge(page) (returns true for both transparent huge and hugetlbfs pages, but not normal pages) && PageMlocked(page) Then clear_page_mlock(page) - LRU accounting for clearing the mlocked state
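Mlock state is fixed up after the mapping is restored: a small page mapped into a VM_LOCKED vma is mlocked again, while a mlocked compound page has its mlock state cleared. A sketch of these two checks as they read in v5.5:

    /* re-mlock the newly mapped page in a VM_LOCKED vma (compound pages are skipped) */
    if (vma->vm_flags & VM_LOCKED && !PageTransCompound(new))
            mlock_vma_page(new);

    /* a mlocked compound page loses that state; fix the LRU accounting */
    if (PageTransHuge(page) && PageMlocked(page))
            clear_page_mlock(page);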
279  update_mmu_cache(vma, pvmw.address, pvmw.pte) - no invalidation is needed because the entry was non-present before; the x86 doesn't have any external MMU info (the kernel page tables contain all the necessary information), so this is a no-op there
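On x86 the hardware walks the page tables directly, so update_mmu_cache() compiles away; architectures with software-loaded TLBs or external MMU caches provide a real hook. A sketch of the x86 definition, assuming v5.5's arch/x86/include/asm/pgtable_32.h (the 64-bit header is analogous):

    /* x86: nothing to refresh, the CPU reads the page tables itself */
    #define update_mmu_cache(vma, address, ptep) do { } while (0)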
282  Return true
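remove_migration_pte() is not called directly; it is installed as the rmap_one callback that remove_migration_ptes() hands to rmap_walk(), so it runs once for every VMA mapping the new page, with the old page passed through as the opaque arg. A sketch of that caller, following the v5.5 code a few lines below in mm/migrate.c:

    /*
     * Get rid of all migration entries and replace them by
     * references to the indicated page.
     */
    void remove_migration_ptes(struct page *old, struct page *new, bool locked)
    {
            struct rmap_walk_control rwc = {
                    .rmap_one = remove_migration_pte,
                    .arg = old,
            };

            if (locked)
                    rmap_walk_locked(new, &rwc);
            else
                    rmap_walk(new, &rwc);
    }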