Function report

Linux Kernel

v5.5.9


Source Code: mm/rmap.c  Create Date: 2022-07-28 14:57:00
Last Modify: 2020-03-12 14:18:49

Name: try_to_unmap_one (@arg: enum ttu_flags will be passed to this argument)

Proto:static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma, unsigned long address, void *arg)

Type:bool

Parameter:

Type                     Parameter Name
struct page *            page
struct vm_area_struct *  vma
unsigned long            address
void *                   arg
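For orientation, @arg arrives from the reverse-map walk that try_to_unmap() sets up. A sketch of that caller, reconstructed from v5.5 mm/rmap.c (this report only annotates try_to_unmap_one itself, so verify details against your own tree):

    bool try_to_unmap(struct page *page, enum ttu_flags flags)
    {
        struct rmap_walk_control rwc = {
            .rmap_one = try_to_unmap_one,   /* invoked for each VMA mapping @page */
            .arg = (void *)flags,           /* becomes the @arg parameter above */
            .done = page_mapcount_is_zero,  /* stop once the page is fully unmapped */
            .anon_lock = page_lock_anon_vma_read,
        };

        /* Migration skips exec()'s temporary VMAs until exec() completes */
        if ((flags & (TTU_MIGRATION | TTU_SPLIT_FREEZE))
            && !PageKsm(page) && PageAnon(page))
            rwc.invalid_vma = invalid_migration_vma;

        if (flags & TTU_RMAP_LOCKED)
            rmap_walk_locked(page, &rwc);
        else
            rmap_walk(page, &rwc);

        return !page_mapcount(page);
    }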
1369  mm = vma->vm_mm (the address space we belong to)
1370  struct page_vma_mapped_walk pvmw = { .page = page, .vma = vma, .address = address, }
1377  bool ret = true
1379  flags = (enum ttu_flags)arg
1382  If (flags & TTU_MUNLOCK) && Not (vma->vm_flags & VM_LOCKED) Then Return true (munlock has nothing to gain from examining unlocked VMAs)
1385  If IS_ENABLED(CONFIG_MIGRATION) && (flags & TTU_MIGRATION) && is_zone_device_page(page) && Not is_device_private_page(page) Then Return true (only device-private ZONE_DEVICE pages can be migrated)
1389  If flags & TTU_SPLIT_HUGE_PMD Then
1390  split_huge_pmd_address(vma, address, flags & TTU_SPLIT_FREEZE, page)
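Reassembled as C, lines 1369 to 1390 read as follows (a reconstruction from the annotated v5.5 source; declarations the report elides are filled in):

    static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
                     unsigned long address, void *arg)
    {
        struct mm_struct *mm = vma->vm_mm;
        struct page_vma_mapped_walk pvmw = {
            .page = page,
            .vma = vma,
            .address = address,
        };
        pte_t pteval;
        struct page *subpage;
        bool ret = true;
        struct mmu_notifier_range range;
        enum ttu_flags flags = (enum ttu_flags)arg;

        /* munlock has nothing to gain from examining un-locked vmas */
        if ((flags & TTU_MUNLOCK) && !(vma->vm_flags & VM_LOCKED))
            return true;

        /* ZONE_DEVICE pages are only migratable when device-private */
        if (IS_ENABLED(CONFIG_MIGRATION) && (flags & TTU_MIGRATION) &&
            is_zone_device_page(page) && !is_device_private_page(page))
            return true;

        if (flags & TTU_SPLIT_HUGE_PMD)
            split_huge_pmd_address(vma, address,
                           flags & TTU_SPLIT_FREEZE, page);
        /* ... continues with the mmu_notifier range setup below ... */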
1402  mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, address, min(vma->vm_end, address + page_size(page)))
1405  If PageHuge(page) Then
1410  adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end) (hugetlb: widen the range if PMD sharing is possible)
1413  mmu_notifier_invalidate_range_start( & range)
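The invalidation window set up in lines 1402 to 1413, again as C (reconstruction from the same source):

    /*
     * For THP assume the worst case, a PMD, for invalidation; for
     * hugetlb it can be a PUD when PMD sharing has to be broken.
     * The page cannot be freed meanwhile: try_to_unmap() holds a
     * reference on it.
     */
    mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
                address,
                min(vma->vm_end, address + page_size(page)));
    if (PageHuge(page)) {
        /* If PMD sharing is possible, start/end are widened to cover it */
        adjust_range_if_pmd_sharing_possible(vma, &range.start,
                             &range.end);
    }
    mmu_notifier_invalidate_range_start(&range);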
1415  When page_vma_mapped_walk(&pvmw) cycle (visit every place @page is mapped in this VMA; pvmw.pte and pvmw.ptl are set on each hit)
1431  If Not (flags & TTU_IGNORE_MLOCK) Then
1432  If vma->vm_flags & VM_LOCKED Then (an mlocked page cannot be swapped out: mlock_vma_page() it, set ret = false and stop the walk)
1445  If flags & TTU_MUNLOCK Then Continue (munlock cares only about VM_LOCKED VMAs)
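The mlock handling at the top of the walk loop (lines 1415 to 1446), as C; the report skips the loop body between 1432 and 1445, so this reconstruction from the v5.5 source fills that gap:

    while (page_vma_mapped_walk(&pvmw)) {
        /*
         * An mlock()ed page cannot be swapped out; if one is found,
         * mlock it properly and report failure so reclaim leaves it
         * alone.
         */
        if (!(flags & TTU_IGNORE_MLOCK)) {
            if (vma->vm_flags & VM_LOCKED) {
                /* PTE-mapped THP are never mlocked */
                if (!PageTransCompound(page)) {
                    /* pte lock is held, so no mmap_sem needed */
                    mlock_vma_page(page);
                }
                ret = false;
                page_vma_mapped_walk_done(&pvmw);
                break;
            }
            if (flags & TTU_MUNLOCK)
                continue;   /* munlock only cares about VM_LOCKED hits */
        }
        /* ... the rest of the loop body follows ... */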
1450  VM_BUG_ON_PAGE(!pvmw.pte, page) (unexpected PMD-mapped THP?)
1452  subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte) (the subpage of a compound page this pte actually maps)
1453  address = pvmw.address
1455  If PageHuge(page) Then
1456  If huge_pmd_unshare(mm, &address, pvmw.pte) Then (a shared hugetlb PMD was unmapped: flush the whole adjusted range and stop the walk)
1512  subpage = page (device-private page: a migration entry was installed in place of the pte)
1513  Go to discard
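Lines 1450 to 1466 compute the exact subpage and handle hugetlb PMD sharing; as C (reconstructed, comments paraphrase the source):

    /* Unexpected PMD-mapped THP? */
    VM_BUG_ON_PAGE(!pvmw.pte, page);

    /* For a compound page, find the subpage this pte actually maps */
    subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
    address = pvmw.address;

    if (PageHuge(page)) {
        if (huge_pmd_unshare(mm, &address, pvmw.pte)) {
            /*
             * huge_pmd_unshare() dropped a whole shared PMD page
             * and there is no way to know which PMDs are cached,
             * so flush and invalidate the full adjusted range and
             * end the walk for this VMA.
             */
            flush_cache_range(vma, range.start, range.end);
            flush_tlb_range(vma, range.start, range.end);
            mmu_notifier_invalidate_range(mm, range.start, range.end);
            page_vma_mapped_walk_done(&pvmw);
            break;
        }
    }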
1516  If Not (flags & TTU_IGNORE_ACCESS) Then (ptep_clear_flush_young_notify(): a recently referenced pte aborts the unmap)
1519  ret = false
1521  Break
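The accessed-bit check behind line 1516, as C (reconstruction):

    if (!(flags & TTU_IGNORE_ACCESS)) {
        /*
         * Recently referenced: clear and flush the young bit, keep
         * the mapping, and return false so the page is reactivated.
         */
        if (ptep_clear_flush_young_notify(vma, address, pvmw.pte)) {
            ret = false;
            page_vma_mapped_walk_done(&pvmw);
            break;
        }
    }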
1526  flush_cache_page(vma, address, pte_pfn(*pvmw.pte))
1527  If should_defer_flush(mm, flags) Then (ptep_get_and_clear() the pte and queue a batched TLB flush)
1539  Else (ptep_clear_flush(): clear the pte and flush the TLB immediately)
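The two flush strategies selected at line 1527, as C (reconstruction); should_defer_flush() is true when the caller passed TTU_BATCH_FLUSH and the architecture supports batched TLB flushing:

    /* Nuke the page table entry. */
    flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
    if (should_defer_flush(mm, flags)) {
        /*
         * Clear the PTE without flushing: a remote CPU may still
         * write through a stale TLB entry, so record a pending
         * batched flush (and whether the pte was dirty) instead.
         */
        pteval = ptep_get_and_clear(mm, address, pvmw.pte);
        set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
    } else {
        pteval = ptep_clear_flush(vma, address, pvmw.pte);
    }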
1544  If pte_dirty(pteval) Then set_page_dirty(page) (move the dirty bit to the page now that the pte is gone)
1548  update_hiwater_rss(mm) (update the RSS high watermark before lowering rss)
1552  If PageHuge(page) Then (PageHWPoison path: hugetlb_count_sub() and install a hwpoison swap entry)
1557  Else (PageHWPoison path: dec_mm_counter() and install a hwpoison swap entry)
1584  ret = false (migration-entry path: arch_unmap_one() failed, so the pte was restored)
1586  Break
1604  Else if PageAnon(page) Then (store the swap location in the pte)
1612  WARN_ON_ONCE(1) (sanity check failed: PageSwapBacked(page) != PageSwapCache(page))
1613  ret = false
1618  Break
1622  If Not PageSwapBacked(page) Then (MADV_FREE lazyfree page)
1623  If Not PageDirty(page) Then (still clean: discard it without any swap I/O)
1637  ret = false (redirtied since MADV_FREE: restore the pte and mark the page swap-backed again)
1639  Break
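The MADV_FREE (lazyfree) handling around lines 1622 to 1639, as C (reconstruction):

    /* MADV_FREE page check */
    if (!PageSwapBacked(page)) {
        if (!PageDirty(page)) {
            /* Still clean: invalidate and discard without swap I/O */
            mmu_notifier_invalidate_range(mm, address,
                              address + PAGE_SIZE);
            dec_mm_counter(mm, MM_ANONPAGES);
            goto discard;
        }

        /*
         * Redirtied since MADV_FREE, so it cannot be discarded:
         * put the pte back and make the page swap-backed again.
         */
        set_pte_at(mm, address, pvmw.pte, pteval);
        SetPageSwapBacked(page);
        ret = false;
        page_vma_mapped_walk_done(&pvmw);
        break;
    }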
1642  If swap_duplicate(entry) < 0 Then (could not take a swap-count reference: restore the pte)
1644  ret = false
1646  Break
1650  ret = false (arch_unmap_one() failed: pte restored)
1652  Break
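The swap-entry conversion around lines 1642 to 1667, as C (reconstruction; the arch_unmap_one() backout at 1650 and the mm->mmlist bookkeeping are elided here for brevity):

    swp_entry_t entry = { .val = page_private(subpage) };
    pte_t swp_pte;

    if (swap_duplicate(entry) < 0) {
        /* Could not take a swap-count reference: back the pte out */
        set_pte_at(mm, address, pvmw.pte, pteval);
        ret = false;
        page_vma_mapped_walk_done(&pvmw);
        break;
    }

    /* Account the switch from a mapped anon page to a swap entry */
    dec_mm_counter(mm, MM_ANONPAGES);
    inc_mm_counter(mm, MM_SWAPENTS);
    swp_pte = swp_entry_to_pte(entry);
    if (pte_soft_dirty(pteval))
        swp_pte = pte_swp_mksoft_dirty(swp_pte);
    set_pte_at(mm, address, pvmw.pte, swp_pte);
    /* The pte was cleared above, so invalidate this page's range */
    mmu_notifier_invalidate_range(mm, address, address + PAGE_SIZE);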
1669  Else (file-backed page: just dec_mm_counter(mm, mm_counter_file(page)))
1682  discard:
1690  page_remove_rmap(subpage, PageHuge(page)) (take down the pte mapping from the page; the caller holds the pte lock)
1691  put_page(page)
1694  mmu_notifier_invalidate_range_end( & range)
1696  Return ret
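On the consumer side, page reclaim treats a false return as "could not unmap, keep the page". A sketch in the spirit of v5.5 shrink_page_list() in mm/vmscan.c (simplified; the field and label names are that file's, not this one's):

    if (page_mapped(page)) {
        enum ttu_flags flags = ttu_flags | TTU_BATCH_FLUSH;

        if (unlikely(PageTransHuge(page)))
            flags |= TTU_SPLIT_HUGE_PMD;
        /* false: some pte survived, so reactivate instead of freeing */
        if (!try_to_unmap(page, flags)) {
            stat->nr_unmap_fail++;
            goto activate_locked;
        }
    }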