Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/huge_memory.c  Create Date: 2022-07-27 17:35:54
Last Modified: 2020-03-12 14:18:49  Copyright © Brick

Function name: do_huge_pmd_wp_page

Prototype: vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)

Return type: vm_fault_t

Parameters:

Type                Name
struct vm_fault *   vmf
pmd_t               orig_pmd
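
Both parameters describe the faulting access: vmf carries the per-fault state, and orig_pmd is the pmd value sampled when the write fault was taken, which the handler re-checks with pmd_same() after every point where it drops the lock. As a reading aid, the trimmed sketch below lists only the struct vm_fault members this handler touches, with the member comments quoted from the kernel headers of this series; it is not the full definition (see include/linux/mm.h).

/* Trimmed sketch of struct vm_fault: only the members used below. */
struct vm_fault {
	struct vm_area_struct *vma;	/* Target VMA */
	unsigned long address;		/* Faulting virtual address */
	pmd_t *pmd;			/* Pointer to pmd entry matching the 'address' */
	spinlock_t *ptl;		/* Page table lock. Protects pte page table
					 * if 'pte' is not NULL, otherwise pmd. */
	/* ... other members omitted ... */
};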
1317  vma = vmf->vma (the target VMA of the fault)
1318  page = NULL
1320  haddr = vmf->address & HPAGE_PMD_MASK (the faulting virtual address rounded down to a huge-page boundary)
1323  ret = 0
1325  vmf->ptl = pmd_lockptr(vma->vm_mm, vmf->pmd): record the page table lock that protects this pmd entry
1326  VM_BUG_ON_VMA(!vma->anon_vma, vma): the VMA must already have its anon_vma set up
1327  If is_huge_zero_pmd(orig_pmd), the write hit the huge zero page; skip the reuse attempt and go to alloc
1329  spin_lock(vmf->ptl)
1330  If unlikely(!pmd_same(*vmf->pmd, orig_pmd)), the pmd changed under us; go to out_unlock
1333  page = pmd_page(orig_pmd): the compound page currently mapped by the pmd
1334  VM_BUG_ON_PAGE(!PageCompound(page) || !PageHead(page), page)
1339  If trylock_page(page) fails (the page lock is contended):
1340  get_page(page): pin the page before sleeping
1341  spin_unlock(vmf->ptl)
1342  lock_page(page): sleep until the page lock is acquired (only safe because the page is pinned)
1343  spin_lock(vmf->ptl)
1347  If the pmd changed while the locks were dropped, go to out_unlock
1349  put_page(page): drop the temporary reference taken at line 1340
1351  If reuse_swap_page(page, NULL), nobody else references the page, so it can be written in place without copy-on-write
1353  entry = pmd_mkyoung(orig_pmd)
1354  entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma): mark the entry dirty and, if the VMA allows writes, writable
1355  If pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1) changed the entry, call update_mmu_cache_pmd(vma, vmf->address, vmf->pmd)
1357  ret |= VM_FAULT_WRITE
1358  unlock_page(page)
1359  Go to out_unlock
1361  unlock_page(page): the page cannot be written in place
1362  get_page(page): keep a reference to the old page for the copy below
1363  spin_unlock(vmf->ptl)
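
Taken together, the reuse attempt annotated above corresponds roughly to the C below. It is a reconstruction from the annotations and the mm/huge_memory.c of this kernel series; lines the report does not list (for example the cleanup inside the trylock-failure branch) are filled in from that source, and the exact v5.5.9 text may differ slightly.

/* Reconstructed sketch of the reuse attempt (around lines 1325-1363). */
vmf->ptl = pmd_lockptr(vma->vm_mm, vmf->pmd);
VM_BUG_ON_VMA(!vma->anon_vma, vma);
if (is_huge_zero_pmd(orig_pmd))
	goto alloc;			/* writing to the huge zero page: always allocate */
spin_lock(vmf->ptl);
if (unlikely(!pmd_same(*vmf->pmd, orig_pmd)))
	goto out_unlock;		/* pmd changed under us */

page = pmd_page(orig_pmd);
VM_BUG_ON_PAGE(!PageCompound(page) || !PageHead(page), page);
if (!trylock_page(page)) {
	/* lock_page() may sleep, so pin the page, drop the pmd lock, revalidate */
	get_page(page);
	spin_unlock(vmf->ptl);
	lock_page(page);
	spin_lock(vmf->ptl);
	if (unlikely(!pmd_same(*vmf->pmd, orig_pmd))) {
		unlock_page(page);
		put_page(page);
		goto out_unlock;
	}
	put_page(page);
}
if (reuse_swap_page(page, NULL)) {
	/* sole user of the page: write in place, no copy needed */
	pmd_t entry = pmd_mkyoung(orig_pmd);
	entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
	if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1))
		update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
	ret |= VM_FAULT_WRITE;
	unlock_page(page);
	goto out_unlock;
}
unlock_page(page);
get_page(page);				/* keep the old page alive for the copy */
spin_unlock(vmf->ptl);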
1364  alloc:
1365  If __transparent_hugepage_enabled(vma) (THP may be used on this VMA) and !transparent_hugepage_debug_cow():
1367  huge_gfp = alloc_hugepage_direct_gfpmask(vma): choose the GFP mask according to the THP defrag policy (always / defer / defer+madvise / madvise / never)
1368  new_page = alloc_hugepage_vma(huge_gfp, vma, haddr, HPAGE_PMD_ORDER)
1369  Otherwise new_page = NULL
1372  If likely(new_page):
1373  prep_transhuge_page(new_page)
1374  Otherwise (no huge page could be allocated):
1375  If there is no old page (the zero-page case):
1377  ret |= VM_FAULT_FALLBACK (after the pmd is split, the fault is retried at pte level)
1378  Otherwise:
1380  If ret & VM_FAULT_OOM (the small-page fallback ran out of memory):
1386  count_vm_event(THP_FAULT_FALLBACK)
1387  Go to out
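
The allocation step and its fallback paths, again as a reconstructed sketch. The call to do_huge_pmd_wp_page_fallback() on the copy-to-small-pages path is not listed by the report and is filled in from the kernel source of this series.

/* Reconstructed sketch of the allocation/fallback decision (around lines 1364-1387). */
alloc:
	if (__transparent_hugepage_enabled(vma) &&
	    !transparent_hugepage_debug_cow()) {
		huge_gfp = alloc_hugepage_direct_gfpmask(vma);
		new_page = alloc_hugepage_vma(huge_gfp, vma, haddr, HPAGE_PMD_ORDER);
	} else
		new_page = NULL;

	if (likely(new_page)) {
		prep_transhuge_page(new_page);
	} else {
		if (!page) {
			/* zero-page case: split the pmd and retry at pte level */
			split_huge_pmd(vma, vmf->pmd, vmf->address);
			ret |= VM_FAULT_FALLBACK;
		} else {
			/* copy the contents into small pages instead */
			ret = do_huge_pmd_wp_page_fallback(vmf, orig_pmd, page);
			if (ret & VM_FAULT_OOM) {
				split_huge_pmd(vma, vmf->pmd, vmf->address);
				ret |= VM_FAULT_FALLBACK;
			}
			put_page(page);
		}
		count_vm_event(THP_FAULT_FALLBACK);
		goto out;
	}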
1390  If unlikely(mem_cgroup_try_charge_delay(new_page, vma->vm_mm, huge_gfp, &memcg, true)), the memory cgroup refused the charge:
1392  put_page(new_page)
1393  split_huge_pmd(vma, vmf->pmd, vmf->address)
1394  If there is an old page, put_page(page)
1396  ret |= VM_FAULT_FALLBACK
1397  count_vm_event(THP_FAULT_FALLBACK)
1398  Go to out
1401  count_vm_event(THP_FAULT_ALLOC)
1402  count_memcg_events(memcg, THP_FAULT_ALLOC, 1)
1404  If there is no old page, clear_huge_page(new_page, vmf->address, HPAGE_PMD_NR)
1406  Otherwise copy_user_huge_page(new_page, page, vmf->address, vma, HPAGE_PMD_NR)
1409  __SetPageUptodate(new_page)
1411  mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm, haddr, haddr + HPAGE_PMD_SIZE)
1413  mmu_notifier_invalidate_range_start(&range): notify secondary MMUs that the old mapping is about to be torn down
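
Charging the new page to the memory cgroup and populating it, as a reconstructed sketch of the lines annotated above:

/* Reconstructed sketch of charging and populating the new page (around lines 1390-1413). */
if (unlikely(mem_cgroup_try_charge_delay(new_page, vma->vm_mm,
					 huge_gfp, &memcg, true))) {
	/* memcg charge refused: give the new page back and fall back */
	put_page(new_page);
	split_huge_pmd(vma, vmf->pmd, vmf->address);
	if (page)
		put_page(page);
	ret |= VM_FAULT_FALLBACK;
	count_vm_event(THP_FAULT_FALLBACK);
	goto out;
}

count_vm_event(THP_FAULT_ALLOC);
count_memcg_events(memcg, THP_FAULT_ALLOC, 1);

if (!page)
	clear_huge_page(new_page, vmf->address, HPAGE_PMD_NR);
else
	copy_user_huge_page(new_page, page, vmf->address, vma, HPAGE_PMD_NR);
__SetPageUptodate(new_page);

/* tell secondary MMUs the old huge mapping is about to be torn down */
mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
			haddr, haddr + HPAGE_PMD_SIZE);
mmu_notifier_invalidate_range_start(&range);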
1415  spin_lock(vmf->ptl)
1416  If there is an old page, put_page(page): drop the reference taken before the copy
1418  If unlikely(!pmd_same(*vmf->pmd, orig_pmd)), the pmd changed while the lock was dropped:
1419  spin_unlock(vmf->ptl)
1420  mem_cgroup_cancel_charge(new_page, memcg, true)
1421  put_page(new_page)
1422  Go to out_mn
1423  Otherwise:
1425  entry = mk_huge_pmd(new_page, vma->vm_page_prot)
1426  entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma)
1427  pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd)
1428  page_add_new_anon_rmap(new_page, vma, haddr, true): add the anon rmap for the new compound page
1429  mem_cgroup_commit_charge(new_page, memcg, false, true)
1430  lru_cache_add_active_or_unevictable(new_page, vma): place the new page on the active or unevictable LRU list depending on its evictability
1431  set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry)
1432  update_mmu_cache_pmd(vma, vmf->address, vmf->pmd)
1433  If there is no old page (the zero-page case), account the HPAGE_PMD_NR new anonymous pages
1435  Otherwise drop the old page's rmap and reference
1440  ret |= VM_FAULT_WRITE
1442  spin_unlock(vmf->ptl)
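
Installing the new mapping under the pmd lock, as a reconstructed sketch. The bodies of the branches at lines 1433-1438 are not listed by the report and are filled in from the kernel source of this series (MM_ANONPAGES accounting for the zero-page case, rmap removal and put_page() for the copied case).

/* Reconstructed sketch of installing the new mapping (around lines 1415-1442). */
spin_lock(vmf->ptl);
if (page)
	put_page(page);			/* done with the old page's temporary reference */
if (unlikely(!pmd_same(*vmf->pmd, orig_pmd))) {
	/* somebody changed the pmd while we were copying: undo and bail out */
	spin_unlock(vmf->ptl);
	mem_cgroup_cancel_charge(new_page, memcg, true);
	put_page(new_page);
	goto out_mn;
} else {
	pmd_t entry = mk_huge_pmd(new_page, vma->vm_page_prot);
	entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
	pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd);
	page_add_new_anon_rmap(new_page, vma, haddr, true);
	mem_cgroup_commit_charge(new_page, memcg, false, true);
	lru_cache_add_active_or_unevictable(new_page, vma);
	set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
	update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
	if (!page) {
		add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
	} else {
		VM_BUG_ON_PAGE(!PageHead(page), page);
		page_remove_rmap(page, true);
		put_page(page);
	}
	ret |= VM_FAULT_WRITE;
}
spin_unlock(vmf->ptl);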
1443  out_mn:
1448  mmu_notifier_invalidate_range_only_end(&range): ->invalidate_range() need not be called again because pmdp_huge_clear_flush_notify() above already did so
1449  out:
1450  Return ret
1451  out_unlock:
1452  spin_unlock(vmf->ptl)
1453  Return ret
Callers

Name         Description
wp_huge_pmd  `inline' is required to avoid gcc 4.1.2 build error
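
The caller's description above is simply the comment that precedes it in the source. For context, wp_huge_pmd() lives in mm/memory.c and dispatches write-protect faults on huge pmds; the sketch below is reconstructed from that file for this kernel series, so the exact v5.5.9 text may differ slightly.

/* Reconstructed sketch of the caller in mm/memory.c. */
/* `inline' is required to avoid gcc 4.1.2 build error */
static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
{
	if (vma_is_anonymous(vmf->vma))
		return do_huge_pmd_wp_page(vmf, orig_pmd);	/* the function documented here */

	/* file-backed mappings handle the huge fault through their vm_ops */
	if (vmf->vma->vm_ops->huge_fault)
		return vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD);

	/* otherwise COW is handled at pte level: split the pmd and fall back */
	__split_huge_pmd(vmf->vma, vmf->pmd, vmf->address, false, NULL);

	return VM_FAULT_FALLBACK;
}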