Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/hugetlb.c    Create Date: 2022-07-27 16:57:12
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Function name: PageHuge

Description: PageHuge() only returns true for hugetlbfs pages, but not for normal or transparent huge pages. See the PageTransHuge() documentation for more details.

Function prototype: int PageHuge(struct page *page)

Return type: int

Parameters:

Type            Name
struct page *   page
1299  If !PageCompound(page), return 0.
1302  page = compound_head(page)
1303  Return whether the compound destructor recorded in the first tail page equals HUGETLB_PAGE_DTOR.
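
Put together, the three steps above correspond to a function body along the following lines. This is a reconstruction from the per-line logic; the page[1].compound_dtor access follows from the "first tail page" note on line 1303, and the comments are explanatory additions rather than part of the kernel source.

int PageHuge(struct page *page)
{
	/* A non-compound page can never be a hugetlbfs page. */
	if (!PageCompound(page))
		return 0;

	/* Tail pages are accepted: normalize to the head page first. */
	page = compound_head(page);

	/*
	 * The compound destructor is recorded in the first tail page;
	 * hugetlbfs pages are the ones that use HUGETLB_PAGE_DTOR.
	 */
	return page[1].compound_dtor == HUGETLB_PAGE_DTOR;
}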
Callers
Name            Description
unaccount_page_cache_page
page_cache_free_page
replace_page_cache_page: replace a pagecache page with a new one. @old: page to be replaced; @new: page to replace with; @gfp_mask: allocation mode. This function replaces a page in the pagecache with a new one.
__add_to_page_cache_locked
__put_compound_page
page_mapped: Return true if this page is mapped into pagetables. For a compound page it returns true if any subpage of the compound page is mapped.
__page_mapcount: Slow path of page_mapcount() for compound pages.
new_non_cma_page
check_and_migrate_cma_pages
page_vma_mapped_walk: check if @pvmw->page is mapped in @pvmw->vma at @pvmw->address. @pvmw: pointer to struct page_vma_mapped_walk; page, vma, address and flags must be set; pmd, pte and ptl must be NULL. Returns true if the page is mapped in the vma.
page_remove_file_rmap
page_remove_anon_compound_rmap
try_to_unmap_one: @arg: enum ttu_flags will be passed to this argument.
has_unmovable_pages: This function checks whether a pageblock includes unmovable pages or not. If @count is not zero, it is okay to include fewer than @count unmovable pages. A PageLRU check without isolation or lru_lock could race so that...
page_trans_huge_map_swapcount
page_huge_active: Test to determine whether the hugepage is "active/in-use" (i.e. being linked to hstate->hugepage_activelist). This function can be called for tail pages, but never returns true for them.
PageHugeTemporary: Internal hugetlb-specific page flag. Do not use outside of the hugetlb code.
__basepage_index
dissolve_free_huge_page: Dissolve a given free hugepage into free buddy pages. This function does nothing for in-use hugepages and non-hugepages. It returns values like below: -EBUSY: failed to dissolve the free hugepage, or the hugepage is in use.
alloc_new_node_page: page allocation callback for NUMA node migration.
new_page: Allocate a new page for page migration based on vma policy.
putback_movable_pages: Put previously isolated pages back onto the appropriate lists from where they were once taken off for compaction/migration. This function shall be used whenever the isolated pageset has been built from lru, balloon, or hugetlbfs pages.
remove_migration_pte: Restore a potential migration pte to a working pte entry.
copy_huge_page
migrate_page_copy
migrate_pages: migrate the pages specified in a list to the free pages supplied as the target for the page migration. @from: The list of pages to be migrated; @get_new_page: The function used to allocate free pages to be used.
add_page_for_migration: Resolves the given address to a struct page, isolates it from the LRU and puts it on the given pagelist.
total_mapcount
page_trans_huge_mapcount: This calculates accurately how many mappings a transparent hugepage has (unlike page_mapcount(), which isn't fully accurate).
hugetlb_cgroup_migrate: hugetlb_lock will make sure a parallel cgroup rmdir won't happen when we migrate hugepages.
shake_page: When an unknown page type is encountered, drain as many buffers as possible in the hope of turning the page into an LRU or free page, which we can handle.
me_huge_page: Huge pages. Needs work. Issues: an error on a hugepage is contained in the hugepage unit (not in the raw page unit); to narrow down the kill region to one page, we need to break up the pmd.
get_hwpoison_page: Get refcount for memory error handling. @page: raw error page (hit by memory error). Return: 0 if failed to grab the refcount, otherwise true (some non-zero value).
hwpoison_user_mappings: Do all that is necessary to remove user space mappings. Unmap the pages and send SIGBUS to the processes if the data was dirty.
memory_failure: Handle memory failure of a page. @pfn: Page Number of the corrupted page; @flags: fine tune action taken. This function is called by the low level machine check code of an architecture when it detects hardware memory corruption of a page.
unpoison_memory: Unpoison a previously poisoned page. @pfn: Page number of the page to be unpoisoned. Software-unpoison a page that has been poisoned by memory_failure() earlier.
__get_any_page: Safely get the reference count of an arbitrary page. Returns 0 for a free page, -EIO for a zero-refcount page that is not free, and 1 for any other page type. For 1 the page is returned with an increased page count, otherwise not.
get_any_page
soft_offline_in_use_page
hwpoison_inject
page_cache_delete: Lock ordering: ->i_mmap_rwsem (truncate_pagecache); ->private_lock (__free_pte->__set_page_dirty_buffers); ->swap_lock (exclusive_swap_page, others); ->i_pages lock; ->i_mutex; ->i_mmap_rwsem (truncate->unmap_mapping_range); ->mmap_sem; ->i_mmap_rwsem
find_subpage
page_hstate
new_page_nodemask
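
For a sense of how the callers above typically use this helper, here is a minimal hypothetical sketch; example_is_hugetlb_head is an illustrative name, not a kernel function. Most call sites simply branch on PageHuge() so that hugetlbfs pages take a hugetlb-specific path while normal and transparent huge pages take the generic one.

/* Hypothetical caller pattern, for illustration only (not kernel code). */
#include <linux/mm.h>        /* struct page, compound_head(), PageCompound() */
#include <linux/hugetlb.h>   /* PageHuge() */

static bool example_is_hugetlb_head(struct page *page)
{
	if (!PageHuge(page))
		return false;        /* normal page or transparent huge page */

	/*
	 * PageHuge() also accepts tail pages of a hugetlbfs page, so
	 * normalize to the head page before deciding.
	 */
	return page == compound_head(page);
}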