Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/page-flags.h  Create Date: 2022-07-28 05:36:04
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name: compound_head

Proto: static inline struct page *compound_head(struct page *page)

Type: struct page *

Parameter:

Type | Name
struct page * | page
174  head = READ_ONCE(page->compound_head)  (bit zero of compound_head is set for tail pages)
176  If unlikely(head & 1) (the branch is marked as unlikely for the compiler) Then Return (struct page *)(head - 1)
178  Return page
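Putting the annotated lines above together, the definition reads roughly as follows (a sketch of the v5.5 source, with explanatory comments added here):

    static inline struct page *compound_head(struct page *page)
    {
            /* For a tail page, page->compound_head stores the address of the
             * head page with bit zero set as a marker. */
            unsigned long head = READ_ONCE(page->compound_head);

            if (unlikely(head & 1))
                    return (struct page *)(head - 1);  /* clear the marker bit to get the head page */
            return page;                               /* already a head (or non-compound) page */
    }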
Caller
Name | Describe
page_copy_sane
get_futex_key | get_futex_key() - Get parameters which are the keys for a futex. @uaddr: virtual address of the futex; @fshared: 0 for a PROCESS_PRIVATE futex, 1 for PROCESS_SHARED; @key: address where result is stored
__replace_page | __replace_page - replace page in vma by new page
put_and_wait_on_page_locked | put_and_wait_on_page_locked - Drop a reference and wait for it to be unlocked. @page: The page to wait for
unlock_page | unlock_page - unlock a locked page. @page: the page. Unlocks the page and wakes up sleepers in ___wait_on_page_locked(). Also wakes sleepers in wait_on_page_writeback() because the wakeup mechanism between PageLocked pages and PageWriteback pages is shared.
__lock_page | __lock_page - get a lock on the page, assuming we need to sleep to get it. @__page: the page to lock
__lock_page_killable
pagecache_get_page | pagecache_get_page - find and get a page reference. @mapping: the address_space to search; @offset: the page index; @fgp_flags: PCG flags; @gfp_mask: gfp mask to use for the page cache data page allocation. Looks up the page cache slot at @mapping & @offset.
filemap_fault | filemap_fault - read in file data for page fault handling. @vmf: struct vm_fault containing details of the fault. filemap_fault() is invoked via the vma operations vector for a mapped memory region to read in file data during a page fault
set_page_dirty | Dirty a page
activate_page
mark_page_accessed | Mark a page as having seen activity. inactive,unreferenced -> inactive,referenced; inactive,referenced -> active,unreferenced; active,unreferenced -> active,referenced. When a newly allocated page is not yet visible, so safe for non-atomic ops,
release_pages | release_pages - batched put_page(). @pages: array of pages to release; @nr: number of pages. Decrement the reference count on all the pages in @pages. If it fell to zero, remove the page from the LRU and free it.
page_rmapping | Neutral page->mapping pointer to address_space or anon_vma or other
page_mapped | Return true if this page is mapped into pagetables. For compound page it returns true if any subpage of compound page is mapped.
page_anon_vma
page_mapping
__page_mapcount | Slow path of page_mapcount() for compound pages
put_user_pages_dirty_lock | put_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages. @pages: array of pages to be maybe marked dirty, and definitely released
check_and_migrate_cma_pages
page_move_anon_rmap | page_move_anon_rmap - move a page to our anon_vma. @page: the page to move to our anon_vma; @vma: the vma the page belongs to. When a page belongs exclusively to one process after a COW event, that page can be moved into the anon_vma that belongs to just
page_add_file_rmap | page_add_file_rmap - add pte mapping to a file page. @page: the page to add the mapping to; @compound: charge the page as compound or small page. The caller needs to hold the pte lock.
page_remove_rmap | page_remove_rmap - take down pte mapping from a page. @page: page to remove mapping from; @compound: uncharge the page as compound or small page. The caller needs to hold the pte lock.
free_tail_pages_check
has_unmovable_pages | This function checks whether pageblock includes unmovable pages or not. If @count is not zero, it is okay to include less @count unmovable pages. PageLRU check without isolation or lru_lock could race so that
madvise_inject_error | Error injection support for memory error handling.
page_swapped
page_trans_huge_map_swapcount
reuse_swap_page | We can write to an anon page without COW if there are no other references to it. And as a side-effect, free up its swap: because the old content on disk will never be read, and seeking back there to write new content
try_to_free_swap | If swap is getting full, or if there are no more mappings of this page, then try_to_free_swap is called to free its swap space.
PageHuge | PageHuge() only returns true for hugetlbfs pages, but not for normal or transparent huge pages. See the PageTransHuge() documentation for more details.
__basepage_index
dissolve_free_huge_page | Dissolve a given free hugepage into free buddy pages. This function does nothing for in-use hugepages and non-hugepages. This function returns values like below: -EBUSY: failed to dissolve free hugepages or the hugepage is in-use
migrate_page_add
alloc_new_node_page | page allocation callback for NUMA node migration
new_page | Allocate a new page for page migration based on vma policy
cmp_and_merge_page | cmp_and_merge_page - first see if page can be merged into the stable tree; if not, compare checksum to previous and if it's the same, see if page can be inserted into the unstable tree, or merged with a page already there and
add_page_for_migration | Resolves the given address to a struct page, isolates it from the LRU and puts it to the given pagelist
get_deferred_split_queue
__split_huge_page
page_trans_huge_mapcount | This calculates accurately how many mappings a transparent hugepage has (unlike page_mapcount() which isn't fully accurate)
split_huge_page_to_list | This function splits huge page into normal pages. @page can point to any subpage of huge page to split. Split doesn't change the position of @page. Only caller must hold pin on the @page, otherwise split fails with -EBUSY. The huge page must be locked.
deferred_split_huge_page
deferred_split_scan
mem_cgroup_try_charge | mem_cgroup_try_charge - try charging a page. @page: page to charge; @mm: mm context of the victim; @gfp_mask: reclaim mode; @memcgp: charged memcg return; @compound: charge the page as compound or small page. Try to charge @page to the memcg that @mm belongs
add_to_kill | Schedule a process for later kill. Uses GFP_ATOMIC allocations to avoid potential recursions in the VM.
me_huge_page | Huge pages. Needs work. Issues: Error on hugepage is contained in hugepage unit (not in raw page unit.) To narrow down kill region to one page, we need to break up pmd.
get_hwpoison_page | get_hwpoison_page() - Get refcount for memory error handling. @page: raw error page (hit by memory error). Return: return 0 if failed to grab the refcount, otherwise true (some non-zero value.)
memory_failure_hugetlb
memory_failure | memory_failure - Handle memory failure of a page. @pfn: Page Number of the corrupted page; @flags: fine tune action taken. This function is called by the low level machine check code of an architecture when it detects hardware memory corruption of a page
unpoison_memory | unpoison_memory - Unpoison a previously poisoned page. @pfn: Page number of the to be unpoisoned page. Software-unpoison a page that has been poisoned by memory_failure() earlier
soft_offline_huge_page
soft_offline_in_use_page
hwpoison_inject
check_heap_object
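Most of the callers listed above share one pattern: resolve the head page first, so that per-compound state (flags, mapping, reference count) is read from the correct struct page rather than from a tail page. A hypothetical helper illustrating that pattern (my_compound_refcount is not a kernel function, only an illustration; page_ref_count() is from include/linux/page_ref.h):

    static inline int my_compound_refcount(struct page *page)
    {
            /* The reference count of a compound page is kept on its head page,
             * so map any tail page back to the head before reading it. */
            return page_ref_count(compound_head(page));
    }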