Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/mm.h    Create Date: 2022-07-28 05:43:29
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name:get_page

Proto:static inline void get_page(struct page *page)

Type:void

Parameter:

Type            Name
struct page *   page
1003  page = compound_head(page)
1008  VM_BUG_ON_PAGE(page_ref_count(page) <= 0, page)
1009  page_ref_inc(page)
Caller
Name    Description
copy_page_to_iter_pipe
__pipe_get_pages
iov_iter_get_pages
iov_iter_get_pages_alloc
relay_buf_fault - fault() vm_op implementation for relay file mapping.
perf_mmap_fault
__replace_page - replace page in vma by new page
replace_page_cache_page - replace a pagecache page with a new one. @old: page to be replaced; @new: page to replace with; @gfp_mask: allocation mode. This function replaces a page in the pagecache with a new one.
__add_to_page_cache_locked
write_one_page - write out a single page and wait on I/O. @page: the page to write. The page must be locked by the caller and will be unlocked upon return.
get_kernel_pages - pin kernel pages in memory. @kiov: an array of struct kvec structures; @nr_segs: number of segments to pin; @write: pinning for read/write, currently ignored; @pages: array that receives pointers to the pages pinned.
rotate_reclaimable_page - Writeback is about to end against a page which has been marked for immediate reclaim. If it still appears to be reclaimable, move it to the tail of the inactive list.
__lru_cache_add
deactivate_page - deactivate a page. @page: page to deactivate. deactivate_page() moves @page to the inactive list if @page was on the active list and was not an unevictable page. This is done to accelerate the reclaim of @page.
mark_page_lazyfree - make an anon page lazyfree. @page: page to deactivate. mark_page_lazyfree() moves @page to the inactive file list. This is done to accelerate the reclaim of @page.
lru_add_page_tail - used by __split_huge_page_refcount()
invalidate_mapping_pages - invalidate all the unlocked pages of one inode. @mapping: the address_space which holds the pages to invalidate; @start: the offset 'from' which to invalidate; @end: the offset 'to' which to invalidate (inclusive). This function only ...
isolate_lru_page - tries to isolate a page from its LRU list. @page: page to isolate from its LRU list. Isolates a @page from an LRU list, clears PageLRU and adjusts the vmstat statistic corresponding to whatever LRU list the page was on.
follow_page_pte
copy_one_pte - copy one vm_area from one task to the other. Assumes the page tables already present in the new task to be cleared in the whole range covered by this vma.
insert_page - this is the old fallback for page remapping. For historical reasons, it only allows reserved pages. Only old drivers should use this, and they needed to mark their pages reserved for the old functions anyway.
wp_page_shared
do_wp_page - this routine handles present pages, when users try to write to a shared page. It is done by copying the page to a new address and decrementing the shared-page counter for the old page. Note that this routine assumes that the protection checks have been ...
numa_migrate_prep
__munlock_isolate_lru_page - isolate a page from LRU with optional get_page() pin. Assumes lru_lock already held and page already pinned.
__munlock_pagevec - munlock a batch of pages from the same zone. The work is split to two main phases ...
__munlock_pagevec_fill - fill up pagevec for __munlock_pagevec using pte walk. The function expects that the struct page corresponding to @start address is a non-THP page already pinned and in the @pvec, and that it belongs to @zone
special_mapping_fault
madvise_cold_or_pageout_pte_range
madvise_free_pte_range
unuse_pte - no need to decide whether this PTE shares the swap entry with others, just let do_wp_page work it out if a write is requested later - to force COW, vm_page_prot omits write permission from any private vma.
copy_hugetlb_page_range
hugetlb_cow - should be called with page lock of the original hugepage held. Called with hugetlb_instantiation_mutex held and pte_page locked so we cannot race with other handlers or page migration.
hugetlb_fault
follow_hugetlb_page
follow_huge_pmd
replace_page - replace page in vma by new ksm page. @vma: vma that holds the pte pointing to page; @page: the page we are replacing by kpage; @kpage: the ksm page we replace page by; @orig_pte: the original value of the pte
stable_tree_search - search for page inside the stable tree. This function checks if there is a page inside the stable tree with identical content to the page that we are scanning right now.
remove_migration_pte - restore a potential migration pte to a working pte entry
migrate_huge_page_move_mapping - the expected number of remaining references is the same as that of migrate_page_move_mapping().
__buffer_migrate_page
follow_devmap_pmd
copy_huge_pmd
do_huge_pmd_wp_page
follow_trans_huge_pmd
do_huge_pmd_numa_page - NUMA hinting page fault entry point for trans huge pmds
madvise_free_huge_pmd - return true if we do MADV_FREE successfully on entire pmd page. Otherwise, return false.
get_mctgt_type_thp - we don't consider PMD mapped swapping or file mapped pages because THP does not support them for now. Caller should make sure that pmd_trans_huge(pmd) is true.
z3fold_page_migrate
sel_mmap_policy_fault
dio_refill_pages - go grab and pin some userspace pages. Typically we'll get 64 at a time.
dio_bio_add_page - attempt to put the current chunk of 'cur_page' into the current BIO. If that was successful then update final_block_in_bio and take a ref against the just-added page. Return zero on success. Non-zero means the caller needs to start a new BIO.
submit_page_section - an autonomous function to put a chunk of a page under deferred IO. The caller doesn't actually know (or care) whether this piece of page is in a BIO, or is under IO or whatever. We just take care of all possible situations here.
aio_migratepage
iomap_page_create
iomap_migrate_page
iomap_dio_zero