Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/page-flags.h  Create Date: 2022-07-27 06:40:02
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Function name: PageLocked

Prototype: static inline __attribute__((__always_inline__)) int PageLocked(struct page *page)

Return type: int

Parameters:

Type           Name
struct page *  page
312  Returns: test_bit - Determine whether a bit is set. @nr: bit number to test. @addr: Address to start counting from.
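For context, here is a minimal sketch of what this accessor reduces to, assuming the v5.5 definition is generated by __PAGEFLAG(Locked, locked, PF_NO_TAIL) in include/linux/page-flags.h. The PF_NO_TAIL policy redirects the test to the head page of a compound page; this is an illustration, not the literal macro-generated source:

/* Sketch only: the real accessor is macro-generated; the PF_NO_TAIL
 * compound_head() redirection is written out explicitly here. */
static __always_inline int PageLocked(struct page *page)
{
	page = compound_head(page);               /* PF_NO_TAIL policy */
	return test_bit(PG_locked, &page->flags); /* the test_bit call above */
}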
Callers

Name: Description
page_cache_delete: Lock ordering: ->i_mmap_rwsem (truncate_pagecache); ->private_lock (__free_pte->__set_page_dirty_buffers); ->swap_lock (exclusive_swap_page, others); ->i_pages lock; ->i_mutex; ->i_mmap_rwsem (truncate->unmap_mapping_range); ->mmap_sem; ->i_mmap_rwsem
delete_from_page_cache: delete page from page cache. @page: the page which the kernel is trying to remove from page cache. This must be called only on pages that have been verified to be in the page cache and locked.
page_cache_delete_batch: delete several pages from page cache. @mapping: the mapping to which pages belong. @pvec: pagevec with pages to delete. The function walks over mapping->i_pages and removes pages passed in @pvec from the mapping.
replace_page_cache_page: replace a pagecache page with a new one. @old: page to be replaced. @new: page to replace with. @gfp_mask: allocation mode. This function replaces a page in the pagecache with a new one.
__add_to_page_cache_locked
unlock_page: unlock a locked page. @page: the page. Unlocks the page and wakes up sleepers in ___wait_on_page_locked(). Also wakes sleepers in wait_on_page_writeback() because the wakeup mechanism between PageLocked pages and PageWriteback pages is shared. (See the usage sketch after this table.)
filemap_map_pages
try_to_release_page: release old fs-specific metadata on a page. @page: the page which the kernel is trying to free. @gfp_mask: memory allocation flags (and I/O mode). The address_space is to try to release any data against the page.
write_one_page: write out a single page and wait on I/O. @page: the page to write. The page must be locked by the caller and will be unlocked upon return.
clear_page_dirty_for_io: Clear a page's dirty flag, while caring for dirty memory accounting. Returns true if the page was previously dirty. This is for preparing to put the page under writeout. We leave the page tagged as dirty in the xarray so that a concurrent write-for-sync ...
rotate_reclaimable_page: Writeback is about to end against a page which has been marked for immediate reclaim. If it still appears to be reclaimable, move it to the tail of the inactive list.
__remove_mapping: Same as remove_mapping, but if the page is removed from the mapping, it gets returned with a refcount of 0.
workingset_eviction: note the eviction of a page from memory. @target_memcg: the cgroup that is causing the reclaim. @page: the page being evicted. Returns a shadow entry to be stored in @page->mapping->i_pages in place ...
do_page_mkwrite: Notify the address space that the page is about to become writable so that it can prohibit this or wait for the page to get into an appropriate state. We do this without the lock held, so that it can sleep if it needs to.
__do_fault: The mmap_sem must have been held on entry, and may have been released depending on flags and vma->vm_ops->fault() return value. See filemap_fault() and __lock_page_retry().
mlock_vma_page: Mark page as mlocked if not already. If the page is on the LRU, isolate it and put it back so it moves to the unevictable list.
munlock_vma_page: munlock a vma page. @page: page to be unlocked, either a normal page or a THP page head. Returns the size of the page as a page mask (0 for a normal page, HPAGE_PMD_NR - 1 for a THP head page).
__putback_lru_fast_prepare: Prepare page for fast batched LRU putback via putback_lru_evictable_pagevec(). The fast path is available only for evictable pages with a single mapping; then we can bypass the per-cpu pvec and get better performance.
page_mkclean
page_move_anon_rmap: move a page to our anon_vma. @page: the page to move to our anon_vma. @vma: the vma the page belongs to. When a page belongs exclusively to one process after a COW event, that page can be moved into the anon_vma that belongs to just ...
do_page_add_anon_rmap: Special version of the above for do_swap_page, which often runs into pages that are exclusively owned by the current process. Everybody else should continue to use page_add_anon_rmap above.
page_add_file_rmap: add pte mapping to a file page. @page: the page to add the mapping to. @compound: charge the page as compound or small page. The caller needs to hold the pte lock.
try_to_munlock: try to munlock a page. @page: the page to be munlocked. Called from munlock code. Checks all of the VMAs mapping the page to make sure nobody else has this page mlocked. The page will be ...
rmap_walk_file: do something to file page using the object-based rmap method. @page: the page to be handled. @rwc: control variable according to each walk type. Find all the mappings of a page using the mapping pointer and the vma chains.
hugepage_add_anon_rmap: The following two functions are for anonymous (private mapped) hugepages. Unlike common anonymous pages, anonymous hugepages have no accounting code and no lru code, because we handle hugepages differently from common pages.
swap_readpage
add_to_swap_cache: resembles add_to_page_cache_locked on swapper_space, but sets SwapCache flag and private instead of mapping and index.
__delete_from_swap_cache: This must be called only on pages that have been verified to be in the swap cache.
add_to_swap: allocate swap space for a page. @page: page we want to move to swap. Allocate swap space for the page and add the page to the swap cache. Caller needs to hold the page lock.
reuse_swap_page: We can write to an anon page without COW if there are no other references to it. And as a side-effect, free up its swap: because the old content on disk will never be read, and seeking back there to write new content ...
try_to_free_swap: If swap is getting full, or if there are no more mappings of this page, then try_to_free_swap is called to free its swap space.
__frontswap_store: "Store" data from a page to frontswap and associate it with the page's swaptype and offset.
__frontswap_load: "Get" data from frontswap associated with the swaptype and offset that were specified when the data was put to frontswap, and use it to fill the specified page with data. Page must be locked and in the swap cache.
rmap_walk_ksm
ksm_migrate_page
putback_movable_page: It should be called on a page that is PG_movable.
move_to_new_page: Move a page to a newly allocated page. The page is locked and all ptes have been successfully removed. The new page will have replaced the old page if this function is successful. Return value: < 0 - error code; MIGRATEPAGE_SUCCESS - success.
do_huge_pmd_numa_page: NUMA hinting page fault entry point for trans huge pmds.
split_huge_page_to_list: This function splits a huge page into normal pages. @page can point to any subpage of the huge page to split. Split doesn't change the position of @page. The caller must hold a pin on @page, otherwise the split fails with -EBUSY. The huge page must be locked.
__collapse_huge_page_isolate
khugepaged_scan_pmd
mem_cgroup_try_charge: try charging a page. @page: page to charge. @mm: mm context of the victim. @gfp_mask: reclaim mode. @memcgp: charged memcg return. @compound: charge the page as compound or small page. Try to charge @page to the memcg that @mm belongs to ...
mem_cgroup_migrate: charge a page's replacement. @oldpage: currently circulating page. @newpage: replacement page. Charge @newpage as a replacement page for @oldpage. @oldpage will be uncharged upon free.
mem_cgroup_swap_full
__cleancache_get_page: "Get" data from cleancache associated with the poolid/inode/index that were specified when the data was put to cleancache and, if successful, use it to fill the specified page with data and return 0.
__cleancache_put_page: "Put" data from a page to cleancache and associate it with the (previously obtained per-filesystem) poolid and the page's inode and page index. Page must be locked. Note that a put_page ...
__cleancache_invalidate_page: Invalidate any data from cleancache associated with the poolid and the page's inode and page index so that a subsequent "get" will fail.
__free_zspage
z3fold_page_migrate
buffer_check_dirty_writeback: Returns whether the page has dirty or writeback buffers. If all the buffers are unlocked and clean then the PageDirty information is stale. If any of the pages are locked, it is assumed they are locked for IO.
grow_dev_page: Create the page-cache page that contains the requested block. This is used purely for blockdev mappings.
block_invalidatepage: invalidate part or all of a buffer-backed page. @page: the page which is affected. @offset: start of the range to invalidate. @length: length of the range to invalidate. block_invalidatepage() is called when all or part of the page has ...
create_page_buffers
page_zero_new_buffers: If a page has any new buffers, zero them out here, and mark them uptodate and dirty so they'll be written out (in order to prevent uninitialised block data from leaking), and clear the new bit.
__block_write_begin_int
attach_nobh_buffers: Attach the singly-linked list of buffers created by nobh_write_begin to the page (converting it to a circular linked list and taking care of page dirty races).
try_to_free_buffers
fscrypt_encrypt_pagecache_blocks: Encrypt filesystem blocks from a pagecache page. @page: The locked pagecache page containing the block(s) to encrypt. @len: Total size of the block(s) to encrypt; must be a nonzero multiple of the filesystem's block size.
fscrypt_decrypt_pagecache_blocks: Decrypt filesystem blocks in a pagecache page. @page: The locked pagecache page containing the block(s) to decrypt. @len: Total size of the block(s) to decrypt; must be a nonzero multiple of the filesystem's block size.
verify_page: Verify a single data page against the file's Merkle tree.
iomap_writepage_map: We implement an immediate ioend submission policy here to avoid needing to chain multiple ioends and hence nest mempool allocations, which can violate the forward progress guarantees we need to provide.
wait_on_page_locked: Wait for a page to be unlocked. This must be called with the caller "holding" the page, i.e. with an increased "page->count", so that the page won't go away during the wait.
wait_on_page_locked_killable
make_migration_entry
migration_entry_to_page
make_hwpoison_entry: Support for hardware poisoned pages.
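To make the shared contract concrete, here is a minimal, hypothetical kernel-style sketch of the PG_locked discipline these callers rely on. The helper names my_write_helper and my_caller are invented for illustration; lock_page(), unlock_page(), PageLocked() and BUG_ON() are the real kernel primitives referenced above:

#include <linux/mm.h>
#include <linux/pagemap.h>

/* Hypothetical helper with the same contract as write_one_page:
 * the caller must already hold the page lock. */
static int my_write_helper(struct page *page)
{
	BUG_ON(!PageLocked(page));  /* assert the caller took PG_locked */
	/* ... operate on the page while PG_locked excludes truncation ... */
	return 0;
}

static int my_caller(struct page *page)
{
	int err;

	lock_page(page);            /* sleep until PG_locked is ours */
	err = my_write_helper(page);
	unlock_page(page);          /* clear PG_locked and wake waiters */
	return err;
}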