Function report

Linux Kernel v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/page-flags.h  Create Date: 2022-07-28 05:37:07
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name: PageUptodate

Proto: static inline int PageUptodate(struct page *page)

Type: int

Parameter:

Type            Parameter Name
struct page *   page
495  page = compound_head(page) - page flags are kept on the head page of a compound page
496  ret = test_bit(PG_uptodate, &(page)->flags) - test_bit: determine whether a bit is set; @nr: bit number to test, @addr: address to start counting from
505  If ret Then smp_rmb() - ensures the page contents are read only after PG_uptodate is observed set; pairs with the write barrier in SetPageUptodate
508  Return ret
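
Putting the annotated lines together, here is a minimal sketch of the v5.5 definition from include/linux/page-flags.h (comments paraphrased from the kernel source):

static inline int PageUptodate(struct page *page)
{
	int ret;

	/* Page flags live on the head page of a compound page. */
	page = compound_head(page);
	ret = test_bit(PG_uptodate, &(page)->flags);
	/*
	 * Make sure data read out of the page is loaded only _after_
	 * page->flags has been checked for PG_uptodate; pairs with the
	 * smp_wmb() in SetPageUptodate(). The barrier can be skipped
	 * when the page is not uptodate, since nothing will be read
	 * from the page in that case.
	 */
	if (ret)
		smp_rmb();

	return ret;
}

SetPageUptodate() issues the matching write barrier after the page contents are written, so a reader that sees PG_uptodate set is guaranteed to see valid data.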
Caller
Name - Description
uprobe_write_opcode - NOTE: expect the breakpoint instruction to be the smallest-size instruction for the architecture.
unaccount_page_cache_page
wait_on_page_bit_common
generic_file_buffered_read - generic file read routine; uses the mapping->a_ops->readpage() function for the actual low-level stuff. @iocb: the iocb to read, @iter: data destination, @written: already copied.
filemap_fault - read in file data for page fault handling; invoked via the vma operations vector for a mapped memory region to read in file data during a page fault. @vmf: struct vm_fault containing details of the fault.
filemap_map_pages
wait_on_page_read
do_read_cache_page
__set_page_dirty_nobuffers - for address_spaces which do not use buffers: just tag the page as dirty in the xarray. Also used when a single buffer is being dirtied: we want to set the page dirty in that case, but not all the buffers; this is a "bottom-up" dirtying, whereas __set_page_dirty_buffers() is "top-down".
do_swap_page - we enter with non-exclusive mmap_sem (to exclude vma changes, but allow concurrent faults) and pte mapped but not yet locked; we return with pte unmapped and unlocked. We return with the mmap_sem locked or unlocked in the same cases as does filemap_fault().
mincore_page - later we can get more picky about what "in core" means precisely; for now, simply check whether the page is in the page cache and is up to date, i.e. that no page-in operation would be required.
swap_readpage
add_to_swap - allocate swap space for the page and add the page to the swap cache; @page: page we want to move to swap. Caller needs to hold the page lock.
ksm_might_need_to_copy
migrate_page_states - copy the page to its new location.
vfs_dedupe_get_page - read a page's worth of file data into the page cache.
vfs_dedupe_file_range_compare - compare extents of two files to see if they are the same; caller must have locked both inodes to prevent write races.
page_get_link - get the link contents into pagecache.
simple_write_begin
simple_write_end - .write_end helper for non-block-device FSes; does the minimum needed for updating a page after writing is done. @file, @mapping, @pos, @len, @copied, @page, @fsdata: see .write_end of address_space_operations.
page_cache_pipe_buf_steal - attempt to steal a page from a pipe buffer; this should perhaps go into a vm helper function, it's already simplified quite a bit by the addition of remove_mapping(). If success is returned, the caller may attempt to reuse the page.
page_cache_pipe_buf_confirm - check whether the contents of buf are OK to access; since the content is a page cache page, IO may be in flight.
__set_page_dirty - mark the page dirty, set it dirty in the page cache, and mark the inode dirty. If warn is true, emit a warning if the page is not uptodate and has not been truncated. The caller must hold lock_page_memcg().
init_page_buffers - initialise the state of a blockdev page's buffers.
create_empty_buffers - we attach and possibly dirty the buffers atomically wrt __set_page_dirty_buffers() via private_lock; try_to_free_buffers is already excluded via the page lock.
page_zero_new_buffers - if a page has any new buffers, zero them out here, mark them uptodate and dirty so they'll be written out (in order to prevent uninitialised block data from leaking), and clear the new bit.
__block_write_begin_int
block_write_end
nobh_write_begin - on entry, the page is fully not uptodate; on exit, the page is fully uptodate in the areas outside (from, to). The filesystem needs to handle block truncation upon failure.
nobh_truncate_page
block_truncate_page
do_mpage_readpage - the worker routine which does all the work of mapping the disk blocks and constructs the largest possible bios, submitting them for IO if the blocks are not contiguous on the disk.
clean_buffers - we have our BIO, so we can now mark the buffers clean; make sure to only clean buffers which we know we'll be writing.
__mpage_writepage
verify_page - verify a single data page against the file's Merkle tree.
iomap_read_inline_data
__iomap_write_begin
__iomap_write_end
iomap_write_end_inline
iomap_page_mkwrite_actor
page_seek_hole_data - seek for SEEK_DATA / SEEK_HOLE within @page, starting at @lastoff; returns true if found and updates @lastoff to the offset in file.
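
As the caller list shows, the common pattern is to test PageUptodate as an unlocked fast path, then recheck under the page lock and fall back to ->readpage() when the flag is clear. The sketch below is modeled loosely on do_read_cache_page(); the helper name is hypothetical, and refcounting and truncation checks are elided:

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/err.h>

/* Hypothetical helper: ensure @page holds valid data before use. */
static struct page *read_page_if_stale(struct address_space *mapping,
				       struct page *page, struct file *file)
{
	int err;

	if (PageUptodate(page))
		return page;	/* contents already valid, no IO needed */

	lock_page(page);
	/* Recheck under the lock: another task may have done the read. */
	if (PageUptodate(page)) {
		unlock_page(page);
		return page;
	}

	/* ->readpage() starts the IO and unlocks the page on completion. */
	err = mapping->a_ops->readpage(file, page);
	if (err)
		return ERR_PTR(err);

	wait_on_page_locked(page);
	if (!PageUptodate(page))
		return ERR_PTR(-EIO);	/* IO completed but failed */
	return page;
}

Note the double check: the unlocked PageUptodate() test is the fast path, and the smp_rmb() inside it is what makes reading the page contents safe without taking the lock once the flag is seen set.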