Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd.

Source Code: include/linux/xarray.h
Create Date: 2022-07-27 06:43:47
Last Modify: 2020-03-12 14:18:49
Copyright © Brick

Function name: xa_is_value() - Determine if an entry is a value.
@entry: XArray entry.
Context: Any context.
Return: True if the entry is a value, false if it is a pointer.

Prototype: static inline bool xa_is_value(const void *entry)

Return type: bool

Parameters:

Type          Name
const void *  entry
Line 79: returns @entry bitwise-ANDed with 1.
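
Value entries are tagged pointers: an integer is shifted left by one bit and bit 0 is set, while real pointers are at least word-aligned, so their bit 0 is always clear. Testing bit 0 at line 79 is therefore enough to tell the two apart. The standalone userspace sketch below mirrors the xa_mk_value()/xa_to_value()/xa_is_value() helpers from include/linux/xarray.h (omitting the kernel's overflow check in xa_mk_value()); the main() driver is ours, for illustration only.

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * A value entry is an unsigned long shifted left by one with bit 0 set.
     * Real pointers are word-aligned, so their bit 0 is always clear.
     * These helpers mirror the definitions in include/linux/xarray.h.
     */
    static inline void *xa_mk_value(unsigned long v)
    {
            return (void *)((v << 1) | 1);
    }

    static inline unsigned long xa_to_value(const void *entry)
    {
            return (unsigned long)entry >> 1;
    }

    /* The function documented here: test the tag bit. */
    static inline bool xa_is_value(const void *entry)
    {
            return (unsigned long)entry & 1;
    }

    int main(void)
    {
            int object = 42;
            void *ptr_entry = &object;           /* aligned pointer: bit 0 clear */
            void *val_entry = xa_mk_value(1234); /* tagged integer: bit 0 set */

            printf("ptr_entry is value? %d\n", xa_is_value(ptr_entry)); /* 0 */
            printf("val_entry is value? %d\n", xa_is_value(val_entry)); /* 1 */
            printf("decoded value: %lu\n", xa_to_value(val_entry));     /* 1234 */
            return 0;
    }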
Callers (see the usage sketch after this table):
Name: Description
radix_tree_extend: Extend a radix tree so it can store key @index.
insert_entries
__radix_tree_replace
__radix_tree_delete
xas_expand: Adds nodes to the head of the tree until it has reached sufficient height to be able to contain @xas->xa_index.
xas_store: Store this entry in the XArray.
ida_alloc_range: Allocate an unused ID. @ida: IDA handle. @min: Lowest ID to allocate. @max: Highest ID to allocate. @gfp: Memory allocation flags. Allocates an ID between @min and @max, inclusive. The allocated ID will …
ida_free: Release an allocated ID. @ida: IDA handle. @id: Previously allocated ID. Context: Any context.
ida_destroy: Free the IDA tree, including all cached layers.
__check_store_iter
page_cache_delete_batch: Delete several pages from the page cache. @mapping: the mapping to which the pages belong. @pvec: pagevec with pages to delete. The function walks over mapping->i_pages and removes pages passed in @pvec from the mapping.
filemap_range_has_page: Check if a page exists in range.
__add_to_page_cache_locked
page_cache_next_miss: Find the next gap in the page cache. @mapping: Mapping. @index: Index. @max_scan: Maximum range to search. Searches the range [index, min(index + max_scan - 1, ULONG_MAX)] for the gap with the lowest index.
page_cache_prev_miss: Find the previous gap in the page cache. @mapping: Mapping. @index: Index. @max_scan: Maximum range to search. Searches the range [max(index - max_scan + 1, 0), index] for the gap with the highest index.
find_get_entry: Find and get a page cache entry. @mapping: the address_space to search. @offset: the page cache index. Looks up the page cache slot at @mapping & @offset.
find_lock_entry: Locate, pin and lock a page cache entry. @mapping: the address_space to search. @offset: the page cache index. Looks up the page cache slot at @mapping & @offset. If there is a page cache page, it is returned locked and with an increased …
pagecache_get_page: Find and get a page reference. @mapping: the address_space to search. @offset: the page index. @fgp_flags: PCG flags. @gfp_mask: gfp mask to use for the page cache data page allocation. Looks up the page cache slot at @mapping & @offset.
find_get_entries: Gang pagecache lookup. @mapping: The address_space to search. @start: The starting page cache index. @nr_entries: The maximum number of entries. @entries: Where the resulting entries are placed. @indices: The cache indices corresponding to the …
find_get_pages_range: Gang pagecache lookup. @mapping: The address_space to search. @start: The starting page index. @end: The final page index (inclusive). @nr_pages: The maximum number of pages. @pages: Where the resulting pages are placed.
find_get_pages_contig: Gang contiguous pagecache lookup. @mapping: The address_space to search. @index: The starting page index. @nr_pages: The maximum number of pages. @pages: Where the resulting pages are placed. find_get_pages_contig() works exactly like …
find_get_pages_range_tag: Find and return pages in a given range matching @tag. @mapping: the address_space to search. @index: the starting page index. @end: The final page index (inclusive). @tag: the tag index. @nr_pages: the maximum number of pages. @pages: …
filemap_map_pages
__do_page_cache_readahead: Actually reads a chunk of disk. It allocates the pages first, then submits them for I/O. This avoids the very bad behaviour which would occur if page allocations were causing VM writeback.
pagevec_remove_exceptionals: Pagevec exceptionals pruning. @pvec: The pagevec to prune. pagevec_lookup_entries() fills both pages and exceptional radix tree entries into the pagevec.
truncate_exceptional_pvec_entries: Unconditionally remove exceptional entries. Usually called from the truncate path. Note that the pagevec may be altered by this function by removing exceptional entries, similar to what pagevec_remove_exceptionals does.
truncate_inode_pages_range: Truncate a range of pages specified by start and end byte offsets. @mapping: mapping to truncate. @lstart: offset from which to truncate. @lend: offset to which to truncate (inclusive). Truncates the page cache, removing the pages that …
invalidate_mapping_pages: Invalidate all the unlocked pages of one inode. @mapping: the address_space which holds the pages to invalidate. @start: the offset 'from' which to invalidate. @end: the offset 'to' which to invalidate (inclusive). This function only …
invalidate_inode_pages2_range: Remove a range of pages from an address_space. @mapping: the address_space. @start: the page offset 'from' which to invalidate. @end: the page offset 'to' which to invalidate (inclusive). Any pages which are found to be mapped …
memfd_tag_pins
memfd_wait_for_pins: Setting SEAL_WRITE requires us to verify there's no pending writer. However, via get_user_pages(), drivers might have some pending I/O without any active user-space mappings (e.g., direct-IO, AIO). Therefore, we look at all pages …
get_unlocked_entry: Look up an entry in the page cache, wait for it to become unlocked if it is a DAX entry, and return it. The caller must subsequently call put_unlocked_entry() if it did not lock the entry, or dax_unlock_entry() if it did.
grab_mapping_entry: Find the page cache entry at a given index. If it is a DAX entry, return it with the entry locked. If the page cache doesn't contain an entry at that index, add a locked empty entry. When requesting an entry with size DAX_PMD, grab_mapping_entry() will …
dax_layout_busy_page: Find the first pinned page in @mapping. @mapping: address space to scan for a page with ref count > 1. DAX requires ZONE_DEVICE mapped pages. These pages are never 'onlined' to the page allocator, so they are considered idle when …
__dax_invalidate_entry
dax_writeback_one
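
Most of the callers above walk mapping->i_pages and use xa_is_value() to separate real pages (pointers) from shadow or DAX "exceptional" entries (values). The sketch below shows that pattern against the public XArray API; walk_entries() and its log messages are hypothetical, not taken from any one caller above.

    #include <linux/xarray.h>
    #include <linux/printk.h>

    /* Hypothetical walk over an XArray holding both pointers and values. */
    static void walk_entries(struct xarray *xa)
    {
            unsigned long index;
            void *entry;

            xa_for_each(xa, index, entry) {
                    if (xa_is_value(entry)) {
                            /* Shadow/exceptional entry: decode the integer. */
                            pr_info("index %lu holds value %lu\n",
                                    index, xa_to_value(entry));
                            continue;
                    }
                    /* A real pointer, e.g. a struct page * in the page cache. */
                    pr_info("index %lu holds pointer %p\n", index, entry);
            }
    }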