Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source code: include/asm-generic/atomic-long.h  Create date: 2022-07-27 06:38:50
Last modified: 2020-03-12 14:18:49  Copyright © Brick

Function name: atomic_long_read

Function prototype: static inline long atomic_long_read(const atomic_long_t *v)

Return type: long

Parameters:

Type | Name
const atomic_long_t * | v
522  Return: atomic_read(v)
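For context, here is a minimal sketch of the wrapper this page documents, assuming the v5.5 generated asm-generic/atomic-long.h layout (a sketch, not the verbatim header): atomic_long_t maps to atomic64_t on 64-bit kernels and to atomic_t on 32-bit kernels, and line 522 above is the 32-bit branch.

    #ifdef CONFIG_64BIT
    typedef atomic64_t atomic_long_t;

    /* 64-bit kernels: long is 64 bits wide, delegate to atomic64_read() */
    static inline long atomic_long_read(const atomic_long_t *v)
    {
            return atomic64_read(v);
    }
    #else
    typedef atomic_t atomic_long_t;

    /* 32-bit kernels: long is 32 bits wide, delegate to atomic_read();
     * this is the branch annotated at line 522 above */
    static inline long atomic_long_read(const atomic_long_t *v)
    {
            return atomic_read(v);
    }
    #endif

The read is a single relaxed atomic load: it returns a consistent snapshot of the counter without taking a lock, so the value may already be stale by the time the caller acts on it.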
Callers

Name | Description
show_mem
percpu_ref_switch_to_atomic_rcu
gen_pool_alloc_algo_owner | allocate special memory from the pool; @pool: pool to allocate from; @size: number of bytes to allocate from the pool; @algo: algorithm passed from caller; @data: data passed to algorithm; @owner: optionally retrieve the chunk owner
gen_pool_avail | get the available space of the memory pool
check_mm
get_work_pwq
get_work_pool | return the worker_pool a given work was associated with; @work: the work item of interest. Pools are created and destroyed under wq_pool_mutex, and allow read access under RCU read lock. As such, this function should be called under wq_pool_mutex or inside of an rcu_read_lock() region.
get_work_pool_id | return the worker pool ID a given work is associated with; @work: the work item of interest. Return: the worker_pool ID @work was last associated with, or %WORK_OFFQ_POOL_NONE if none.
work_is_canceling
calc_global_load | update the avenrun load estimates 10 ticks after the CPUs have updated calc_load_tasks. Called from the global timer code.
__mutex_owner | Internal helper function; C doesn't allow us to hide it. DO NOT USE (outside of mutex code).
__mutex_trylock_or_owner | Trylock variant that returns the owning task on failure.
__mutex_handoff | Give up ownership to a specific task; when @task = NULL, this is equivalent to a regular unlock. Sets PICKUP on a handoff, clears HANDOFF, preserves WAITERS. Provides RELEASE semantics like a regular unlock; __mutex_trylock() provides the matching ACQUIRE semantics for the handoff.
ww_mutex_set_context_fastpath | After acquiring the lock with the fastpath, where we do not hold wait_lock, set ctx and wake up any waiters so they can recheck.
__mutex_unlock_slowpath
rwsem_test_oflags | Test the flags in the owner field.
__rwsem_set_reader_owned | The task_struct pointer of the last owning reader will be left in the owner field. Note that the owner value just indicates the task has owned the rwsem previously; it may not be the real owner or one of the real owners anymore.
rwsem_set_nonspinnable | Set the RWSEM_NONSPINNABLE bits if the RWSEM_READER_OWNED flag remains set. Otherwise, the operation will be aborted.
rwsem_owner | Return just the real task structure pointer of the owner.
rwsem_owner_flags | Return the real task structure pointer of the owner and the embedded flags in the owner. pflags must be non-NULL.
rwsem_mark_wake | Handle the lock release when processes blocked on it can now run; if we come here from up_xxxx(), then the RWSEM_FLAG_WAITERS bit must have been set.
rwsem_try_write_lock | This function must be called with sem->wait_lock held to prevent race conditions between checking the rwsem wait list and setting sem->count accordingly. If wstate is WRITER_HANDOFF, it will make sure that either the handoff bit is set or the lock is acquired with the handoff bit cleared.
rwsem_down_read_slowpath | Wait for the read lock to be granted.
rwsem_down_write_slowpath | Wait until we successfully acquire the write lock.
rcu_torture_stats_print | Print torture statistics.
free_event | Used to free events which have a known refcount of 1, such as in error paths where the event isn't exposed yet and inherited events.
perf_mmap
do_shrink_slab
node_page_state | Determine the per-node value of a stat item.
workingset_eviction | note the eviction of a page from memory; @target_memcg: the cgroup that is causing the reclaim; @page: the page being evicted. Returns a shadow entry to be stored in @page->mapping->i_pages in place of the evicted @page so that a later refault can be detected.
workingset_refault | evaluate the refault of a previously evicted page; @page: the freshly allocated replacement page; @shadow: shadow entry of the evicted page. Calculates and evaluates the refault distance of the previously evicted page in the context of the node and the memcg whose memory pressure caused the eviction.
vmalloc_nr_pages
__purge_vmap_area_lazy | Purges all lazily-freed vmap areas.
get_swap_pages
si_swapinfo
hugetlb_report_usage
clear_hwpoisoned_pages
propagate_protected_usage
page_counter_set_max | set the maximum number of pages allowed; @counter: counter; @nr_pages: limit to set. Returns 0 on success, or -EBUSY if the current number of pages on the counter already exceeds the specified limit.
page_counter_set_min | set the amount of protected memory; @counter: counter; @nr_pages: value to set. The caller must serialize invocations on the same counter.
page_counter_set_low | set the amount of protected memory; @counter: counter; @nr_pages: value to set. The caller must serialize invocations on the same counter.
memcg_events
mem_cgroup_oom_control_read
__memory_events_show
mem_cgroup_protected | check if memory consumption is in the normal range; @root: the top ancestor of the sub-tree being checked; @memcg: the memory cgroup to check. WARNING: This function is not stateless! It can only be used as part of a top-down tree iteration, not for isolated queries.
swap_events_show
zs_get_total_pages
get_io_context | increment reference count to io_context; @ioc: io_context to get. Increment reference count to @ioc.
put_io_context | put a reference of io_context; @ioc: io_context to put. Decrement reference count of @ioc and release it if the count reaches zero.
ima_show_htable_value
__destroy_inode
sb_prepare_remount_readonly
__ns_get_path
fsnotify_unmount_inodes | an sb is unmounting; handle any watched inodes. @sb: superblock being unmounted. Called during unmount with no locks held, so needs to be safe against concurrent modifiers. We temporarily drop sb->s_inode_list_lock and CAN block.
ep_insert | Must be called with "mtx" held.
io_account_mem
rwsem_is_locked | In all implementations count != 0 means locked (see the sketch after this list).
zone_managed_pages
percpu_ref_is_zero | test whether the percpu refcount has reached zero
totalram_pages
get_io_context_active | get an active reference on the io_context
global_numa_state
zone_numa_state_snapshot
global_zone_page_state
global_node_page_state
zone_page_state
zone_page_state_snapshot | More accurate version that also considers the currently pending deltas. For that we need to loop over all CPUs to find the current deltas. There is no synchronization, so the result cannot be exactly accurate either.
get_mm_counter | per-process (per-mm_struct) statistics.
mm_pgtables_bytes
frag_mem_limit | Memory tracking functions.
page_counter_read
memcg_page_state | @idx can be of type enum memcg_stat_item or node_stat_item. Keep in sync with memcg_exact_page_state().
lruvec_page_state
sk_memory_allocated
proto_memory_allocated
bdi_has_dirty_io
nfs_have_writebacks
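As an illustration of the lockless-snapshot pattern shared by the callers above, here is a minimal sketch modeled on rwsem_is_locked() from include/linux/rwsem.h in v5.5:

    /* Sketch: sample the rwsem count word with one atomic load.
     * In all implementations count != 0 means locked. */
    static inline int rwsem_is_locked(struct rw_semaphore *sem)
    {
            return atomic_long_read(&sem->count) != 0;
    }

Because atomic_long_read() only takes a snapshot, such helpers answer whether the lock was held at the instant of the read; a caller that needs the answer to remain true must hold the lock or rely on some other serialization.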