Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/asm-generic/atomic-instrumented.h    Create Date: 2022-07-27 06:38:46
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Function name: atomic_read

Function prototype: static inline int atomic_read(const atomic_t *v)

Return type: int

Parameters:

Type              Parameter name
const atomic_t *  v
26  kasan_check_read(v, sizeof(*v))
27  Return: arch_atomic_read(v) - read atomic variable. @v: pointer of type atomic_t. Atomically reads the value of @v.
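Taken together, the two lines above show that the instrumented atomic_read() is only a thin wrapper: a KASAN read check followed by the architecture-level load. The sketch below is a minimal reconstruction based on that logic; it mirrors what include/asm-generic/atomic-instrumented.h provides in this kernel version, though the exact instrumentation helper may differ in other trees.

    /*
     * Minimal sketch of the wrapper described above (asm-generic,
     * instrumented variant). Line 26 reports the read of sizeof(*v)
     * bytes at v to KASAN; line 27 forwards to the architecture
     * implementation arch_atomic_read().
     */
    static inline int atomic_read(const atomic_t *v)
    {
            kasan_check_read(v, sizeof(*v));  /* instrumentation: validate the read */
            return arch_atomic_read(v);       /* the real atomic load */
    }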
Callers
Name - Description (a minimal usage sketch follows this list)
current_is_single_threaded - Returns true if the task does not share ->mm with another thread/process.
rhashtable_shrink - Shrink hash table while allowing concurrent lookups. @ht: the hash table to shrink. This function shrinks the hash table to fit, i
refcount_dec_not_one - decrement a refcount if it is not 1. @r: the refcount. No atomic_t counterpart, it decrements unless the value is 1, in which case it will return false
test_bucket_stats
threadfunc
test_rht_init
fail_dump
should_fail - This code is stolen from failmalloc-1.0: http://www.nongnu.org/failmalloc/
sbq_wake_ptr
sbitmap_queue_wake_all
sbitmap_queue_show
arch_show_interrupts - /proc/interrupts printing for arch specific interrupts
arch_irq_stat
tboot_wait_for_aps
tboot_dying_cpu
mce_default_notifier
mce_timed_out - Check if a timeout waiting for other CPUs happened.
mce_start - Start of Monarch synchronization. This waits until all CPUs have entered the exception handler and then determines if any of them saw a fatal event that requires panic. Then it executes them in the entry order. TBD double check parallel CPU hotunplug
mce_end - Synchronize between CPUs after main scanning loop. This invokes the bulk of the Monarch processing.
mce_adjust_timer_default
__wait_for_cpus
free_all_child_rdtgrp
rmdir_all_sub - Forcibly remove all subdirectories under root.
smp_stop_nmi_callback
reserve_eilvt_offset
kgdb_nmi_handler
__kgdb_notify
__mmput
mm_release - Please note the differences between mmput and mm_release
copy_process - Create a new process
unshare_fd - Unshare file descriptor table if it is being shared
mm_update_next_owner - A task is exiting. If it owned this mm, find a new owner for the mm.
tasklet_action_common
__sigqueue_alloc - allocate a new signal queue record; this may be called without locks if and only if t == current, otherwise an appropriate lock must be held to stop the target task from exiting
__usermodehelper_disable - Prevent new helpers from being started. @depth: New value to assign to usermodehelper_disabled. Set usermodehelper_disabled to @depth and wait for running helpers to exit.
__need_more_worker - Policy functions. These define the policies on how the global worker pools are managed. Unless noted otherwise, these functions assume that they're being called with pool->lock held.
keep_working - Do I need to keep working? Called from currently running workers.
worker_enter_idle - enter idle state. @worker: worker which is entering idle state. @worker is entering idle state. Update stats and idle timer if necessary. LOCKING: spin_lock_irq(pool->lock).
flush_workqueue_prep_pwqs - prepare pwqs for workqueue flushing. @wq: workqueue being flushed. @flush_color: new flush color, < 0 for no-op. @work_color: new work color, < 0 for no-op. Prepare pwqs for workqueue flushing
put_cred_rcu - The RCU callback to actually dispose of a set of credentials
__put_cred - Destroy a set of credentials. @cred: The record to release. Destroy a set of credentials on which no references remain.
exit_creds - Clean up a task's credentials when it exits
copy_creds - Copy credentials
commit_creds - Install new credentials upon the current task. @new: The credentials to be assigned. Install a new set of credentials to the current task, using RCU to replace the old set. Both the objective and the subjective credentials pointers are
abort_creds - Discard a set of credentials and unlock the current task. @new: The credentials that were going to be applied. Discard a set of credentials that were under construction and unlock the current task.
override_creds - Override the current process's subjective credentials. @new: The credentials to be assigned. Install a set of temporary override subjective credentials on the current process, returning the old set for later reversion.
revert_creds - Revert a temporary subjective credentials override. @old: The credentials to be restored. Revert a temporary set of override subjective credentials to an old set, discarding the override set.
async_schedule_node_domain - NUMA specific version of async_schedule_domain. @func: function to execute asynchronously. @data: data pointer to pass to the function. @node: NUMA node that we want to schedule this on or close to. @domain: the domain
cpu_report_state - Called to poll specified CPU's state, for example, when waiting for a CPU to come online.
cpu_check_up_prepare - If CPU has died properly, set its state to CPU_UP_PREPARE and return success
atomic_inc_below
__request_module - try to load a kernel module. @wait: wait (or not) for the operation to complete. @fmt: printf style format string for the name of the module. @...: arguments as specified in the format string
nr_iowait_cpu - Consumers of these two interfaces, like for example the cpuidle menu governor, are using nonsensical data. Preferring shallow idle state selection for a CPU that has IO-wait which might not even end up running the task when it does become runnable.
account_idle_time - Account for idle time. @cputime: the CPU time spent in idle wait
cpupri_find - find the best (lowest-pri) CPU in the system. @cp: The cpupri context. @p: The task. @lowest_mask: A mask to fill in with selected CPUs (or NULL). Note: This function returns the recommended CPUs as calculated during the current invocation
__free_domain_allocs
claim_allocations - NULL the sd_data elements we've used to build the sched_domain and sched_group structure so that the subsequent __free_domain_allocs() will not free the data we're using.
ipi_sync_rq_state
membarrier_private_expedited
sync_runqueues_membarrier_state
membarrier_register_global_expedited
membarrier_register_private_expedited
osq_wait_next - Get a stable @node->next pointer, either for unlock() or unqueue() purposes. Can return NULL in case we were the last queued and we updated @lock instead.
queued_write_lock_slowpath - acquire write lock of a queue rwlock. @lock: Pointer to queue rwlock structure
lock_torture_cleanup - Forward reference.
hib_wait_io
crc32_threadfn - CRC32 update function that runs in its own thread.
lzo_compress_threadfn - Compression function that runs in its own thread.
save_image_lzo - Save the suspend image data compressed with LZO. @handle: Swap map handle to use for saving the image. @snapshot: Image to read data from. @nr_to_write: Number of pages to save.
lzo_decompress_threadfn - Decompression function that runs in its own thread.
load_image_lzo - Load compressed image data and decompress them with LZO. @handle: Swap map handle to use for loading data. @snapshot: Image to copy uncompressed data into. @nr_to_read: Number of pages to load.
printk_safe_log_store - Add a message to per-CPU context-dependent buffer
__printk_safe_flush - Flush data from the associated per-CPU buffer. The function can be called either via IRQ work or independently.
synchronize_hardirq - wait for pending hard IRQ handlers (on other CPUs). @irq: interrupt number to wait for. This function waits for any pending hard IRQ handlers for this interrupt to complete before returning
synchronize_irq - wait for pending IRQ handlers (on other CPUs). @irq: interrupt number to wait for. This function waits for any pending IRQ handlers for this interrupt to complete before returning. If you use this function while
note_interrupt
rcu_gp_is_expedited - Should normal grace-period primitives be expedited? Intended for use within RCU. Note that this function takes the rcu_expedited sysfs/boot variable and rcu_scheduler_active into account as well as the rcu_expedite_gp() nesting
rcu_torture_stats_print - Print torture statistics
rcu_torture_barrier - kthread function to drive and coordinate RCU barrier testing.
rcu_torture_cleanup
rcu_perf_wait_shutdown - If performance tests complete, wait for shutdown to commence.
rcu_perf_writer - RCU perf writer kthread. Repeatedly does a grace period.
rcu_perf_shutdown - RCU perf shutdown kthread. Just waits to be awakened, then shuts down system.
rcu_perf_init
rcu_dynticks_eqs_online - Reset the current CPU's ->dynticks counter to indicate that the newly onlined CPU is no longer in an extended quiescent state
rcu_dynticks_curr_cpu_in_eqs - Is the current CPU in an extended quiescent state? No ordering, as we are sampling CPU-local information.
rcu_eqs_special_set - Set the special (bottom) bit of the specified CPU so that it will take special action (such as flushing its TLB) on the next exit from an extended quiescent state. Returns true if the bit was successfully set, or false if the CPU was not in
rcu_eqs_enter - Enter an RCU extended quiescent state, which can be either the idle loop or adaptive-tickless usermode execution. We crowbar the ->dynticks_nmi_nesting field to zero to allow for the possibility of usermode upcalls having messed up our count
rcu_nmi_exit_common - If we are returning from the outermost NMI handler that interrupted an RCU-idle period, update rdp->dynticks and rdp->dynticks_nmi_nesting to let the RCU grace-period handling know that the CPU is back to being RCU-idle
rcu_eqs_exit - Exit an RCU extended quiescent state, which can be either the idle loop or adaptive-tickless usermode execution. We crowbar the ->dynticks_nmi_nesting field to DYNTICK_IRQ_NONIDLE to allow for the possibility of usermode upcalls messing up our count of
rcu_nmi_enter_common - inform RCU of entry to NMI context. @irq: Is this call from rcu_irq_enter? If the CPU was idle from RCU's viewpoint, update rdp->dynticks and rdp->dynticks_nmi_nesting to let the RCU grace-period handling know that the CPU is active
rcu_barrier_trace - Helper function for rcu_barrier() tracing. If tracing is disabled, the compiler is expected to optimize this away.
cgroup_destroy_root
cgroup_setup_root
css_task_iter_advance
proc_cgroupstats_show - Display information about each subsystem and each hierarchy
cgroup_subsys_states_read
audit_log_lost - conditionally log lost audit message event. @message: the message stating reason for lost audit message. Emit at least 1 message per second, even if audit_rate_check is throttling. Always increment the lost messages counter.
audit_receive_msg
kgdb_io_ready - Return true if there is a valid kgdb I/O module
kgdb_reenter_check
kgdb_cpu_enter
kgdb_console_write
kgdb_schedule_breakpoint
getthread
kdb_disable_nmi
kdb_common_init_state
kdb_stub
ring_buffer_resize - resize the ring buffer. @buffer: the buffer to resize. @size: the new size. @cpu_id: the cpu buffer to resize. Minimum size is 2 * BUF_PAGE_SIZE. Returns 0 on success and < 0 on failure.
ring_buffer_lock_reserve - reserve a part of the buffer. @buffer: the ring buffer to reserve from. @length: the length of the data to reserve (excluding event header). Returns a reserved event on the ring buffer to copy directly to
ring_buffer_write - write data to the buffer without reserving. @buffer: The ring buffer to write to
ring_buffer_record_off - stop all writes into the buffer. @buffer: The ring buffer to stop writes to
ring_buffer_record_on - restart writes into the buffer. @buffer: The ring buffer to start writes to
ring_buffer_record_is_on - return true if the ring buffer can write. @buffer: The ring buffer to see if write is enabled. Returns true if the ring buffer is in a state that it accepts writes.
ring_buffer_record_is_set_on - return true if the ring buffer is set writable. @buffer: The ring buffer to see if write is set enabled. Returns true if the ring buffer is set writable by ring_buffer_record_on().
tracing_record_taskinfo_skip
function_trace_call
start_critical_timing
stop_critical_timing
ftrace_pop_return_trace - Retrieve a function return address to the trace stack on thread info.
trace_synth
____bpf_send_signal
__irq_work_queue_local - Enqueue on current CPU, work must already be claimed and preempt disabled
irq_work_sync - Synchronize against the irq_work @entry, ensures the entry is not currently in use.
stack_map_get_build_id_offset
__perf_event_task_sched_out - Called from scheduler to remove the events of the current task, with interrupts disabled
__perf_event_task_sched_in - Called from scheduler to add the events of the current task with interrupts disabled. We restore the event value and then enable it. This does not protect us against NMI, but enable() sets the enabled bit in the control field of event _before_
perf_mmap_close - A buffer can be mmap()ed multiple times; either directly through the same event, or through other events by use of perf_event_set_output(). In order to undo the VM accounting done by perf_mmap() we need to destroy
perf_event_task
perf_event_comm
perf_event_namespaces
perf_event_mmap
perf_event_ksymbol
perf_event_bpf_event
__perf_event_overflow - Generic event overflow handling, sampling.
account_event
perf_event_set_output
perf_aux_output_begin - This is called before hardware starts writing to the AUX area to obtain an output handle and make sure there's room in the buffer
perf_event_max_stack_handler - Used for sysctl_perf_event_max_stack and sysctl_perf_event_max_contexts_per_stack.
uprobe_munmap - Called in context of a munmap of a vma.
xol_take_insn_slot - search for a free slot.
padata_do_parallel - padata parallelization function. @ps: padata shell. @padata: object to be parallelized. @cb_cpu: pointer to the CPU that the serialization callback function should run on. If it's not in the serial cpumask of @pinst (i
static_key_count - There are similar definitions for the !CONFIG_JUMP_LABEL case in jump_label
static_key_slow_inc
static_key_enable_cpuslocked
static_key_disable_cpuslocked
oom_killer_disable - disable OOM killer. @timeout: maximum timeout to wait for oom victims in jiffies. Forces all page allocations to fail rather than trigger OOM killer
task_will_free_mem - Checks whether the given task is dying or exiting and likely to release its address space. This means that all threads and processes sharing the same mm have to be killed or exiting. Caller has to make sure that task->mm is stable (hold task_lock or
page_mapped - Return true if this page is mapped into pagetables. For compound page it returns true if any subpage of compound page is mapped.
__page_mapcount - Slow path of page_mapcount() for compound pages
wait_iff_congested - Conditionally wait for a backing_dev to become uncongested or a pgdat to complete writes. @sync: SYNC or ASYNC IO. @timeout: timeout in jiffies. In the event of a congested backing_dev (any backing_dev) this waits for up to @timeout
change_pte_range
anon_vma_free
page_expected_state - A bad page could be due to a number of fields. Instead of multiple branches, try and check multiple fields with one check. The caller must do a detailed check if necessary.
free_pages_check_bad
check_new_page_bad
swap_use_vma_readahead
swapin_nr_pages
page_trans_huge_map_swapcount
swaps_poll
swaps_open
__frontswap_curr_pages
__frontswap_unuse_pages
__mmu_notifier_register - Same as mmu_notifier_register but here the caller must hold the mmap_sem in write mode. A NULL mn signals the notifier is being registered for itree mode.
mmu_notifier_unregister - This releases the mm_count pin automatically and frees the mm structure if it was the last user of it. It serializes against running mmu notifiers with SRCU and against mmu_notifier_unregister with the unregister lock + SRCU
__mmu_interval_notifier_insert
ksm_test_exit - ksmd, and unmerge_and_remove_all_rmap_items(), must not touch an mm's page tables after it has passed through ksm_exit() - which, if necessary, takes mmap_sem briefly to serialize against them. ksm_exit() does not set
__buffer_migrate_page
shrink_huge_zero_page_count
__split_huge_page_tail
total_mapcount
page_trans_huge_mapcount - This calculates accurately how many mappings a transparent hugepage has (unlike page_mapcount() which isn't fully accurate)
khugepaged_test_exit
lock_page_memcg - lock a page->mem_cgroup binding. @page: the page. This function protects unlocked LRU pages from being moved to another cgroup
__delete_object - Mark the object as not allocated and schedule RCU freeing via put_object().
kmemleak_scan - Scan data sections and all the referenced memory blocks allocated via the kernel's standard allocators. This function must be called with the scan_mutex held.
zpool_unregister_driver - unregister a zpool implementation
msgctl_info
bio_put - release a reference to a bio. @bio: bio to release reference to. Description: Put a reference to a &struct bio, either one you have gotten with bio_alloc, bio_get or bio_clone_*. The last put of a bio will free it.
bio_remaining_done
hctx_may_queue - For shared tag users, we track the number of currently active users and attempt to provide a fair share of the tag depth for each of them.
atomic_inc_below - Increment 'v', if 'v' is below 'below'. Returns true if we succeeded, false if 'v' + 1 would be bigger than 'below'.
blkcg_print_stat
blkcg_can_attach - We cannot support shared io contexts, as we have no means to support two tasks with the same ioc in two different groups without major rework of the main cic data structures. For now we allow a task to change
blkcg_scale_delay - Scale the accumulated delay based on how long it has been since we updated the delay. We only call this when we are adding delay, in case it's been a while since we added delay, and when we are checking to see if we need to
blkcg_maybe_throttle_blkg - This is called when we want to actually walk up the hierarchy and check to see if we need to throttle, and then actually throttle if there is some accumulated delay. This should only be called upon return to user space so
blk_iolatency_enabled
__blkcg_iolatency_throttle
scale_cookie_change - We scale the qd down faster than we scale up, so we need to use this helper to adjust the scale_cookie accordingly so we don't prematurely get scale_cookie at DEFAULT_SCALE_COOKIE and unthrottle too much
check_scale_change - Check our parent and see if the scale cookie has changed.
iolatency_check_latencies
blkiolatency_timer_fn
iolatency_pd_init
current_hweight
iocg_activate
iocg_kick_delay
ioc_timer_fn
bfq_update_has_short_ttime
queue_pm_only_show
blk_mq_debugfs_tags_show
hctx_active_show
proc_key_users_show
avc_get_hash_stats
selinux_secmark_enabled - Check to see if SECMARK is currently enabled. Description: This function checks the SECMARK reference counter to see if any SECMARK targets are currently configured, if the reference counter is greater than
tomoyo_supervisor - Ask for the supervisor's decision
tomoyo_read_stat - Read statistic data. @head: Pointer to "struct tomoyo_io_buffer". Returns nothing.
tomoyo_commit_condition - Commit "struct tomoyo_condition". @entry: Pointer to "struct tomoyo_condition". Returns pointer to "struct tomoyo_condition" on success, NULL otherwise. This function merges duplicated entries. This function returns NULL if
tomoyo_try_to_gc - Try to kfree() an entry. @type: One of values in "enum tomoyo_policy_id". @element: Pointer to "struct list_head". Returns nothing. Caller holds tomoyo_policy_lock mutex.
tomoyo_collect_entry - Try to kfree() deleted elements. Returns nothing.
tomoyo_get_group - Allocate memory for "struct tomoyo_path_group"/"struct tomoyo_number_group". @param: Pointer to "struct tomoyo_acl_param". @idx: Index number. Returns pointer to "struct tomoyo_group" on success, NULL otherwise.
tomoyo_get_name - Allocate permanent memory for string data. @name: The string to store into the permanent memory. Returns pointer to "struct tomoyo_path_info" on success, NULL otherwise.
ima_rdwr_violation_check - Only invalidate the PCR for measured files: opening a file for write when already open for read results in a time of measure, time of use (ToMToU) error; opening a file for read when already open for write,
ima_check_last_writer
__do_execve_file - sys_execve() executes a new program.
inode_add_lru - Add inode to LRU if needed (inode is unused and clean). Needs inode->i_lock held.
evict_inodes - evict all evictable inodes for a superblock. @sb: superblock to operate on. Make sure that no inodes with zero refcount are retained
invalidate_inodes - attempt to free all inodes on a superblock. @sb: superblock to operate on. @kill_dirty: flag to guide handling of dirty inodes. Attempts to free all inodes for a given superblock. If there were any
inode_lru_isolate - Isolate the inode from the LRU in preparation for freeing it
__inode_dio_wait - Direct i/o helper functions
inode_dio_wait - wait for outstanding DIO requests to finish. @inode: inode to wait for. Waits for all pending direct I/O requests to finish so that we can proceed with a truncate or equivalent operation
expand_fdtable - Expand the file descriptor table. This function will allocate a new fdtable and both fd array and fdset, of the given size. Return <0 error code on error; 1 on successful completion.
__fget_light - Lightweight file lookup - no refcnt increment if fd table isn't shared. You can use this instead of fget if you satisfy all of the following conditions: 1) You must call fput_light before exiting the syscall and returning control to userspace (i
wb_wait_for_completion - wait for completion of bdi_writeback_works. @done: target wb_completion. Wait for one or more work items issued to @bdi with their ->done field set to @done, which should have been initialized with DEFINE_WB_COMPLETION()
writeback_single_inode - Write out an inode's dirty pages. Either the caller has an active reference on the inode or the inode has I_WILL_FREE set. This function is designed to be called for writing back one inode which we go e
__brelse - Decrement a buffer_head's reference count
__sync_dirty_buffer - For a data-integrity writeout, we need to wait upon any in-progress I/O and then start new I/O and then wait upon it. The caller must have a ref on the buffer_head.
buffer_busy - try_to_free_buffers() checks if all the buffers on this particular page are unused, and releases them if so
fsnotify_unmount_inodes - an sb is unmounting, handle any watched inodes. @sb: superblock being unmounted. Called during unmount with no locks held, so needs to be safe against concurrent modifiers. We temporarily drop sb->s_inode_list_lock and CAN block.
fsnotify_destroy_group - Trying to get rid of a group. Remove all marks, flush all events and release the group reference. Note that another thread calling fsnotify_clear_marks_by_group() may still hold a ref to the group.
fanotify_add_new_mark
SYSCALL_DEFINE2 - fanotify syscalls
aio_ring_mremap
__get_reqs_available
aio_read_events
__req_need_defer
io_should_wake
io_cqring_wait - Wait until events become available, if we don't already have some. The application must reap them itself, as they reside on the shared cq ring.
io_wq_can_queue
io_wqe_enqueue
check_conflicting_open - see if the given file points to an inode that has an existing open that would conflict with the desired lease
mb_cache_destroy - destroy cache. @cache: the cache to destroy. Free all entries in cache and cache itself. Caller must make sure nobody (except shrinker) can reach @cache when calling this.
zap_threads
iomap_page_release
iomap_finish_page_writeback
iomap_writepage_map - We implement an immediate ioend submission policy here to avoid needing to chain multiple ioends and hence nest mempool allocations which can violate forward progress guarantees we need to provide
invalidate_dquots - Invalidate all dquots on the list
dqput - Put reference to dquot
dqget - Get reference to dquot. Locking is slightly tricky here. We are guarded from parallel quotaoff() destroying our dquot by: a) checking for quota flags under dq_list_lock and b) getting a reference to dquot before we release dq_list_lock
add_dquot_ref - This routine is guarded by s_umount semaphore
atomic_fetch_add_unless - add unless the number is already a given value. @v: pointer of type atomic_t. @a: the amount to add to v... @u: ...unless v is equal to u. Atomically adds @a to @v, so long as @v was not already @u. Returns original value of @v
atomic_inc_unless_negative
atomic_dec_unless_positive
atomic_dec_if_positive
atomic_long_read
static_key_count
static_key_enable
static_key_disable
osq_is_locked
mm_tlb_flush_pending
mm_tlb_flush_nested
PageTransCompoundMap - PageTransCompoundMap is the same as PageTransCompound, but it also guarantees the primary MMU has the entire compound page mapped through pmd_trans_huge, which in turn guarantees the secondary MMUs can also map the entire compound page
refcount_read - get a refcount's value. @r: the refcount. Return: the refcount's value
proc_sys_poll_event
page_ref_count
page_count
get_io_context_active - Get an active reference to the I/O context
ioc_task_link
mapping_writably_mapped - Might pages of this file have been modified in userspace? Note that i_mmap_writable counts all VM_SHARED vmas: do_mmap_pgoff marks vma as VM_SHARED if it is shared, and the file was opened for writing i
inode_is_open_for_write
i_readcount_dec
compound_mapcount
page_mapcount
skb_cloned - Is the skb a clone?
skb_header_cloned - Is the skb header a clone?
rt_genid_ipv4
fnhe_genid
blk_cgroup_congested
blkcg_unuse_delay
blkcg_clear_delay
rht_grow_above_75 - returns true if nelems > 0.75 * table-size. @ht: hash table. @tbl: current table
rht_shrink_below_30 - returns true if nelems < 0.3 * table-size. @ht: hash table. @tbl: current table
rht_grow_above_100 - returns true if nelems > table-size. @ht: hash table. @tbl: current table
rht_grow_above_max - Table overflow
sk_rcvqueues_full - Take into account size of receive queue and backlog queue. Do not take into account this skb truesize, to allow even a single big packet to come.
sk_rmem_alloc_get - Returns read allocations
sock_skb_set_dropcount
reqsk_queue_len
reqsk_queue_len_young
tcp_fast_path_check
tcp_space - Note: caller must be prepared to deal with negative returns
ib_destroy_usecnt - Called during destruction to check the usecnt. @usecnt: The usecnt atomic. @why: remove reason. @uobj: The uobject that is destroyed. Non-zero usecnts will block destruction unless destruction was triggered by a ucontext cleanup.
sbq_index_atomic_inc
sbq_wait_ptr - Get the next wait queue to use for a &struct sbitmap_queue. @sbq: Bitmap queue to wait on. @wait_index: A counter per "user" of @sbq.
queued_fetch_set_pending_acquire
queued_spin_is_locked - is the spinlock locked? @lock: Pointer to queued spinlock structure. Return: 1 if it is locked, 0 otherwise
queued_spin_value_unlocked - is the spinlock structure unlocked? @lock: queued spinlock structure. Return: 1 if it is unlocked, 0 otherwise. N
queued_spin_is_contended - check if the lock is contended. @lock: Pointer to queued spinlock structure. Return: 1 if lock contended, 0 otherwise
queued_spin_trylock - try to acquire the queued spinlock. @lock: Pointer to queued spinlock structure. Return: 1 if lock acquired, 0 if failed
queued_read_trylock - try to acquire read lock of a queue rwlock. @lock: Pointer to queue rwlock structure. Return: 1 if lock acquired, 0 if failed
queued_write_trylock - try to acquire write lock of a queue rwlock. @lock: Pointer to queue rwlock structure. Return: 1 if lock acquired, 0 if failed
rcu_check_gp_start_stall - This function checks for grace-period requests that fail to motivate RCU to come out of its idle mode.
wbt_inflight
selinux_xfrm_enabled
xfrm_state_kern
dqgrab
dquot_is_busy
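Most of the callers listed above use atomic_read() in the same way: take a lockless snapshot of a counter and branch on the value. The fragment below is a hypothetical, minimal illustration of that pattern; the counter and helper names are invented for this example and do not come from the kernel source. Note that atomic_read() by itself implies no memory ordering, so callers that need ordering must add barriers.

    #include <linux/atomic.h>

    /* Hypothetical example: a module-local counter sampled locklessly,
     * in the style of the callers listed above. */
    static atomic_t example_users = ATOMIC_INIT(0);

    static bool example_has_users(void)
    {
            /* Plain snapshot of the counter; no ordering is implied. */
            return atomic_read(&example_users) != 0;
    }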