Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/asm-generic/bitops/instrumented-non-atomic.h    Create Date: 2022-07-27 06:38:11
Last Modified: 2020-03-12 14:18:49    Copyright © Brick

Function name: test_bit - Determine whether a bit is set
@nr: bit number to test
@addr: Address to start counting from

Function prototype: static inline bool test_bit(long nr, const volatile unsigned long *addr)

Return type: bool

Parameters:

Type                            Name
long                            nr
const volatile unsigned long *  addr
110  kasan_check_read(addr + BIT_WORD(nr), sizeof(long));
111  return arch_test_bit(nr, addr);
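The two numbered lines above (lines 110-111 of the source file) are the entire function: a KASAN read check on the word that holds the bit, then a call into the architecture-specific arch_test_bit(). Below is a minimal standalone C sketch of the same logic; kasan_check_read() is stubbed out, arch_test_bit() is replaced by a generic stand-in, and BIT_MASK and main are simplified illustrations rather than the kernel's definitions:

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

#define BITS_PER_LONG (sizeof(long) * CHAR_BIT)
#define BIT_WORD(nr)  ((nr) / BITS_PER_LONG)
#define BIT_MASK(nr)  (1UL << ((nr) % BITS_PER_LONG))

/* Stub: in the kernel this reports the one-word read to KASAN, so
 * out-of-bounds bitmap accesses are caught before the real read. */
static void kasan_check_read(const volatile void *addr, unsigned int size)
{
        (void)addr;
        (void)size;
}

/* Generic stand-in for the architecture-provided implementation. */
static bool arch_test_bit(long nr, const volatile unsigned long *addr)
{
        return (addr[BIT_WORD(nr)] & BIT_MASK(nr)) != 0;
}

/* Mirrors the instrumented wrapper documented above. */
static bool test_bit(long nr, const volatile unsigned long *addr)
{
        kasan_check_read(addr + BIT_WORD(nr), sizeof(long));
        return arch_test_bit(nr, addr);
}

int main(void)
{
        unsigned long bitmap[2] = { 0, 1UL << 3 };

        /* Bit 3 of the second word is overall bit BITS_PER_LONG + 3. */
        printf("%d\n", test_bit(BITS_PER_LONG + 3, bitmap)); /* prints 1 */
        printf("%d\n", test_bit(0, bitmap));                 /* prints 0 */
        return 0;
}

Note that test_bit() only instruments and reads; the callers listed below pair it with set_bit()/clear_bit() or their atomic variants.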
Callers
Name - Description
vsscanf - Unformat a buffer into a list of arguments; @buf: input buffer; @fmt: format of buffer; @args: arguments
tag_get
node_get_mark
ida_free - Release an allocated ID; @ida: IDA handle; @id: Previously allocated ID. Context: Any context.
bitmap_pos_to_ord - find ordinal of set bit at given position in bitmap; @buf: pointer to a bitmap; @pos: a bit position in @buf (0 <= @pos < @nbits); @nbits: number of valid bit positions in @buf. Map the bit at position @pos in @buf (of length @nbits) to the
bitmap_onto - translate one bitmap relative to another; @dst: resulting translated bitmap; @orig: original untranslated bitmap; @relmap: bitmap relative to which translated; @bits: number of bits in each of these bitmaps. Set the n-th bit of @dst iff there
kasan_bitops
test_rhltable
is_prime_number - test whether the given number is prime; @x: the number to test. A prime number is an integer greater than 1 that is only divisible by itself and 1
__lc_get
irq_poll_sched - Schedule a run of the iopoll handler; @iop: The parent iopoll structure. Description: Add this irq_poll structure to the pending poll list and trigger the raise of the blk iopoll softirq.
irq_poll_softirq
irq_poll_enable - Enable iopoll on this @iop; @iop: The parent iopoll structure. Description: Enable iopoll on this @iop. Note that the handler run will not be scheduled, it will only mark it as active.
objagg_tmp_graph_is_edge
check_cpu - CPU check
update_intr_gate
arch_show_interrupts - /proc/interrupts printing for arch specific interrupts
fpu__init_system_early_generic
do_clear_cpu_cap
machine_check_poll - Poll for corrected events or events that happened before reset. Those are just logged through /dev/mcelog. This is executed in standard interrupt context. Note: spec recommends to panic for fatal unsignalled errors here
mce_clear_state
__mc_scan_banks
cmci_discover - Enable CMCI (Corrected Machine Check Interrupt) for available MCE banks on this CPU. Use the algorithm recommended in the SDM to discover shared banks.
__cmci_disable_bank - Caller must hold the lock on cmci_discover_lock
rdt_bit_usage_show - Display current usage of resources. A domain is a shared resource that can now be allocated differently
avail_to_resrv_perfctr_nmi_bit - checks for a bit availability (hack for oprofile)
find_isa_irq_pin - Find the pin to which IRQ[irq] (ISA) is connected
find_isa_irq_apic
irq_polarity
irq_trigger
mp_map_pin_to_irq
IO_APIC_get_PCI_irq_vector - Find a specific PCI IRQ entry. Not an __init, possibly needed by modules
can_boost - Returns non-zero if INSN is boostable. RIP relative instructions are adjusted at copying time in 64-bit mode
is_revectored
uprobe_init_insn
print_tainted - return a string to represent the kernel taint state. For individual taint flag meanings, see Documentation/admin-guide/sysctl/kernel.rst. The string is overwritten by the next call to print_tainted(), but is always NULL terminated.
test_taint
tasklet_kill
SYSCALL_DEFINE5
flush_rcu_work - wait for a rwork to finish executing the last queueing; @rwork: the rcu work to flush. Return: %true if flush_rcu_work() waited for the work to finish execution, %false if it was already idle.
kthread_should_stop - should this kthread return now? When someone calls kthread_stop() on your kthread, it will be woken and this will return true. You should then return, and your return value will be passed through to kthread_stop().
__kthread_should_park
__kthread_parkme
kthread
kthread_unpark - unpark a thread created by kthread_create(); @k: thread created by kthread_create(). Sets kthread_should_park() for @k to return false, wakes it, and waits for it to return. If the thread is marked percpu then its
kthread_park - park a thread created by kthread_create()
wake_bit_function
__wait_on_bit - To allow interruptible waiting and asynchronous (i.e. nonblocking) waiting, the actions of __wait_on_bit() and __wait_on_bit_lock() are permitted return codes. Nonzero return codes halt waiting and return.
__wait_on_bit_lock
hlock_class
__lock_acquire - This gets called for every mutex_lock*()/spin_lock*() operation
memory_bm_test_bit
irq_finalize_oneshot - Oneshot interrupts keep the irq line masked until the threaded handler has finished. Unmask if the interrupt has not been disabled and is marked MASKED.
irq_thread - Interrupt handler thread
irq_map_generic_chip - Map a generic chip for an irq domain
module_flags_taint
cgroup_events_show
cgroup_create - The returned cgroup is fully initialized including its control mask, but it isn't associated with its kernfs_node and doesn't have the control mask applied.
cgroup_destroy_locked - the first stage of cgroup destruction; @cgrp: cgroup to be destroyed. css's make use of percpu refcnts whose killing latency shouldn't be exposed to userland and are RCU protected
cgroup_clone_children_read
cgroup1_show_options
cgroup_propagate_frozen - Propagate the cgroup frozen state upwards by the cgroup tree.
cgroup_update_frozen - Revisit the cgroup frozen state. Checks if the cgroup is really frozen and performs all state transitions.
cgroup_leave_frozen - Conditionally leave frozen/stopped state
cgroup_freezer_migrate_task - Adjust the task state (freeze or unfreeze) and revisit the state of source and destination cgroups.
cgroup_freeze
is_cpuset_online - convenient tests for these bits
is_cpu_exclusive
is_mem_exclusive
is_mem_hardwall
is_sched_load_balance
is_memory_migrate
is_spread_page
is_spread_slab
cpuset_css_online
trace_find_filtered_pid - check if a pid exists in a filtered_pid list; @filtered_pids: The list of pids to check; @search_pid: The PID to find in @filtered_pids. Returns true if @search_pid is found in @filtered_pids, and false otherwise.
prepare_uprobe
install_breakpoint
register_for_each_vma
uprobe_mmap - Called from mmap_region/vma_adjust with mm->mmap_sem acquired. Currently we ignore all errors and always return 0, the callers can't handle the failure anyway.
uprobe_munmap - Called in context of a munmap of a vma.
uprobe_dup_mmap
handle_swbp - Run handler and ask thread to singlestep. Ensure all non-fatal signals cannot interrupt thread while it singlesteps.
uprobe_pre_sstep_notifier - gets called from interrupt context as part of notifier mechanism. Set TIF_UPROBE flag and indicate breakpoint hit.
filemap_check_errors
filemap_check_and_keep_errors
wake_page_function
wait_on_page_bit_common
oom_badness - heuristic function to determine which candidate task to kill; @p: task struct of which task we should calculate; @totalpages: total present RAM allowed for page allocation. The heuristic for determining which task to kill is made to be as simple
oom_evaluate_task
oom_reap_task_mm - Reaps the address space of the given task. Returns true on success and false if none or part of the address space has been reclaimed and the caller should retry later.
oom_reap_task
task_will_free_mem - Checks whether the given task is dying or exiting and likely to release its address space. This means that all threads and processes sharing the same mm have to be killed or exiting. Caller has to make sure that task->mm is stable (hold task_lock or
shrink_page_list - returns the number of reclaimed pages
shrink_node
wb_wakeup_delayed - This function is used when the first inode for this wb is marked dirty
release_bdi
vm_lock_anon_vma
vm_lock_mapping
vm_unlock_anon_vma
vm_unlock_mapping
rmqueue - Allocate a page from the given zone. Use pcplists for order-0 allocations.
page_alloc_shuffle - Depending on the architecture, module parameter parsing may run before, or after the cache detection
shuffle_show
frontswap_register_ops - Register operations for frontswap
__frontswap_test
ksm_madvise
report_enabled
mm_get_huge_zero_page
mm_put_huge_zero_page
alloc_hugepage_direct_gfpmask - always: directly stall for all thp allocations; defer: wake kswapd and fail if not immediately available; defer+madvise: wake kswapd and directly stall for MADV_HUGEPAGE, otherwise fail if not immediately available; madvise: directly stall for
hugepage_vma_check
pagetypeinfo_showmixedcount_print
__dump_page_owner
read_page_owner
init_pages_in_zone
put_z3fold_header
free_handle
free_pages_work
z3fold_compact_page - Has to be called with lock held
do_compact_page
__z3fold_alloc - returns _locked_ z3fold page header or NULL
z3fold_free - frees the allocation associated with the given handle; @pool: pool in which the allocation resided; @handle: handle associated with the allocation returned by z3fold_alloc(). In the case that the z3fold page in which the allocation resides
z3fold_reclaim_page - evicts allocations from a pool page and frees it; @pool: pool from which a page will attempt to be evicted; @retries: number of pages on the LRU list for which eviction will be attempted before failing. z3fold reclaim is different
z3fold_map - maps the allocation associated with the given handle; @pool: pool in which the allocation resides; @handle: handle associated with the allocation to be mapped. Extracts the buddy number from handle and constructs the pointer to the
z3fold_unmap - unmaps the allocation associated with the given handle; @pool: pool in which the allocation resides; @handle: handle associated with the allocation to be unmapped
z3fold_page_isolate
chunk_map_stats - Prints out chunk state. Fragmentation is considered between the beginning of the chunk and the last allocation. All statistics are in bytes unless stated otherwise.
elevator_init_mq - For a device queue that has no required features, use the default elevator settings. Otherwise, use the first elevator available matching the required features. If no suitable elevator is found or if the chosen elevator
generic_make_request_checks
key_schedule_gc - Schedule a garbage collection run; time precision isn't particularly important
key_gc_unused_keys - Garbage collect a list of unreferenced, detached keys
key_payload_reserve - Adjust data quota reservation for the key's payload; @key: The key to make the reservation for
__key_instantiate_and_link - Instantiate a key and link it into the target keyring atomically. Must be called with the target keyring's semaphore writelocked. The target key's semaphore need not be locked as instantiation is serialised by key_construction_mutex.
key_create_or_update - Update or create and instantiate a key; @keyring_ref: A pointer to the destination keyring with possession flag; @type: The type of key; @description: The searchable description for the key.
key_invalidate - Invalidate a key; @key: The key to be invalidated. Mark a key as being invalidated and have it cleaned up immediately. The key is ignored by all searches and other operations from this point.
find_keyring_by_name - Find a keyring with the specified name
__key_link_begin - Preallocate memory so that a key can be linked into a keyring.
keyctl_revoke_key - Revoke a key. The key must grant the caller Write or Setattr permission for this to work. The key type should give up its quota claim when revoked. The key and any links to the key will be automatically garbage collected after a
keyctl_invalidate_key - Invalidate a key. The key must grant the caller Invalidate permission for this to work. The key and any links to the key will be automatically garbage collected immediately. Keys with KEY_FLAG_KEEP set should not be invalidated.
keyctl_keyring_clear - Clear the specified keyring, creating an empty process keyring if one of the special keyring IDs is used. The keyring must grant the caller Write permission and not have KEY_FLAG_KEEP set for this to work. If successful, 0 will be returned.
keyctl_keyring_unlink - Unlink a key from a keyring. The keyring must grant the caller Write permission for this to work; the key itself need not grant the caller anything. If the last link to a key is removed then that key will be scheduled for destruction.
keyctl_chown_key - Change the ownership of a key. The key must grant the caller Setattr permission for this to work, though the key need not be fully instantiated yet. For the UID to be changed, or for the GID to be changed to a group the caller is not a member of, the
keyctl_set_timeout - Set or clear the timeout on a key. Either the key must grant the caller Setattr permission or else the caller must hold an instantiation authorisation token for the key. The timeout is either 0 to clear the timeout, or a number of seconds from
lookup_user_key - Look up a key ID given us by userspace with a given permissions mask to get the key it refers to. Flags can be passed to request that special keyrings be created if referred to directly, to permit partially constructed keys to be found and to skip
call_sbin_request_key - Request userspace finish the construction of a key; execute "/sbin/request-key "
construct_key - Call out to userspace for key construction. Program failure is ignored in favour of key status.
construct_get_dest_keyring - Get the appropriate destination keyring for the request. The keyring selected is returned with an extra reference upon it which the caller must release.
request_key_auth_new - Create an authorisation token for /sbin/request-key or whoever to gain access to the caller's security data.
key_get_instantiation_authkey - Search the current process's keyrings for the authorisation key for instantiation of a key.
getoptions - can have zero or more token= options
tomoyo_read_domain - Read domain policy; @head: Pointer to "struct tomoyo_io_buffer". Caller holds tomoyo_read_lock().
tomoyo_check_acl - Do permission check; @r: Pointer to "struct tomoyo_request_info"; @check_entry: Callback function to check type specific parameters. Returns 0 on success, negative value otherwise. Caller holds tomoyo_read_lock().
tomoyo_assign_domain - Create a domain or a namespace; @domainname: The name of domain; @transit: True if transit to domain found or created. Returns pointer to "struct tomoyo_domain_info" on success, NULL otherwise. Caller holds tomoyo_read_lock().
ima_rdwr_violation_check - Only invalidate the PCR for measured files: opening a file for write when already open for read results in a time of measure, time of use (ToMToU) error; opening a file for read when already open for write,
process_measurement
ima_parse_buf - Parses lengths and data from an input buffer; @bufstartp: Buffer start address; @bufendp: Buffer end address; @bufcurp: Pointer to remaining (non-parsed) data; @maxfields: Length of fields array.
ima_update_xattr - update 'security.ima' hash value
__clear_close_on_exec
wb_wakeup
wb_queue_work
wb_start_writeback
wb_check_start_all
wb_workfn - Handle writeback of dirty data for the device backed by this bdi. Also reschedules periodically and does kupdated style flushing.
__mark_inode_dirty - internal function; @inode: inode to mark; @flags: what kind of dirty (i
buffer_io_error
io_worker_handle_work
io_wqe_worker
io_wq_create
check_file - Check if we support the binfmt; if we do, return the node, else NULL; locking is done in load_misc_binary
create_entry - This registers a new binary format; it recognises the syntax ':name:type:offset:magic:mask:interpreter:flags' where the ':' is the IFS, which can be chosen with the first char
entry_status - generic stuff
iomap_adjust_read_range - Calculate the range inside the page that we actually need to read.
iomap_iop_set_range_uptodate
iomap_is_partially_uptodate - checks whether blocks within a page are uptodate or not. Returns true if all blocks which correspond to a file portion we want to read within the page are uptodate.
iomap_writepage_map - We implement an immediate ioend submission policy here to avoid needing to chain multiple ioends and hence nest mempool allocations which can violate forward progress guarantees we need to provide
dquot_dirty
dquot_mark_dquot_dirty - Mark dquot dirty in atomic manner, and return its old dirty flag state
dquot_acquire - Read dquot from disk and alloc space for it
dquot_commit - Write dquot to disk
dquot_scan_active - Call callback for every active dquot on given filesystem
dquot_writeback_dquots - Write all dquot structures to quota files
dqput - Put reference to dquot
dqget - Get reference to dquot. Locking is slightly tricky here. We are guarded from parallel quotaoff() destroying our dquot by: a) checking for quota flags under dq_list_lock and b) getting a reference to dquot before we release dq_list_lock
dquot_add_inodes
dquot_add_space
info_idq_free
info_bdq_free
qtree_release_dquot - Check whether dquot should not be deleted. We know we are the only one operating on dquot (thanks to dq_lock)
test_bit_le
cpumask_test_cpu - test for a CPU in a cpumask
test_ti_thread_flag
PageCompound
PageLocked
PageWaiters
PageError
PageReferenced
PageDirty
PageLRU
PageActive
PageWorkingset
PageSlab
PageSlobFree
PageChecked
PagePinnedXen
PageSavePinned
PageForeign
PageXenRemapped
PageReserved
PageSwapBacked
PagePrivate - Private page markings that may be used by the filesystem that owns the page for its own purposes. PG_private and PG_private_2 cause releasepage() and co to be invoked
PagePrivate2
PageOwnerPriv1
PageWriteback - Only test-and-set exist for PG_writeback. The unconditional operators are risky: they bypass page accounting.
PageMappedToDisk
PageReclaim - PG_readahead is only used for reads; PG_reclaim is only for writes
PageReadahead
PageUnevictable
PageMlocked
PageHWPoison
PageUptodate
PageHead
PageDoubleMap - indicates that the compound page is mapped with PTEs as well as PMDs. This is required for optimization of rmap operations for THP: we can postpone per small page mapcount accounting (and its overhead from atomic operations)
PageIsolated
wait_on_bit - wait for a bit to be cleared; @word: the word being waited on, a kernel virtual address; @bit: the bit of the word being waited on; @mode: the task state to sleep in. There is a standard hashed waitqueue table for generic use (a usage sketch of this wait pattern follows at the end of this list). This
wait_on_bit_io - wait for a bit to be cleared; @word: the word being waited on, a kernel virtual address; @bit: the bit of the word being waited on; @mode: the task state to sleep in. Use the standard hashed waitqueue table to wait for a bit to be cleared
wait_on_bit_timeout - wait for a bit to be cleared or a timeout elapses; @word: the word being waited on, a kernel virtual address; @bit: the bit of the word being waited on; @mode: the task state to sleep in; @timeout: timeout, in jiffies. Use the standard
wait_on_bit_action - wait for a bit to be cleared; @word: the word being waited on, a kernel virtual address; @bit: the bit of the word being waited on; @action: the function used to sleep, which may take special actions; @mode: the task state to sleep in
info_dirty
__transparent_hugepage_enabled - to be used on vmas which are known to support THP. Use transparent_hugepage_enabled otherwise
mapping_unevictable
mapping_exiting
mapping_use_writeback_tags
blk_queue_zone_is_seq
blk_req_zone_is_write_locked
cgroup_task_freeze
close_on_exec
fd_is_open
task_no_new_privs
task_spread_page
task_spread_slab
task_spec_ssb_disable
task_spec_ssb_noexec
task_spec_ssb_force_disable
task_spec_ib_disable
task_spec_ib_force_disable
scm_recv
napi_disable_pending
napi_enable - enable NAPI scheduling
napi_synchronize - wait for outstanding NAPI runs to complete
netif_tx_queue_stopped
netif_running - test whether the device is running
netif_carrier_ok - test whether the carrier is present
netif_dormant - test whether the device is dormant
netif_device_present - is the device available, or has it been removed
sock_flag
tty_io_nonblock
tty_io_error
tty_throttled
tty_port_cts_enabled - Return true if CTS flow control is enabled.
tty_port_active
tty_port_check_carrier
tty_port_suspended
tty_port_initialized
tty_port_kopened
fscache_cookie_enabled
fscache_object_is_live
fscache_object_is_available
fscache_cache_is_broken
inet_is_local_reserved_port
sbitmap_test_bit
cpu_has_vmx - VMX functions
ksm_fork
ksm_exit
mm_is_oom_victim - Use this helper if tsk->mm != mm and the victim mm needs a special handling. This is guaranteed to stay true after once set.
check_stable_address_space - Checks whether a page fault on the given mm is still reliable
khugepaged_fork
khugepaged_exit
khugepaged_enter
xprt_connected
xprt_connecting
xprt_bound
wb_has_dirty_io
writeback_in_progress - determine whether there is writeback in progress; @wb: bdi_writeback of interest. Determine whether there is writeback waiting to be handled against a bdi_writeback.
NFS_STALE
notify_on_release
__event_trigger_test_discard - Helper function for event_trigger_unlock_commit{_regs}(). If there are event triggers attached to this event that require filtering against its fields, then they will be called as the entry already holds the field information of the current event.
blk_mq_hctx_stopped
blk_mq_sched_needs_restart
page_is_young
page_is_idle
dqgrab
dquot_is_busy
watchdog_active - check whether or not the watchdog is active
watchdog_hw_running - check whether or not the hardware watchdog is running
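Most of the callers above follow the same pattern: a cheap, lockless test_bit() on a flags word, with set_bit()/clear_bit() on the writer side and, where a sleeper is involved, one of the wait_on_bit*() helpers. A minimal kernel-style sketch of that pattern (my_flags, MY_FLAG_BUSY, my_do_work and my_wait_idle are hypothetical names, not taken from the list above):

#include <linux/bitops.h>
#include <linux/wait_bit.h>
#include <linux/sched.h>

#define MY_FLAG_BUSY 0                 /* hypothetical flag bit */

static unsigned long my_flags;         /* hypothetical flag word */

/* Writer side: claim the flag, work, then release it and wake waiters. */
static void my_do_work(void)
{
        set_bit(MY_FLAG_BUSY, &my_flags);
        /* ... do the work ... */
        clear_bit(MY_FLAG_BUSY, &my_flags);
        smp_mb__after_atomic();        /* order the clear before the wakeup */
        wake_up_bit(&my_flags, MY_FLAG_BUSY);
}

/* Reader side: lockless test_bit() first, sleep only if the bit is set. */
static int my_wait_idle(void)
{
        if (!test_bit(MY_FLAG_BUSY, &my_flags))
                return 0;              /* already idle, no sleep needed */
        return wait_on_bit(&my_flags, MY_FLAG_BUSY, TASK_UNINTERRUPTIBLE);
}

Because test_bit() here is the non-atomic, instrumented variant, it is safe for such lockless checks but relies on the caller for any ordering guarantees beyond the KASAN check itself.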