Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/spinlock.h    Create Date: 2022-07-27 06:39:17
Last Modified: 2020-03-12 14:18:49    Copyright © Brick

Function name: spin_lock (acquire a spinlock)

Function prototype: static __always_inline void spin_lock(spinlock_t *lock)

Return type: void

Parameters:

Type          Parameter name
spinlock_t *  lock
338  raw_spin_lock(&lock->rlock)
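
Usage illustration (a minimal sketch with the hypothetical names example_lock and example_counter, not taken from the kernel sources indexed by this report): every caller listed below pairs spin_lock() with a matching spin_unlock() around a short, non-sleeping critical section.

    #include <linux/spinlock.h>

    /* Hypothetical example data; each real caller protects its own structure. */
    static DEFINE_SPINLOCK(example_lock);
    static unsigned long example_counter;

    static void example_increment(void)
    {
            spin_lock(&example_lock);      /* expands to raw_spin_lock(&example_lock.rlock) */
            example_counter++;             /* critical section: must not sleep */
            spin_unlock(&example_lock);
    }
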
Callers
Name    Description
_atomic_dec_and_lockThis is an implementation of the notion of "decrement a* reference count, and return locked if it decremented to zero"
kobj_kset_joinadd the kobject to its kset's list
kobj_kset_leavemove the kobject from its kset's list
kset_find_objkset_find_obj() - Search for object in kset.*@kset: kset we're looking in.*@name: object's name.* Lock kset via @kset->subsys, and iterate over @kset->list,* looking for a matching kobject. If matching object is found
kobj_ns_type_register
kobj_ns_type_registered
kobj_ns_current_may_mount
kobj_ns_grab_current
kobj_ns_netlink
kobj_ns_initial
kobj_ns_drop
add_head
add_tail
klist_add_behind add and initialize a klist_node after the current node
klist_add_before add and initialize a klist_node before the current node
klist_release
klist_put
klist_remove decrement the node's reference count and wait for it to be removed
lockref_get unconditionally increment the reference count
lockref_get_not_zero increment unless the count is 0 or dead
lockref_put_not_zerolockref_put_not_zero - Decrements count unless count <= 1 before decrement*@lockref: pointer to lockref structure* Return: 1 if count updated successfully or 0 if count would become zero
lockref_get_or_lock increment unless the count is 0 or dead
lockref_put_or_lock decrement if the count is greater than 0
lockref_get_not_dead increment the count unless the ref is dead
rhashtable_rehash_table
rhashtable_walk_enter - Initialise an iterator*@ht: Table to walk over*@iter: Hash table Iterator* This function prepares a hash table walk.* Note that if you restart a walk after rhashtable_walk_stop you* may see the same object twice
rhashtable_walk_exit free the iterator
rhashtable_walk_start_check - Start a hash table walk*@iter: Hash table iterator* Start a hash table walk at the current iterator position. Note that we take* the RCU lock in all cases including when we return an error. So you must
rhashtable_walk_stop finish a hash table walk
refcount_dec_and_lock - return holding spinlock if able to decrement* refcount to 0*@r: the refcount*@lock: the spinlock to be locked* Similar to atomic_dec_and_lock(), it will WARN on underflow and fail to* decrement when saturated at REFCOUNT_SATURATED
kunit_alloc_and_get_resource
kunit_resource_remove
kunit_cleanup
string_stream_vadd
string_stream_clear
string_stream_get_string
gen_pool_add_ownergen_pool_add_owner- add a new chunk of special memory to the pool*@pool: pool to add new memory chunk to*@virt: virtual starting address of memory chunk to add to pool*@phys: physical starting address of memory chunk to add to pool*@size: size in bytes of
textsearch_register - register a textsearch module*@ops: operations lookup table* This function must be called by textsearch modules to announce* their presence
textsearch_unregister - unregister a textsearch module*@ops: operations lookup table* This function must be called by textsearch modules to announce* their disappearance for examples when the module gets unloaded
machine_real_restart
queue_event
suspend
do_release
do_open
__mmput
copy_fs copy the filesystem information
copy_process create a new process
ksys_unshare unshare allows a process to 'unshare' part of the process* context which was originally shared using clone. copy_** functions used by do_fork() cannot be used here directly* because they modify an inactive task_struct that is being* constructed
do_oops_enter_exitIt just happens that oops_enter() and oops_exit() are identically* implemented...
__exit_signalThis function expects the tasklist_lock write-locked.
free_resource
alloc_resource
__ptrace_unlink__ptrace_unlink - unlink ptracee and restore its execution state*@child: ptracee to be unlinked* Remove @child from the ptrace list, move it back to the original parent,* and restore the execution state so that it conforms to the group stop* state
ptrace_attach
ignoring_childrenCalled with irqs disabled, returns true if childs should reap themselves.
prctl_set_mm
call_usermodehelper_exec_asyncThis is the task which runs the usermode application
proc_cap_handler
try_to_grab_pending - steal work item from worklist and disable irq*@work: work item to steal*@is_dwork: @work is a delayed_work*@flags: place to store irq state* Try to grab PENDING bit of @work. This function can handle @work in any
__queue_work
pool_mayday_timeout
rescuer_thread - the rescuer thread function*@__rescuer: self* Workqueue rescuer thread function
kmalloc_parameter
maybe_kfree_parameterDoes nothing if parameter wasn't kmalloced above.
__kthread_create_on_node
kthreadd
__cond_resched_lock__cond_resched_lock() - if a reschedule is pending, drop the given lock,* call schedule, and on return reacquire the lock
do_wait_intrNote! These two wait functions are entered with the* case), so there is no race with testing the wakeup* condition in the caller before they add the wait* entry to the wake queue.
ww_mutex_set_context_fastpathAfter acquiring lock with fastpath, where we do not hold wait_lock, set ctx* and wake up any waiters so they can recheck.
__mutex_lock_commonLock a mutex (possibly interruptible), slowpath:
__mutex_unlock_slowpath
torture_spin_lock_write_lock
kcmp_epoll_target
posix_timer_add
SYSCALL_DEFINE1Delete a POSIX.1b interval timer.
run_posix_cpu_timersThis is called from the timer interrupt handler. The irq handler has* already updated our counts. We need to check if any timers fire now.* Interrupts are disabled.
double_lock_hbExpress the locking dependencies for lockdep:
futex_wakeWake up waiters matching bitset queued on this futex (uaddr).
queue_lockThe key must be already stored in q->key.
unqueue_me() - Remove the futex_q from its futex_hash_bucket*@q: The futex_q to unqueue* The q->lock_ptr must not be held by the caller
fixup_pi_state_owner
futex_lock_piUserspace tried a 0 -> TID atomic transition of the futex value* and failed. The kernel side here does the whole locking operation:* if there are waiters then it will block as a consequence of relying* on rt-mutexes, it does PI, etc
futex_unlock_piUserspace attempted a TID -> 0 atomic transition, and failed.* This is the in-kernel slowpath: we look up the PI state (if any),* and do the rt-mutex unlock.
futex_wait_requeue_pi() - Wait on uaddr and take uaddr2*@uaddr: the futex we initially wait on (non-pi)*@flags: futex flags (FLAGS_SHARED, FLAGS_CLOCKRT, etc
cgroup_post_fork - called on a new task after adding it to the task list*@child: the task in question* Adds the task to the list running through its css_set if necessary and* call the subsystem fork() callbacks
cgroup_release_agent_write
cgroup_release_agent_show
cgroup1_show_options
cgroup1_reconfigure
cgroup_leave_frozenConditionally leave frozen/stopped state
fmeter_markeventProcess any previous ticks, then bump cnt by one (times scale).
fmeter_getrateProcess any previous ticks, then return current value.
untag_chunk
create_chunkCall with group->mark_mutex held, releases it
tag_chunk the first tagged inode becomes root of tree
prune_tree_chunksRemove tree from chunks. If 'tagged' is set, remove tree only from tagged* chunks. The function expects tagged chunks are all at the beginning of the* chunks list.
trim_marked trim the uncommitted chunks from tree
audit_remove_tree_rule called with audit_filter_mutex
audit_trim_trees
audit_add_tree_rule called with audit_filter_mutex
audit_tag_tree
evict_chunkHere comes the stuff asynchronous to auditctl operations
audit_tree_freeing_mark
kcov_remote_reset
kcov_task_exit
kcov_mmap
kcov_ioctl_locked
kcov_ioctl
kcov_remote_startkcov_remote_start() and kcov_remote_stop() can be used to annotate a section* of code in a kernel background thread to allow kcov to be used to collect* coverage from that part of code
kcov_remote_stopSee the comment before kcov_remote_start() for usage details.
kgdb_register_io_modulekgdb_register_io_module - register KGDB IO module*@new_dbg_io_ops: the io ops vector* Register it with the KGDB core.
kgdb_unregister_io_module - unregister KGDB IO module*@old_dbg_io_ops: the io ops vector* Unregister it with the KGDB core.
remove_event_file_dir
bpf_task_fd_query
dev_map_alloc
dev_map_free
bq_flush_to_queue
find_uprobeFind a uprobe corresponding to a given inode:offset* Acquires uprobes_treelock
insert_uprobeAcquire uprobes_treelock.* Matching uprobe already exists in rbtree;* increment (access refcount) and return the matching uprobe.* No matching uprobe; insert the uprobe in rb_tree;* get a double refcount (access + creation) and return NULL.
delete_uprobeThere could be threads that have already hit the breakpoint. They* will recheck the current insn and restart if find_uprobe() fails.* See find_active_uprobe().
build_probe_listFor a given range in vma, build a list of probes that need to be inserted.
vma_has_uprobes
padata_parallel_worker
padata_do_parallelpadata_do_parallel - padata parallelization function*@ps: padatashell*@padata: object to be parallelized*@cb_cpu: pointer to the CPU that the serialization callback function should* run on. If it's not in the serial cpumask of @pinst* (i
padata_find_nextpadata_find_next - Find the next object that needs serialization
padata_reorder
padata_serial_worker
padata_do_serialpadata_do_serial - padata serialization function*@padata: object to be serialized.* padata_do_serial must be called for every parallelized object.* The serialization callback function will run with BHs off.
file_check_and_advance_wb_err - report wb error (if any) that was previously* and advance wb_err to current one*@file: struct file on which the error is being reported* When userland calls fsync (or something like nfsd does the equivalent), we* want to
oom_reaper
wake_oom_reaper
generic_fadvisePOSIX_FADV_WILLNEED could set PG_Referenced, and POSIX_FADV_NOREUSE could* deactivate the pages and clear PG_Referenced.
domain_update_bandwidth
balance_dirty_pages() must be called by processes which are generating dirty* data
get_cmdlineget_cmdline() - copy the cmdline value to a buffer.*@task: the task whose cmdline value to copy.*@buffer: the buffer to copy to.*@buflen: the length of the buffer. Larger cmdline values are truncated* to this length.
list_lru_add
list_lru_del
list_lru_walk_one
list_lru_walk_node
__pte_alloc_kernel
copy_one_pte copy one vm_area from one task to the other. Assumes the page tables* already present in the new task to be cleared in the whole range* covered by this vma.
do_numa_page
handle_pte_faultThese routines also need to handle stuff like marking pages dirty* and/or accessed for architectures that don't do it in hardware (most* RISC architectures)
user_shm_lock
user_shm_unlock
expand_downwardsvma is the first one with address < vma->vm_start. Have to extend vma.
map_pte
page_vma_mapped_walkpage_vma_mapped_walk - check if @pvmw->page is mapped in @pvmw->vma at*@pvmw->address*@pvmw: pointer to struct page_vma_mapped_walk. page, vma, address and flags* must be set. pmd, pte and ptl must be NULL.* Returns true if the page is mapped in the vma
__anon_vma_prepare__anon_vma_prepare - attach an anon_vma to a memory region*@vma: the memory region in question* This makes sure the memory mapping described by 'vma' has* an 'anon_vma' attached to it, so that we can associate the
try_to_unmap_one@arg: enum ttu_flags will be passed to this argument
free_vmap_areaFree a region of KVA allocated by alloc_vmap_area
alloc_vmap_areaAllocate a region of KVA of the specified size and alignment, within the* vstart and vend.
__purge_vmap_area_lazyPurges all lazily-freed vmap areas.
free_vmap_area_noflushFree a vmap area, caller ensuring that the area has been unmapped* and flush_cache_vunmap had been called for the correct range* previously.
find_vmap_area
new_vmap_block - allocates new vmap_block and occupies 2^order pages in this* block
free_vmap_block
purge_fragmented_blocks
vb_alloc
vb_free
_vm_unmap_aliases
setup_vmalloc_vm
remove_vm_area - find and remove a continuous kernel virtual area*@addr: base address* Search for the kernel VM area starting at @addr, and remove it.* This function returns the found VM area, but using it is NOT safe
vreadvread() - read vmalloc area in a safe way.*@buf: buffer for reading data*@addr: vm address.*@count: number of bytes to be read.* This function checks that addr is a valid vmalloc'ed area, and* copy data from that area to a given buffer
vwritevwrite() - write vmalloc area in a safe way.*@buf: buffer for source data*@addr: vm address.*@count: number of bytes to be read.* This function checks that addr is a valid vmalloc'ed area, and* copy data from a buffer to the given addr
s_start
free_pcppages_bulkFrees a number of pages from the PCP lists* Assumes all pages on list are in same zone, and of same order.* count is the number of pages to free.* If the zone was previously in an "all pages pinned" state then look to
free_one_page
rmqueue_bulkObtain a specified number of elements from the buddy allocator, all under* a single hold of the lock, for efficiency. Add them to the supplied list.* Returns the number of new pages which were placed at *list.
__build_all_zonelists
setup_per_zone_wmarkssetup_per_zone_wmarks - called when min_free_kbytes changes* or when memory is hot-{added|removed}* Ensures that the watermark[min,low,high] values for each zone are set* correctly with respect to min_free_kbytes.
lock_cluster
lock_cluster_or_swap_infoDetermine the locking method in use for this device. Return* swap_cluster_info if SSD-style cluster-based locking is in place.
swap_do_scheduled_discardDoing discard actually. After a cluster discard is finished, the cluster* will be added to free cluster list. caller should hold si->lock.
swap_discard_work
del_from_avail_list
add_to_avail_list
scan_swap_map_slots
get_swap_pages
get_swap_page_of_typeThe only caller of this function is now suspend routine
swap_info_get
swap_info_get_cont
put_swap_pageCalled after dropping swapcache to decrease refcnt to swap entries.
try_to_unuseIf the boolean frontswap is true, only unuse pages_to_unuse pages;* pages_to_unuse==0 means all pages; ignored if frontswap is false
drain_mmlistAfter a successful try_to_unuse, if no swap is now in use, we know* we can empty the mmlist. swap_lock must be held on entry and exit.* Note that mmlist_lock nests inside swap_lock, and an mm must be
enable_swap_info
reinsert_swap_info
has_usable_swap
SYSCALL_DEFINE1
alloc_swap_info
SYSCALL_DEFINE2
si_swapinfo
add_swap_count_continuationadd_swap_count_continuation - called when a swap count is duplicated* beyond SWAP_MAP_MAX, it allocates a new page and links that to the entry's* page of the original vmalloc'ed swap_map, to hold the continuation count
swap_count_continuedswap_count_continued - when the original swap_map count is incremented* from SWAP_MAP_MAX, check if there is already a continuation page to carry* into, carry if so, or else fail until a new continuation page is allocated;* when the original swap_map
mem_cgroup_throttle_swaprate
frontswap_register_opsRegister operations for frontswap
frontswap_shrinkFrontswap, like a true swap device, may unnecessarily retain pages* under certain circumstances; "shrink" frontswap is essentially a* "partial swapoff" and works by calling try_to_unuse to attempt to* unuse enough frontswap pages to attempt to -- subject
frontswap_curr_pagesCount and return the number of frontswap pages across all* swap devices. This is exported so that backend drivers can* determine current usage without reading debugfs.
__zswap_pool_empty
__zswap_param_setval must be a null-terminated string
zswap_writeback_entry
zswap_frontswap_store attempts to compress and store a single page
zswap_frontswap_load returns 0 if the page was successfully decompressed* return -1 on entry not found or error
zswap_frontswap_invalidate_page frees an entry in zswap
zswap_frontswap_invalidate_area frees all zswap entries for the given swap type
hugepage_put_subpool
hugepage_subpool_get_pagesSubpool accounting for allocating and reserving pages
hugepage_subpool_put_pagesSubpool accounting for freeing and unreserving pages.* Return the number of global page reservations that must be dropped.* The return value may only be different than the passed value (delta)* in the case where a subpool minimum size must be maintained.
region_addAdd the huge page range represented by [f, t) to the reserve* map
region_chgExamine the existing reserve map and determine how many* huge pages in the specified range [f, t) are NOT currently* represented. This routine is called before a subsequent* call to region_add that will actually modify the reserve
region_abortAbort the in progress add operation. The adds_in_progress field* of the resv_map keeps track of the operations in progress between* calls to region_chg and region_add. Operations are sometimes* aborted after the call to region_chg
region_delDelete the specified range [f, t) from the reserve map. If the* t parameter is LONG_MAX, this indicates that ALL regions after f* should be deleted. Locate the regions which intersect [f, t)* and either trim, delete or split the existing regions.
region_countCount and return the number of huge pages in the reserve map* that intersect with the range [f, t).
__free_huge_page
prep_new_huge_page
dissolve_free_huge_pageDissolve a given free hugepage into free buddy pages. This function does* nothing for in-use hugepages and non-hugepages.* This function returns values like below:* -EBUSY: failed to dissolved free hugepages or the hugepage is in-use
alloc_surplus_huge_pageAllocates a fresh surplus page from the page allocator.
alloc_huge_page_nodepage migration callback function
alloc_huge_page_nodemaskpage migration callback function
gather_surplus_pagesIncrease the hugetlb pool such that it can accommodate a reservation* of size 'delta'.
alloc_huge_page
set_max_huge_pages
nr_overcommit_hugepages_store
hugetlb_overcommit_handler
hugetlb_acct_memoryForward declaration
hugetlb_cowHugetlb_cow() should be called with page lock of the original hugepage held.* Called with hugetlb_instantiation_mutex held and pte_page locked so we* cannot race with other handlers or page migration.
huge_add_to_page_cache
hugetlb_mcopy_atomic_pteUsed by userfaultfd UFFDIO_COPY. Based on mcopy_atomic_pte with* modifications for huge pages.
hugetlb_unreserve_pages
follow_huge_pmd
isolate_huge_page
putback_active_hugepage
move_hugetlb_state
mn_itree_inv_start_range
mn_itree_inv_end
mmu_interval_read_beginmmu_interval_read_begin - Begin a read side critical section against a VA* range* mmu_iterval_read_begin()/mmu_iterval_read_retry() implement a* collision-retry scheme similar to seqcount for the VA range under mni
mn_hlist_releaseThis function can't run concurrently against mmu_notifier_register* because mm->mm_users > 0 during mmu_notifier_register and exit_mmap* runs with mm_users == 0
__mmu_notifier_registerSame as mmu_notifier_register but here the caller must hold the mmap_sem in* write mode. A NULL mn signals the notifier is being registered for itree* mode.
find_get_mmu_notifier
mmu_notifier_unregisterThis releases the mm_count pin automatically and frees the mm* structure if it was the last user of it. It serializes against* running mmu notifiers with SRCU and against mmu_notifier_unregister* with the unregister lock + SRCU
mmu_notifier_putmmu_notifier_put - Release the reference on the notifier*@mn: The notifier to act on* This function must be paired with each mmu_notifier_get(), it releases the* reference obtained by the get. If this is the last reference then process
__mmu_interval_notifier_insert
mmu_interval_notifier_removemmu_interval_notifier_remove - Remove a interval notifier*@mni: Interval notifier to unregister* This function must be paired with mmu_interval_notifier_insert()
scan_get_next_rmap_item
__ksm_enter
__ksm_exit
cache_free_pfmemalloc
__drain_alien_cache
__cache_free_alien
do_drain
cache_grow_end
cache_alloc_pfmemalloc
cache_alloc_refill
____cache_alloc_node An interface to enable slab creation on nodeid
cache_flusharray
get_partial_nodeTry to allocate a partial slab from a specific node.
deactivate_slabRemove the cpu slab
__migration_entry_waitSomething used the pte of a page under migration. We need to* get to the page and wait until migration is finished.* When we return from this function the fault will be retried.
__buffer_migrate_page
do_huge_pmd_wp_page
do_huge_pmd_numa_pageNUMA hinting page fault entry point for trans huge pmds
split_huge_page_to_listThis function splits huge page into normal pages. @page can point to any* subpage of huge page to split. Split doesn't change the position of @page.* Only caller must hold pin on the @page, otherwise split fails with -EBUSY.* The huge page must be locked.
__khugepaged_enter
__khugepaged_exit
__collapse_huge_page_copy
collapse_huge_page
khugepaged_scan_mm_slot
khugepaged_do_scan
khugepaged
mem_cgroup_under_moveA routine for checking "mem" is under move_account() or not.* Checking a cgroup is mc.from or mc.to or under hierarchy of* moving cgroups. This is for waiting at high-memory pressure* caused by "move".
mem_cgroup_oom_trylockCheck OOM-Killer is already running under our hierarchy.* If someone is running, return false.
mem_cgroup_oom_unlock
mem_cgroup_mark_under_oom
mem_cgroup_unmark_under_oom
mem_cgroup_oom_notify_cb
mem_cgroup_oom_register_event
mem_cgroup_oom_unregister_event
memcg_event_wakeGets called on EPOLLHUP on eventfd when user closes it.* Called with wqh->lock held and interrupts disabled.
memcg_write_event_controlDO NOT USE IN NEW FILES.* Parse input and register new cgroup event handler.* Input must be in format ' '.* Interpretation of args is defined by control file implementation.
mem_cgroup_css_offline
mem_cgroup_clear_mc
mem_cgroup_can_attach
vmpressure_work_fn
vmpressurevmpressure() - Account memory pressure through scanned/reclaimed ratio*@gfp: reclaimer's gfp mask*@memcg: cgroup memory controller handle*@tree: legacy subtree mode*@scanned: number of pages scanned*@reclaimed: number of pages reclaimed* This function
hugetlb_cgroup_css_offlineForce the hugetlb cgroup to empty the hugetlb resources by moving them to* the parent cgroup.
hugetlb_cgroup_migratehugetlb_lock will make sure a parallel cgroup rmdir won't happen* when we migrate hugepages
zpool_register_driverzpool_register_driver() - register a zpool implementation.*@driver: driver to register
zpool_unregister_driverzpool_unregister_driver() - unregister a zpool implementation
zpool_get_driver this assumes @type is null-terminated.
zpool_create_poolzpool_create_pool() - Create a new zpool*@type: The type of the zpool to create (e.g. zbud, zsmalloc)*@name: The name of the zpool (e.g. zram0, zswap)*@gfp: The GFP flags to use when allocating the pool.*@ops: The optional ops callback.
zpool_destroy_poolzpool_destroy_pool() - Destroy a zpool*@zpool: The zpool to destroy.* Implementations must guarantee this to be thread-safe,* however only when destroying different pools. The same* pool should only be destroyed once, and should not be used
zbud_alloczbud_alloc() - allocates a region of a given size*@pool: zbud pool from which to allocate*@size: size in bytes of the desired allocation*@gfp: gfp flags used if the pool needs to grow*@handle: handle of the new allocation* This function will attempt to
zbud_freezbud_free() - frees the allocation associated with the given handle*@pool: pool in which the allocation resided*@handle: handle associated with the allocation returned by zbud_alloc()* In the case that the zbud page in which the allocation resides is
zbud_reclaim_pagezbud_reclaim_page() - evicts allocations from a pool page and frees it*@pool: pool from which a page will attempt to be evicted*@retries: number of pages on the LRU list for which eviction will* be attempted before failing* zbud reclaim is different from
zs_malloczs_malloc - Allocate block of given size from pool.*@pool: pool to allocate from*@size: size of block to allocate*@gfp: gfp flags when allocating object* On success, handle to the allocated object is returned,* otherwise 0.
zs_free
__zs_compact
z3fold_page_lockLock a z3fold page
__release_z3fold_page
release_z3fold_page_locked_list
free_pages_work
add_to_unbuddiedAdd to the appropriate unbuddied list
do_compact_page
__z3fold_alloc returns _locked_ z3fold page header or NULL
z3fold_allocz3fold_alloc() - allocates a region of a given size*@pool: z3fold pool from which to allocate*@size: size in bytes of the desired allocation*@gfp: gfp flags used if the pool needs to grow*@handle: handle of the new allocation* This function will attempt
z3fold_freez3fold_free() - frees the allocation associated with the given handle*@pool: pool in which the allocation resided*@handle: handle associated with the allocation returned by z3fold_alloc()* In the case that the z3fold page in which the allocation resides
z3fold_reclaim_pagez3fold_reclaim_page() - evicts allocations from a pool page and frees it*@pool: pool from which a page will attempt to be evicted*@retries: number of pages on the LRU list for which eviction will* be attempted before failing* z3fold reclaim is different
z3fold_page_isolate
z3fold_page_migrate
z3fold_page_putback
cma_add_to_cma_mem_list
cma_get_entry_from_list
ipc_addid - add an ipc identifier*@ids: ipc identifier set*@new: new ipc permission set*@limit: limit for the number of used ids* Add an entry 'new' to the ipc ids idr
complexmode_enterEnter the mode suitable for non-simple operations:* Caller must own sem_perm.lock.
sem_lockIf the request contains only one semaphore operation, and there are* no complex transactions pending, lock only the semaphore involved
freearyFree a semaphore set. freeary() is called with sem_ids.rwsem locked* as a writer and the spinlock for this semaphore set hold. sem_ids.rwsem* remains locked on exit.
find_alloc_undo - lookup (and if not present create) undo array*@ns: namespace*@semid: semaphore array id* The function looks up (and if not present creates) the undo structure.* The size of the undo structure depends on the size of the semaphore
exit_sem add semadj values to semaphores, free undo structures
get_ns_from_inode
mqueue_get_inode
mqueue_evict_inode
mqueue_create_attr
mqueue_read_fileThis is routine for system read from queue file
mqueue_flush_file
mqueue_poll_file
wq_sleepPuts current task to sleep. Caller must hold queue lock. After return* lock isn't held.
do_mq_timedsend
do_mq_timedreceive
do_mq_notifyNotes: the case when user wants us to deregister (with NULL as pointer)* and he isn't currently owner of notification, will be silently discarded.* It isn't explicitly defined in the POSIX.
do_mq_getsetattr
bio_alloc_rescue
punt_bios_to_rescuer
elevator_get
elv_register
elv_unregister
elevator_get_by_featuresGet the first elevator providing the features required by the request queue.* Default to "none" if no matching elevator is found.
elv_iosched_show
ioc_create_icq - create and link io_cq*@ioc: io_context of interest*@q: request_queue of interest*@gfp_mask: allocation mask* Make sure io_cq linking @ioc and @q exists
flush_busy_ctx
dispatch_rq_from_ctx
blk_mq_dispatch_wake
blk_mq_mark_tag_waitMark us waiting for a tag. For shared tags, this involves hooking us into* the tag wakeups. For non-shared tags, we can simply mark us needing a* restart. For both cases, take care to check the condition again after* marking us as waiting.
blk_mq_dispatch_rq_listReturns true if we did some work AND can potentially do more.
blk_mq_request_bypass_insertShould only be used carefully, when the caller knows we want to* bypass a potential IO scheduler on the target device.
blk_mq_insert_requests
blk_mq_hctx_notify_dead'cpu' is going away. splice any existing rq_list entries from this* software queue to the hw queue dispatch list, and ensure that it* gets run.
blk_mq_exit_hctx
blk_mq_alloc_and_init_hctx
blk_stat_add_callback
blk_stat_remove_callback
blk_stat_enable_accounting
blk_mq_sched_dispatch_requests
__blk_mq_sched_bio_merge
blk_mq_sched_bypass_insert
blk_mq_sched_insert_request
blkg_createIf @new_blkg is %NULL, this function tries to allocate a new one as* necessary using %GFP_NOWAIT. @new_blkg is always consumed on return.
blkg_destroy_all - destroy all blkgs associated with a request_queue*@q: request_queue of interest* Destroy all blkgs associated with @q.
iolatency_clear_scaling
ioc_timer_fn
ioc_pd_free
ioc_weight_write
dd_dispatch_requestOne confusing aspect here is that we get called for a specific* hardware queue, but we may return a request that is for a* different hardware queue. This is because mq-deadline has shared* state for all hardware queues, in terms of sorting, FIFOs, etc.
dd_bio_merge
dd_insert_requests
kyber_bio_merge
kyber_insert_requests
flush_busy_kcq
kyber_dispatch_request
hctx_dispatch_start
ctx_default_rq_list_start
ctx_read_rq_list_start
ctx_poll_rq_list_start
key_gc_unused_keysGarbage collect a list of unreferenced, detached keys
key_garbage_collectorReaper for unused keys.
key_user_lookupGet the key quota record for a user, allocating a new record if one doesn't* already exist.
key_alloc_serialAllocate a serial number for a key. These are assigned randomly to avoid* security issues through covert channel problems.
key_allockey_alloc - Allocate a key of the specified type.*@type: The type of key to allocate.*@desc: The key description to allow the key to be searched out.*@uid: The owner of the new key.*@gid: The group ID for the new key's group permissions.
key_payload_reservekey_payload_reserve - Adjust data quota reservation for the key's payload*@key: The key to make the reservation for
key_lookupFind a key by its serial number.
keyctl_chown_keyChange the ownership of a key* The key must grant the caller Setattr permission for this to work, though* the key need not be fully instantiated yet. For the UID to be changed, or* for the GID to be changed to a group the caller is not a member of, the
proc_keys_start
proc_key_users_start
inode_free_security
sb_finish_set_opts
inode_doinit_with_dentry
flush_unauthorized_filesDerived from fs/exec.c:flush_old_files.
selinux_inode_post_setxattr
selinux_inode_setsecurity
selinux_task_to_inode
selinux_socket_accept
selinux_inode_invalidate_secctx
tomoyo_write_log2 - Write an audit log.*@r: Pointer to "struct tomoyo_request_info".*@len: Buffer size needed for @fmt and @args.*@fmt: The printf()'s format string.*@args: va_list structure for @fmt.* Returns nothing.
tomoyo_read_log - Read an audit log.*@head: Pointer to "struct tomoyo_io_buffer".* Returns nothing.
tomoyo_write_profile - Write profile table.*@head: Pointer to "struct tomoyo_io_buffer".* Returns 0 on success, negative value otherwise.
tomoyo_supervisor - Ask for the supervisor's decision
tomoyo_find_domain_by_qid
tomoyo_read_query - Read access requests which violated policy in enforcing mode.*@head: Pointer to "struct tomoyo_io_buffer".
tomoyo_write_answer - Write the supervisor's decision.*@head: Pointer to "struct tomoyo_io_buffer".* Returns 0 on success, -EINVAL otherwise.
tomoyo_struct_used_by_io_buffer - Check whether the list element is used by /sys/kernel/security/tomoyo/ users or not.*@element: Pointer to "struct list_head".* Returns true if @element is used by /sys/kernel/security/tomoyo/ users,* false otherwise.
tomoyo_name_used_by_io_buffer - Check whether the string is used by /sys/kernel/security/tomoyo/ users or not.*@string: String to check.* Returns true if @string is used by /sys/kernel/security/tomoyo/ users,* false otherwise.
tomoyo_gc_thread - Garbage collector thread function.*@unused: Unused.* Returns 0.
tomoyo_notify_gc - Register/unregister /sys/kernel/security/tomoyo/ users.*@head: Pointer to "struct tomoyo_io_buffer".*@is_register: True if register, false if unregister.* Returns nothing.
multi_transaction_setdoes not increment @new's count
multi_transaction_read
aa_get_buffer
aa_put_buffer
destroy_buffers
update_file_ctx
revalidate_tty
yama_relation_cleanup
yama_ptracer_addyama_ptracer_add - add/replace an exception for this tracer/tracee pair*@tracer: the task_struct of the process doing the ptrace*@tracee: the task_struct of the process to be ptraced* Each tracee can have, at most, one tracer registered. Each time this
loadpin_read_file
ima_init_template_list
restore_template_fmt
generic_file_llseek_sizegeneric_file_llseek_size - generic llseek implementation for regular files*@file: file structure to seek on*@offset: file offset to seek to*@whence: type of seek*@size: max size of this file in file system*@eof: offset used for SEEK_END position* This is
put_superput_super - drop a temporary reference to superblock*@sb: superblock in question* Drops a temporary reference, frees superblock if there's no* references left.
generic_shutdown_supergeneric_shutdown_super - common helper for ->kill_sb()*@sb: superblock to kill* generic_shutdown_super() does all fs-independent work on superblock* shutdown
sget_fcsget_fc - Find or create a superblock*@fc: Filesystem context
sget find or create a superblock
__iterate_supers
iterate_supers - call function for all active superblocks*@f: function to call*@arg: argument to pass to it* Scans the superblock list and calls given function, passing it* locked superblock and given argument.
iterate_supers_type - call function for superblocks of given type*@type: fs type*@f: function to call*@arg: argument to pass to it* Scans the superblock list and calls given function, passing it* locked superblock and given argument.
__get_super
get_active_superget_active_super - get an active reference to the superblock of a device*@bdev: device to get the superblock for* Scans the superblock list and finds the superblock of the file system* mounted on the device given. Returns the superblock with an active
user_get_super
chrdev_openCalled every time a character special file is opened
cd_forget
cdev_purge
inode_add_bytes
inode_sub_bytes
inode_get_bytes
de_threadThis function makes sure the current process has its own signal table,* so that flush_signal_handlers can later reset the handlers without* disturbing other processes. (Other processes might share the signal* table via the CLONE_SIGHAND option to clone().)
check_unsafe_execdetermine how safe it is to execute the proposed program* - the caller must hold ->cred_guard_mutex to protect against* PTRACE_ATTACH or seccomp thread-sync
put_pipe_info
fifo_open
do_inode_permissionWe _really_ want to just do "generic_permission()" without* even looking at the inode->i_op values. So we keep a cache* flag in inode->i_opflags, that says "this has not special* permission function, use the fast case".
vfs_tmpfile
vfs_linkvfs_link - create a new link*@old_dentry: object to be linked*@dir: new parent*@new_dentry: where to create the new link*@delegated_inode: returns inode needing a delegation break* The caller must hold dir->i_mutex* If vfs_link discovers a delegation on
vfs_readlinkvfs_readlink - copy symlink body into userspace buffer*@dentry: dentry on which to get symbolic link*@buffer: user memory pointer*@buflen: size of buffer* Does not touch atime. That's up to the caller if necessary* Does not call security hook.
setfl
fcntl_rw_hint
fasync_remove_entryRemove a fasync entry. If successfully removed, return* positive and clear the FASYNC flag. If no entry exists,* do nothing and return 0.* NOTE! It is very important that the FASYNC flag always* match the state "is the filp on a fasync list".
fasync_insert_entryInsert a new entry into the fasync list. Return the pointer to the* old one if we didn't use the new one.* NOTE! It is very important that the FASYNC flag always* match the state "is the filp on a fasync list".
ioctl_fionbio
take_dentry_name_snapshot
d_drop
__dentry_kill
__lock_parent
dentry_killFinish off a dentry we've decided to kill.* Returns dentry requiring refcount drop, or NULL if we're done.
fast_dputTry to do a lockless dput(), and return whether that was successful
dget_parent
d_find_any_aliasd_find_any_alias - find any alias for a given inode*@inode: inode to find an alias for* If any aliases exist for the given inode, take and return a* reference for one of them. If no aliases exist, return %NULL.
__d_find_aliasd_find_alias - grab a hashed alias of inode*@inode: inode in question* If inode has a hashed alias, or is a directory and has any alias,* acquire the reference to alias and return it
d_find_alias get a hashed alias of the inode
d_prune_aliasesTry to kill dentries associated with this inode.* WARNING: you must own a reference to inode.
shrink_lock_dentryLock a dentry from shrink list
shrink_dentry_list
d_walkd_walk - walk the dentry tree*@parent: start of walk*@data: data passed to @enter() and @finish()*@enter: callback when first entering the dentry* The @enter() callbacks are called with d_lock held.
d_set_mountedCalled by mount code to set a mountpoint and check if the mountpoint is* reachable (e.g. NFS can unhash a directory dentry and then the complete* subtree can become unreachable).* Only one of d_invalidate() and d_set_mounted() must succeed. For
shrink_dcache_parent shrink the dentry cache
d_invalidate invalidate a dentry
d_alloc allocate a dentry
d_set_fallthrud_set_fallthru - Mark a dentry as falling through to a lower layer*@dentry - The dentry to mark* Mark a dentry as falling through to the lower layer (as set with* d_pin_lower()). This flag may be recorded on the medium.
__d_instantiate
d_instantiate fill in inode information for a dentry
d_instantiate_newThis should be equivalent to d_instantiate() + unlock_new_inode(),* with lockdep-related part of unlock_new_inode() done before* anything else. Use that instead of open-coding d_instantiate()/* unlock_new_inode() combinations.
__d_instantiate_anon
__d_lookup__d_lookup - search for a dentry (racy)*@parent: parent dentry*@name: qstr of name we wish to find* Returns: dentry, or NULL* __d_lookup is like d_lookup, however it may (rarely) return a* false-negative result due to unrelated rename activity
d_delete delete a dentry
d_rehash add a dentry to the hash table
d_wait_lookup
d_alloc_parallel
__d_add
d_add add a dentry to the hash queues
d_exact_aliasd_exact_alias - find and hash an exact unhashed alias*@entry: dentry to add*@inode: The inode to go with this dentry* If an unhashed dentry with the same name/parent and desired* inode already exists, hash and return it. Otherwise, return* NULL.
__d_move__d_move - move a dentry*@dentry: entry to move*@target: new dentry*@exchange: exchange the two dentries* Update the dcache to reflect the move of a file name
d_splice_alias link a dentry into the tree
d_tmpfile
inode_sb_list_add - add inode to the superblock list of inodes*@inode: inode to add
inode_sb_list_del
__insert_inode_hash insert an inode into the hash table
__remove_inode_hash__remove_inode_hash - remove an inode from the hash*@inode: inode to unhash* Remove an inode from the superblock.
evictFree the inode passed in, removing it from the lists it is still connected* to
evict_inodes - evict all evictable inodes for a superblock*@sb: superblock to operate on* Make sure that no inodes with zero refcount are retained
invalidate_inodes - attempt to free all inodes on a superblock*@sb: superblock to operate on*@kill_dirty: flag to guide handling of dirty inodes* Attempts to free all inodes for a given superblock. If there were any
inode_lru_isolateIsolate the inode from the LRU in preparation for freeing it
find_inodeCalled with the inode lock held.
find_inode_fast is the fast path version of find_inode, see the comment at* iget_locked for details.
new_inode_pseudo - obtain an inode*@sb: superblock* Allocates a new inode for given superblock.* Inode wont be chained in superblock s_inodes list* This means :* - fs can't be unmount* - quotas, fsnotify, writeback can't work
unlock_new_inode - clear the I_NEW state and wake up any waiters*@inode: new inode to unlock* Called when the inode is fully initialised to clear the new state of the* inode and wake up anyone waiting for the inode to finish initialisation.
discard_new_inode
inode_insert5 - obtain an inode from a mounted file system*@inode: pre-allocated inode to use for insert to cache*@hashval: hash value (usually inode number) to get*@test: callback used for comparisons between inodes*@set: callback used to initialize a new
iget_locked obtain an inode from a mounted file system
test_inode_iuniquesearch the inode cache for a matching inode number.* If we find one, then the inode number we are trying to* allocate is not unique and so we should not use it.* Returns 1 if the inode number is unique, 0 if it is not.
iunique get a unique inode number
igrab
ilookup5_nowait - search for an inode in the inode cache*@sb: super block of file system to search*@hashval: hash value (usually inode number) to search for*@test: callback used for comparisons between inodes*@data: opaque data pointer to pass to @test
ilookup search for an inode in the inode cache
find_inode_nowait - find an inode in the inode cache*@sb: super block of file system to search*@hashval: hash value (usually inode number) to search for*@match: callback used for comparisons between inodes*@data: opaque data pointer to pass to @match* Search
insert_inode_locked
iput_finalCalled when we're dropping the last reference* to an inode
__wait_on_freeing_inode
expand_fdtableExpand the file descriptor table.* This function will allocate a new fdtable and both fd array and fdset, of* the given size.* Return <0 error code on error; 1 on successful completion.
expand_filesExpand files.* This function will expand the file structures, if the requested size exceeds* the current capacity and there is room for expansion.* Return <0 error code on error; 0 when nothing done; 1 when files were
dup_fdAllocate a new files structure and copy contents from the* passed in files structure.* errorp will be valid only when the returned files_struct is NULL.
__alloc_fdallocate a file descriptor, mark it busy.
put_unused_fd
__fd_installInstall a file pointer in the fd array.* The VFS is full of places where we drop the files lock between* setting the open_fds bitmap and installing the file in the file* array. At any such point, we are vulnerable to a dup2() race
__close_fdThe same warnings as for __alloc_fd()/__fd_install() apply here...
__close_fd_get_filevariant of __close_fd that gets a ref on the file for later fput
do_close_on_exec
set_close_on_execWe only lock f_pos if we have threads or if the file might be* shared with another process. In both cases we'll have an elevated* file count (done either by fdget() or by fork()).
replace_fd
ksys_dup3
iterate_fd
__put_mountpointvfsmount lock must be held. Additionally, the caller is responsible* for serializing calls for given disposal list.
simple_xattr_getxattr GET operation for in-memory/pseudo filesystems
simple_xattr_setsimple_xattr_set - xattr SET operation for in-memory/pseudo filesystems*@xattrs: target simple_xattr list*@name: name of the extended attribute*@value: value of the xattr
simple_xattr_listxattr LIST operation for in-memory/pseudo filesystems
simple_xattr_list_addAdds an extended attribute to the list
scan_positives Returns an element of siblings' list.* We are looking for <count>th positive after <p>; if* found, dentry is grabbed and returned to caller.* If no such element exists, NULL is returned.
dcache_dir_lseek
dcache_readdirDirectory is locked and all positive dentries in it are safe, since* for ramfs-type trees they can't go away without unlink() or rmdir(),* both impossible due to the lock on directory.
simple_empty
simple_pin_fs
simple_release_fs
simple_transaction_get
locked_inode_to_wb_and_lock_list
inode_to_wb_and_lock_list
__inode_wait_for_writebackWait for writeback on an inode to complete. Called with i_lock held.* Caller must make sure inode cannot go away when we drop i_lock.
inode_wait_for_writebackWait for writeback on an inode to complete. Caller must have inode pinned.
__writeback_single_inodeWrite out an inode and its dirty pages. Do not update the writeback list* linkage. That is left to the caller. The caller is also responsible for* setting I_SYNC flag and calling inode_sync_complete() to clear it.
writeback_single_inodeWrite out an inode's dirty pages. Either the caller has an active reference* on the inode or the inode has I_WILL_FREE set.* This function is designed to be called for writing back one inode which* we go e
writeback_sb_inodesWrite a portion of b_io inodes which belong to @sb.* Return the number of pages and/or inodes written.* NOTE! This is called with wb->list_lock held, and will* unlock and relock that for each inode it ends up doing* IO for.
writeback_inodes_wb
wb_writebackExplicit flushing or periodic writeback of "old" data
block_dump___mark_inode_dirty
__mark_inode_dirty__mark_inode_dirty - internal function*@inode: inode to mark*@flags: what kind of dirty (i
wait_sb_inodesThe @s_sync_lock is used to serialise concurrent sync operations* to avoid lock contention problems with concurrent wait_sb_inodes() calls.* Concurrent callers will block on the s_sync_lock rather than doing contending* walks
fsstack_copy_inode_sizedoes _NOT_ require i_mutex to be held.* This function cannot be inlined since i_size_{read,write} is rather* heavy-weight on 32-bit systems
set_fs_rootReplace the fs->{rootmnt,root} with {mnt,dentry}. Put the old values.* It can block.
set_fs_pwdReplace the fs->{pwdmnt,pwd} with {mnt,dentry}. Put the old values.* It can block.
chroot_fs_refs
exit_fs
copy_fs_struct
unshare_fs_struct
pin_remove
pin_insert
__find_get_block_slowVarious filesystems appear to want __find_get_block to be non-blocking
osync_buffers_list osync is designed to support O_SYNC io
mark_buffer_dirty_inode
__set_page_dirty_buffersAdd a page to the dirty page list.* It is a sad fact of life that this function is called from several places* deeply under spinlocking. It may not sleep.* If the page has buffers, the uptodate buffers are set dirty, to preserve
fsync_buffers_list
invalidate_inode_buffersInvalidate any and all dirty buffers on a given inode. We are* probably unmounting the fs, but that doesn't mean we have already* done a sync(). Just drop the buffers from the inode list.* NOTE: we take the inode's blockdev's mapping's private_lock. Which
remove_inode_buffersRemove any clean buffers from the inode's buffer list. This is called* when we're trying to free the inode itself. Those buffers can pin it.* Returns true if all buffers were removed.
grow_dev_pageCreate the page-cache page that contains the requested block.* This is used purely for blockdev mappings.
__bforget bforget() is like brelse(), except it discards any* potentially dirty data.
create_empty_buffersWe attach and possibly dirty the buffers atomically wrt* __set_page_dirty_buffers() via private_lock. try_to_free_buffers* is already excluded via the page lock.
attach_nobh_buffersAttach the singly-linked list of buffers created by nobh_write_begin, to* the page (converting it to circular linked list and taking care of page* dirty races).
try_to_free_buffers
bdev_write_inode
bdev_evict_inode
bdget
nr_blockdev_pages
bd_acquire
bd_forgetCall when you free inode
bd_prepare_to_claim - prepare to claim a block device*@bdev: block device of interest*@whole: the whole device containing @bdev, may equal @bdev*@holder: holder trying to claim @bdev* Prepare to claim @bdev
bd_start_claiming - start claiming a block device*@bdev: block device of interest*@holder: holder trying to claim @bdev*@bdev is about to be opened exclusively
bd_finish_claiming - finish claiming of a block device*@bdev: block device of interest*@whole: whole block device (returned from bd_start_claiming())*@holder: holder that has claimed @bdev* Finish exclusive open of a block device
bd_abort_claiming - abort claiming of a block device*@bdev: block device of interest*@whole: whole block device (returned from bd_start_claiming())*@holder: holder that has claimed @bdev* Abort claiming of a block device when the exclusive open failed
blkdev_put
iterate_bdevs
fsnotify_unmount_inodes - an sb is unmounting. handle any watched inodes.*@sb: superblock being unmounted.* Called during unmount with no locks held, so needs to be safe against* concurrent modifiers. We temporarily drop sb->s_inode_list_lock and CAN block.
__fsnotify_update_child_dentry_flagsGiven an inode, first check if we care what happens to our children. Inotify* and dnotify both tell their parents about events. If we care about any event* on a child we run all of our children and set a dentry flag saying that the* parent cares
fsnotify_destroy_event
fsnotify_add_eventAdd an event to the group notification queue
fsnotify_flush_notifyCalled when a group is being torn down to clean up any outstanding* event notifications.
fsnotify_group_stop_queueingStop queueing new events for this group. Once this function returns* fsnotify_add_event() will not add any new events to the group's queue.
fsnotify_recalc_maskCalculate mask of events for a list of marks. The caller must make sure* connector and connector->obj cannot disappear under us. Callers achieve* this by holding a mark->lock or mark->group->mark_mutex for a mark on this* list.
fsnotify_connector_destroy_workfn
fsnotify_put_mark
fsnotify_get_mark_safeGet mark reference when we found the mark via lockless traversal of object* list. Mark can be already removed from the list by now and on its way to be* destroyed once SRCU period ends.* Also pin the group so it doesn't disappear under us.
fsnotify_detach_markMark mark as detached, remove it from group list
fsnotify_free_markFree fsnotify mark
fsnotify_grab_connectorGet mark connector, make sure it is alive and return with its lock held.* This is for users that get connector pointer from inode or mount. Users that* hold reference to a mark on the list may directly lock connector->lock as
fsnotify_add_mark_listAdd mark into proper place in given list of marks. These marks may be used* for the fsnotify backend to determine which event types should be delivered* to which group and for which inodes. These marks are ordered according to
fsnotify_add_mark_lockedAttach an initialized mark to a given group and fs object.* These marks may be used for the fsnotify backend to determine which* event types should be delivered to which group.
fsnotify_destroy_marksDestroy all marks attached to an object via connector
fsnotify_mark_destroy_workfn
dnotify_handle_event Main fsnotify call where events are delivered to dnotify.* Find the dnotify mark on the relevant inode, run the list of dnotify structs* on that mark and determine which of them has expressed interest in receiving* events of this type
dnotify_flushCalled every time a file is closed. Looks first for a dnotify mark on the* inode. If one is found run all of the ->dn structures attached to that* mark for one relevant to this process closing the file and remove that* dnotify_struct
fcntl_dirnotifyWhen a process calls fcntl to attach a dnotify watch to a directory it ends* up here. Allocate both a mark for fsnotify to add and a dnotify_struct to be* attached to the fsnotify_mark.
inotify_poll userspace file descriptor functions
inotify_read
inotify_ioctl
inotify_add_to_idr
inotify_idr_find
inotify_remove_from_idrRemove the mark from the idr (if present) and drop the reference* on the mark because it was in the idr.
inotify_update_existing_watch
fanotify_get_responseWait for response to permission event
get_one_eventGet an fsnotify notification event if one exists and is small* enough to fit in "count". Return an error pointer if the count* is not large enough. When permission event is dequeued, its state is* updated accordingly.
process_access_response
fanotify_poll userspace file descriptor functions
fanotify_read
fanotify_release
fanotify_ioctl
fanotify_mark_remove_from_mask
fanotify_mark_add_to_mask
ep_removeRemoves a "struct epitem" from the eventpoll RB tree and deallocates* all the associated resources. Must be called with "mtx" held.
ep_insertMust be called with "mtx" held.
__timerfd_remove_cancel
timerfd_remove_cancel
timerfd_setup_cancel
userfaultfd_ctx_read
put_aio_ring_file
aio_ring_mremap
aio_migratepage
ioctx_add_table
aio_nr_sub
ioctx_alloc Allocates and initializes an ioctx. Returns an ERR_PTR if it failed.
kill_ioctxkill_ioctx* Cancels all outstanding aio requests on an aio context. Used* when the processes owning a context have all exited to encourage* the rapid destruction of the kioctx.
aio_poll_cancelassumes we are called with irqs disabled
aio_poll
io_poll_remove_one
io_poll_add
__fscrypt_prepare_lookup
evict_dentries_for_decrypted_inodes
check_for_busy_inodes
put_crypt_info
fscrypt_get_encryption_info
find_or_insert_direct_keyFind/insert the given key into the fscrypt_direct_keys table. If found, it* is returned with elevated refcount, and 'to_insert' is freed if non-NULL. If* not found, 'to_insert' is inserted and returned if it's non-NULL; otherwise* NULL is returned.
locks_move_blocks
locks_insert_global_locksMust be called with the flc_lock held!
locks_delete_global_locksMust be called with the flc_lock held!
locks_delete_blocklocks_delete_lock - stop waiting for a file lock*@waiter: the lock which was waiting* lockd/nfsd need to disconnect the lock while working on it.
locks_insert_blockMust be called with flc_lock held.
locks_wake_up_blocksWake up processes blocked waiting for blocker.* Must be called with the inode->flc_lock held!
posix_test_lock
flock_lock_inodeTry to create a FLOCK lock on filp
posix_lock_inode
__break_lease revoke all outstanding leases on the file
lease_get_mtimelease_get_mtime - update modified time of an inode with exclusive lease*@inode: the inode*@time: pointer to a timespec which contains the last modified time* This is to force NFS clients to flush their caches for files with* exclusive leases
fcntl_getlease get the current lease on the file
generic_add_lease
generic_delete_lease
fcntl_setlkApply the lock described by l to an open file descriptor.* This implements both the F_SETLK and F_SETLKW commands of fcntl().
fcntl_setlk64Apply the lock described by l to an open file descriptor.* This implements both the F_SETLK and F_SETLKW commands of fcntl().
locks_remove_leaseThe i_flctx must be valid when calling into here
locks_remove_fileThis function is called on the last close of an open file.
show_fd_locks
locks_start
mb_cache_entry_createmb_cache_entry_create - create entry in cache*@cache - cache where the entry should be created*@mask - gfp mask with which the entry should be allocated*@key - key of the entry*@value - value of the entry*@reusable - is the entry reusable by others?
mb_cache_entry_deletemb_cache_entry_delete - remove a cache entry*@cache - cache we work with*@key - key*@value - value* Remove entry from cache @cache with key @key and value @value.
mb_cache_shrink
locks_start_gracelocks_start_grace*@net: net namespace that this lock manager belongs to*@lm: who this grace period is for* A grace period is a period during which locks should not be given* out
locks_end_gracelocks_end_grace*@net: net namespace that this lock manager belongs to*@lm: who this grace period is for* Call this function to state that the given lock manager is ready to* resume regular locking. The grace period will not end until all lock
drop_pagecache_sb
get_vfsmount_from_fd
register_quota_format
unregister_quota_format
find_quota_format
dquot_mark_dquot_dirtyMark dquot dirty in atomic manner, and return it's old dirty flag state
clear_dquot_dirty
mark_info_dirty
invalidate_dquotsInvalidate all dquots on the list
dquot_scan_activeCall callback for every active dquot on given filesystem
dquot_writeback_dquotsWrite all dquot structures to quota files
dqcache_shrink_scan
dqputPut reference to dquot
dqgetGet reference to dquot* Locking is slightly tricky here. We are guarded from parallel quotaoff()* destroying our dquot by:* a) checking for quota flags under dq_list_lock and* b) getting a reference to dquot before we release dq_list_lock
add_dquot_refThis routine is guarded by s_umount semaphore
remove_inode_dquot_refRemove references to dquots from inode and add dquot to list for freeing* if we have the last reference to dquot
remove_dquot_ref
dquot_add_inodes
dquot_add_space
__dquot_initialize
__dquot_dropRelease all quotas referenced by inode.* This function only be called on inode free or converting* a file to quota file, no other users for the i_dquot in* both cases, so we needn't call synchronize_srcu() after* clearing i_dquot.
inode_get_rsv_space
__dquot_alloc_spaceThis operation can block, but only after everything is updated
dquot_alloc_inodeThis operation can block, but only after everything is updated
dquot_claim_space_nodirtyConvert in-memory reserved quotas to real consumed quotas
dquot_reclaim_space_nodirtyConvert allocated space back to in-memory reserved quotas
__dquot_free_spaceThis operation can block, but only after everything is updated
dquot_free_inodeThis operation can block, but only after everything is updated
__dquot_transferTransfer the number of inode and blocks from one diskquota to an other
dquot_disableTurn quota off on a device. type == -1 ==> quotaoff for all types (umount)
dquot_load_quota_sb
dquot_resumeReenable quotas on remount RW
dquot_quota_enable
dquot_quota_disable
do_get_dqblkGeneric routine for getting common part of quota structure
do_set_dqblkGeneric routine for setting common part of quota structure
dquot_get_stateGeneric routine for getting common part of quota file information
dquot_set_dqinfoGeneric routine for setting common part of quota file information
v1_write_file_info
v2_write_file_infoWrite information header to quota file
qtree_write_dquotWe don't have to be afraid of deadlocks as we never have quotas on quota* files...
qtree_read_dquot
alloc_dcookie
free_dcookie
write_seqlock the current CPU is responsible for updating the time
read_seqlock_exclA locking reader exclusively locks out other writers and locking readers,* but doesn't update the sequence number. Acts like a normal spin_lock/unlock.* Don't need preempt_disable() because that is in the spin_lock already.
task_lockProtects ->fs, ->files, ->mm, ->group_info, ->comm, keyring* subscriptions and synchronises with wait4(). Also used in procfs. Also* pins the final release of task.io_context. Also protects ->cpuset and* ->cgroup.subsys[]. And ->vfork_done.
dont_mount
d_lookup_done
parent_ino
pmd_lock
pud_lock
huge_pte_lock
wb_domain_size_changedwb_domain_size_changed - memory available to a wb_domain has changed*@dom: wb_domain of interest* This function should be called when the amount of memory available to*@dom has changed
__netif_tx_lock
netif_tx_lock grab the network device transmit lock
netif_addr_lock
get_fs_root
get_fs_pwd
nfs_mark_for_revalidate
exp_funnel_lockFunnel-lock acquisition for expedited grace periods
rcu_exp_wait_wakeWait for the current expedited grace period to complete, and then* wake up everyone who piggybacked on the just-completed expedited* grace period. Also update all the ->exp_seq_rq counters as needed* in order to avoid counter-wrap problems.
ptr_ring_full
ptr_ring_produceNote: resize (below) nests producer lock within consumer lock, so if you* consume in interrupt or BH context, you must disable interrupts/BH when* calling this.
ptr_ring_empty
ptr_ring_consumeNote: resize (below) nests producer lock within consumer lock, so if you* call this in interrupt or BH context, you must disable interrupts/BH when* producing.
ptr_ring_consume_batched
ptr_ring_unconsumeReturn entries into ring
ptr_ring_resizeNote: producer lock is nested within consumer lock, so if you* resize you must make sure all uses nest correctly.* In particular if you consume ring in interrupt or BH context, you must* disable interrupts/BH when doing so.
ptr_ring_resize_multipleNote: producer lock is nested within consumer lock, so if you* resize you must make sure all uses nest correctly.* In particular if you consume ring in interrupt or BH context, you must* disable interrupts/BH when doing so.
ipc_lock_object