Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source code: include/linux/list.h    Create date: 2022-07-27 06:38:25
Last modified: 2020-03-12 14:18:49    Copyright © Brick

Function name: list_del - delete a list entry

Function prototype: static inline void list_del(struct list_head *entry)

Return type: void

Parameters:

Type                  Name
struct list_head *    entry
139  Delete the list entry (unlink it from the list).
140  Set the entry's next pointer to LIST_POISON1. These are non-NULL pointers that will result in page faults under normal circumstances, used to verify that nobody uses non-initialized list entries.
141  Set the entry's prev pointer to LIST_POISON2.
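The three numbered steps above correspond to the body of list_del() in include/linux/list.h. Below is a minimal user-space sketch; the poison constants are assumed to mirror include/linux/poison.h with POISON_POINTER_DELTA == 0, and __list_del() is the internal unlink helper the kernel uses:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the kernel's struct list_head and list_del().
 * Poison values assume POISON_POINTER_DELTA == 0 (include/linux/poison.h). */

struct list_head {
	struct list_head *next, *prev;
};

#define LIST_POISON1 ((struct list_head *)0x100)
#define LIST_POISON2 ((struct list_head *)0x122)

/* Unlink the entry by making its neighbours point at each other. */
static inline void __list_del(struct list_head *prev, struct list_head *next)
{
	next->prev = prev;
	prev->next = next;
}

static inline void list_del(struct list_head *entry)
{
	__list_del(entry->prev, entry->next);	/* line 139: unlink entry     */
	entry->next = LIST_POISON1;		/* line 140: poison next ptr  */
	entry->prev = LIST_POISON2;		/* line 141: poison prev ptr  */
}
```

After list_del(), any dereference of the stale entry's pointers faults immediately instead of silently corrupting a live list, which is how accidental reuse of a deleted entry is caught.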
Callers
Name    Description
uevent_net_exit
klist_release
rhashtable_walk_exit - free iterator
rhashtable_walk_start_check - Start a hash table walk. @iter: Hash table iterator. Start a hash table walk at the current iterator position. Note that we take the RCU lock in all cases including when we return an error. So you must
test_kmod_exit
free_ptr_list
kunit_resource_remove
kunit_cleanup
string_stream_fragment_free
gen_pool_destroy - destroy the memory pool
free_rs - release a control structure that is no longer used
__irq_poll_complete - Mark this @iop as un-polled again. @iop: The parent iopoll structure. Description: See irq_poll_complete(). This function must be called with interrupts disabled.
parman_prio_item_remove
parman_prio_fini - finalizes use of parman priority chunk. @prio: parman prio structure. Note: all locking must be provided by the caller.
objagg_obj_destroy
objagg_hints_flush
list_test_list_del
list_test_list_for_each_safe
list_test_list_for_each_prev_safe
allocate_threshold_blocks
deallocate_threshold_block
domain_remove_cpu
free_all_child_rdtgrp
rmdir_all_sub - Forcibly remove all of subdirectories under root.
rdtgroup_mkdir_ctrl_mon - These are rdtgroups created under the root directory. Can be used to allocate and monitor resources.
rdtgroup_rmdir_mon
rdtgroup_ctrl_remove
alloc_rmid - As of now the RMIDs allocation is global. However we keep track of which packages the RMIDs are used to optimize the limbo list management.
dom_data_init
pseudo_lock_cstates_relax
__remove_pin_from_irq
__mmput
__exit_umh
worker_detach_from_pool() - detach a worker from its pool. @worker: worker which is attached to its pool. Undo the attaching which had been done in worker_attach_to_pool(). The caller worker shouldn't access the pool after detached except it has
maybe_kfree_parameter - Does nothing if parameter wasn't kmalloced above.
smpboot_unregister_percpu_thread - Unregister a per_cpu thread related to hotplug. @plug_thread: Hotplug thread descriptor. Stops all threads on all possible cpus.
__wake_up_common - The core wakeup function
psi_trigger_destroy
__down_common - Because this function is inlined, the 'state' parameter will be constant, and thus optimised away by the compiler. Likewise the 'timeout' parameter for the cases without timeouts.
__up
rwsem_down_read_slowpath - Wait for the read lock to be granted
rwsem_down_write_slowpath - Wait until we successfully acquire the write lock
pm_qos_flags_remove_req - Remove device PM QoS flags request. @pqf: Device PM QoS flags set to remove the request from. @req: Request to remove from the set.
pm_vt_switch_unregister - stop tracking a device's VT switching needs. @dev: device. Remove @dev from the vt switch list.
free_mem_extents - Free a list of memory extents. @list: List of extents to free.
create_mem_extents - Create a list of memory extents. @list: List to put the extents into. @gfp_mask: Mask to use for memory allocations. The extents represent contiguous ranges of PFNs.
irq_remove_generic_chip - Remove a chip. @gc: Generic irq chip holding all data. @msk: Bitmask holding the irqs to initialize relative to gc->irq_base. @clr: IRQ_* bits to clear. @set: IRQ_* bits to set. Remove up to 32 interrupts starting from gc->irq_base.
irq_domain_remove() - Remove an irq domain. @domain: domain to remove. This routine is used to remove an irq domain. The caller must ensure that all mappings within the domain have been disposed of prior to use, depending on the revmap type.
rcu_torture_pipe_update - Update all callbacks in the pipe. Suitable for synchronous grace-period primitives.
__klp_free_funcs
__klp_free_objects
klp_free_patch_start - This function implements the free operations that can be called safely under klp_mutex. The operation must be completed by calling klp_free_patch_finish() outside klp_mutex.
klp_unpatch_func
klp_patch_func
hash_bucket_del - Remove entry from a hash bucket list
__dma_entry_alloc
__clocksource_change_rating
SYSCALL_DEFINE1 - Delete a POSIX.1b interval timer.
itimer_delete - return timer owned by the process, used by exit_itimers
clockevents_notify_released - Called after a notify add to make devices available which were released from the notifier call.
clockevents_exchange_device - release and request clock devices. @old: device to release (can be NULL). @new: device to request (can be NULL). Called from various tick functions with clockevents_lock held and interrupts disabled.
kimage_free_page_list
kimage_alloc_page
put_css_set_locked
free_cgrp_cset_links
cgroup_destroy_root
cgroup_rm_cftypes_locked
css_task_iter_advance_css_set - advance a task iterator to the next css_set. @it: the iterator to advance. Advance @it to the next css_set to walk.
css_task_iter_end - finish task iteration. @it: the task iterator to finish. Finish task iteration started by css_task_iter_start().
cgroup_pidlist_destroy_work_fn
free_cg_rpool_locked
audit_del_rule - Remove an existing rule from filterlist.
update_lsm_rule
audit_free_names
audit_remove_watch
audit_update_watch - Update inode info in audit rules based on filesystem event.
audit_remove_parent_watches - Remove all watches & rules associated with a parent that is going away.
audit_remove_watch_rule
kill_rules
audit_trim_trees
audit_tag_tree
release_node - Remove node from all lists and debugfs and release associated resources. Needs to be called with node_lock held.
gcov_info_free - release memory for profiling data set duplicate. @info: profiling data set duplicate to free
kcov_remote_area_get - Must be called with kcov_remote_lock locked.
__unregister_kprobe_bottom
fei_attr_remove
relay_close - close the channel. @chan: the channel. Closes all channel buffers and frees the channel.
send_cpu_listeners - Send taskstats data in @skb to listeners registered for @cpu's exit data
add_del_listener
tracepoint_module_going
rb_allocate_pages
get_tracing_log_err
clear_tracing_err_log
__remove_instance
__unregister_trace_event - Used by module code with the trace_event_sem held for write.
unregister_stat_tracer
trace_destroy_fields
__put_system
remove_subsystem
remove_event_file_dir
event_remove
process_system_preds
del_named_trigger - delete a trigger from the named trigger list. @data: The trigger data to delete
remove_hist_vars
bpf_event_notify
__local_list_pop_free
__local_list_pop_pending
bpf_cgroup_storage_unlink
xsk_map_sock_delete
bpf_offload_dev_netdev_unregister
cgroup_bpf_release() - put references of all bpf programs and release all cgroup bpf data. @work: work structure embedded into the cgroup to modify
__cgroup_bpf_attach() - Attach the program to a cgroup, and propagate the change to descendants. @cgrp: The cgroup which descendants to traverse. @prog: A program to attach. @type: Type of attach operation. @flags: Option flags
__cgroup_bpf_detach() - Detach the program from a cgroup, and propagate the change to descendants. @cgrp: The cgroup which descendants to traverse. @prog: A program to detach or NULL. @type: Type of detach operation. Must be called with cgroup_mutex held.
perf_sched_cb_dec
perf_event_release_kernel - Kill an event dead; while event::refcount will preserve the event object, it will not preserve its functionality. Once the last 'user' gives up the object, we'll destroy the thing.
free_filters_list
perf_pmu_migrate_context
toggle_bp_slot - Add/remove the given breakpoint in our constraint table
delayed_uprobe_delete
padata_free_shell - free a padata shell. @ps: padata shell to free
torture_shuffle_task_unregister_all - Unregister all tasks, for example, at the end of the torture run.
dir_utime
read_cache_pages_invalidate_pages - release a list of pages, invalidating them first if need be
read_cache_pages - populate an address space with some pages & start reads against them. @mapping: the address_space. @pages: The address of a list_head which contains the target pages. These pages have their ->index populated and are otherwise uninitialised.
read_pages
put_pages_list() - release a list of pages. @pages: list of pages threaded on page->lru. Release a list of pages which are strung together on page.lru. Currently used by read_cache_pages() and related error recovery code.
unregister_shrinker - Remove one
shrink_page_list() - returns the number of reclaimed pages
move_pages_to_lru - This moves pages from @list to corresponding LRU list. We move them the other way if the page is referenced by one or more processes, from rmap. If the pages are mostly unmapped, the processing is fast and it is
shrink_active_list
reclaim_pages
shutdown_cache
release_freepages
split_map_pages
pgtable_trans_huge_withdraw - no "address" argument so destroys page coloring of some arch
unlink_anon_vmas
unlink_va
purge_fragmented_blocks
free_pcppages_bulk - Frees a number of pages from the PCP lists. Assumes all pages on list are in same zone, and of same order. count is the number of pages to free. If the zone was previously in an "all pages pinned" state then look to
free_unref_page_list - Free a list of 0-order pages
__rmqueue_pcplist - Remove page from the per-cpu list, caller must protect the list
free_swap_count_continuations - swapoff frees all the continuation pages appended to the swap_map, after swap_map is quiesced, before vfree'ing it.
dma_pool_create - Creates a pool of consistent memory blocks, for dma
pool_free_page
dma_pool_destroy - destroys a pool of dma memory blocks. @pool: dma pool that will be destroyed. Context: !in_interrupt(). Caller guarantees that no more memory from the pool is in use, and that nothing will try to use the pool after this call.
add_reservation_in_range - Must be called with resv->lock held. Calling this with count_only == true will count the number of pages to be added but will not modify the linked list.
region_add - Add the huge page range represented by [f, t) to the reserve map
region_del - Delete the specified range [f, t) from the reserve map. If the t parameter is LONG_MAX, this indicates that ALL regions after f should be deleted. Locate the regions which intersect [f, t) and either trim, delete or split the existing regions.
resv_map_release
__free_huge_page
free_pool_huge_page - Free huge page from pool from next node to free. Attempt to keep persistent huge pages more or less balanced over allowed nodes. Called with hugetlb_lock locked.
dissolve_free_huge_page - Dissolve a given free hugepage into free buddy pages. This function does nothing for in-use hugepages and non-hugepages. This function returns values like below: -EBUSY: failed to dissolve free hugepages or the hugepage is in-use
clear_slob_page_free
remove_node_from_stable_tree
stable_tree_search - search for page inside the stable tree. This function checks if there is a page inside the stable tree with identical content to the page that we are scanning right now
scan_get_next_rmap_item
__ksm_exit
slabs_destroy
drain_freelist
fixup_slab_list
get_valid_first_slab - Try to find non-pfmemalloc slab if needed
free_block - Caller needs to acquire correct kmem_cache_node's list_lock. @list: List of detached free slabs should be freed by caller
remove_partial
putback_movable_pages - Put previously isolated pages back onto the appropriate lists from where they were once taken off for compaction/migration. This function shall be used whenever the isolated pageset has been built from lru, balloon, hugetlbfs page
unmap_and_move - Obtain the lock on page, remove all ptes and migrate the page to the newly allocated page in newpage.
split_huge_page_to_list - This function splits huge page into normal pages. @page can point to any subpage of huge page to split. Split doesn't change the position of @page. Only caller must hold pin on the @page, otherwise split fails with -EBUSY. The huge page must be locked.
free_transhuge_page
__khugepaged_exit
collect_mm_slot
mem_cgroup_oom_unregister_event
vmpressure_unregister_event() - Unbind eventfd from vmpressure. @memcg: memcg handle. @eventfd: eventfd context that was used to link vmpressure with the @cg. This function does internal manipulations to detach the @eventfd from the vmpressure
mem_pool_alloc - Memory pool allocation and freeing. kmemleak_lock must not be held.
scan_gray_list - Scan the objects already referenced (gray objects). More objects will be referenced and, if there are no memory leaks, all the objects are scanned.
kmemleak_test_exit
zpool_unregister_driver() - unregister a zpool implementation
zpool_destroy_pool() - Destroy a zpool. @zpool: The zpool to destroy. Implementations must guarantee this to be thread-safe, however only when destroying different pools. The same pool should only be destroyed once, and should not be used
zbud_alloc() - allocates a region of a given size. @pool: zbud pool from which to allocate. @size: size in bytes of the desired allocation. @gfp: gfp flags used if the pool needs to grow. @handle: handle of the new allocation. This function will attempt to
zbud_free() - frees the allocation associated with the given handle. @pool: pool in which the allocation resided. @handle: handle associated with the allocation returned by zbud_alloc(). In the case that the zbud page in which the allocation resides is
zbud_reclaim_page() - evicts allocations from a pool page and frees it. @pool: pool from which a page will attempt to be evicted. @retries: number of pages on the LRU list for which eviction will be attempted before failing. zbud reclaim is different from
free_pages_work
z3fold_alloc() - allocates a region of a given size. @pool: z3fold pool from which to allocate. @size: size in bytes of the desired allocation. @gfp: gfp flags used if the pool needs to grow. @handle: handle of the new allocation. This function will attempt
z3fold_free() - frees the allocation associated with the given handle. @pool: pool in which the allocation resided. @handle: handle associated with the allocation returned by z3fold_alloc(). In the case that the z3fold page in which the allocation resides
balloon_page_list_enqueue() - inserts a list of pages into the balloon page list
ss_del
pipelined_send
do_msgrcv
unlink_queue
freeary - Free a semaphore set. freeary() is called with sem_ids.rwsem locked as a writer and the spinlock for this semaphore set held. sem_ids.rwsem remains locked on exit.
exit_sem - add semadj values to semaphores, free undo structures
shm_rmid
exit_shm - Locking assumes this will only be called with task == current
msg_get
mqueue_evict_inode
wq_sleep - Puts current task to sleep. Caller must hold queue lock. After return lock isn't held.
pipelined_send() - send a message directly to the task waiting in sys_mq_timedreceive() (without inserting message into a queue).
pipelined_receive() - if there is task waiting in sys_mq_timedsend() gets its message and put to the queue (we have one free place for sure).
flush_plug_callbacks
blk_mq_elv_switch_back
blkcg_css_free
bfq_idle_extract - extract an entity from the idle tree. @st: the service tree of the owning @entity. @entity: the entity being removed.
bfq_active_extract - remove an entity from the active tree. @st: the service_tree containing the tree. @entity: the entity being removed.
add_suspend_info
clean_opal_dev
key_gc_unused_keysGarbage collect a list of unreferenced, detached keys
keyring_destroy
avc_xperms_free
smack_cred_free - "free" task-level security credentials. @cred: the credentials in question
tomoyo_read_log - Read an audit log. @head: Pointer to "struct tomoyo_io_buffer". Returns nothing.
tomoyo_supervisor - Ask for the supervisor's decision
tomoyo_gc_thread - Garbage collector thread function. @unused: Unused. Returns 0.
tomoyo_notify_gc - Register/unregister /sys/kernel/security/tomoyo/ users. @head: Pointer to "struct tomoyo_io_buffer". @is_register: True if register, false if unregister. Returns nothing.
aa_get_buffer
destroy_buffers
dev_exceptions_copy - called under devcgroup_mutex
ima_delete_rules() - called to cleanup invalid in-flight policy. We don't need locking as we operate on the temp list, which is different from the active one. There is also only one user of ima_delete_rules() at a time.
init_evm
unregister_binfmt
mntput_no_expire
simple_xattr_set - xattr SET operation for in-memory/pseudo filesystems. @xattrs: target simple_xattr list. @name: name of the extended attribute. @value: value of the xattr
mpage_readpages - populate an address space with some pages & start reads against them. @mapping: the address_space. @pages: The address of a list_head which contains the target pages. These
ep_call_nested - Perform a bound (possibly) nested call, by checking that the recursion limit is not exceeded, and that the same nested call (by the meaning of same cookie) is not re-entered.
ep_unregister_pollwait - This function unregisters poll callbacks from the associated file descriptor. Must be called with "mtx" held (or "epmutex" if called from ep_free).
ep_loop_check - Performs a check to verify that adding an epoll file (@file) to another epoll file (represented by @ep) does not create closed loops or too deep chains. @ep: Pointer to the epoll private data structure.
handle_userfault - The locking rules involved in returning VM_FAULT_RETRY depending on FAULT_FLAG_ALLOW_RETRY, FAULT_FLAG_RETRY_NOWAIT and FAULT_FLAG_KILLABLE are not straightforward
dup_userfaultfd_complete
userfaultfd_unmap_complete
userfaultfd_ctx_read
aio_remove_iocb
aio_poll_wake
io_cqring_overflow_flush - Returns true if there are no backlogged entries after the flush
__io_free_req
io_iopoll_complete - Find and free completed poll iocbs
put_crypt_info
mb_cache_destroy - destroy cache. @cache: the cache to destroy. Free all entries in cache and cache itself. Caller must make sure nobody (except shrinker) can reach @cache when calling this.
iomap_next_page
remove_inuse
dcookie_exit
dcookie_unregister
list_swap - replace entry1 with entry2 and re-add entry1 at entry2's position. @entry1: the location to place entry2. @entry2: the location to place entry1
__remove_wait_queue
del_page_from_free_area
tcp_rtx_queue_unlink_and_free
resource_list_del
del_page_from_lru_list
balloon_page_delete
balloon_page_pop - remove a page from a page list. @head: pointer to list. @page: page to be removed. Caller must ensure the page is private and protect the list.
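Most of the callers above delete entries while walking a list. A minimal user-space sketch of that pattern follows; the struct item node type, the drain() helper, and the simplified list primitives are illustrative assumptions, standing in for the kernel's list_for_each_safe()-style iteration, which caches the next pointer so that list_del() on the current node cannot break the walk:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_POISON1 ((struct list_head *)0x100)
#define LIST_POISON2 ((struct list_head *)0x122)

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

static void list_del(struct list_head *entry)
{
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
	entry->next = LIST_POISON1;
	entry->prev = LIST_POISON2;
}

/* Illustrative node type embedding a list_head; not from the kernel. */
struct item {
	int val;
	struct list_head node;
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Delete every node while walking: cache ->next before list_del(),
 * which is exactly what list_for_each_safe()-style callers do. */
static int drain(struct list_head *head)
{
	struct list_head *pos = head->next, *n;
	int freed = 0;

	while (pos != head) {
		n = pos->next;		/* cached before deletion */
		list_del(pos);
		free(container_of(pos, struct item, node));
		freed++;
		pos = n;
	}
	return freed;
}
```

Without the cached next pointer, the iterator would have to read pos->next after list_del() poisoned it, faulting on LIST_POISON1; that is the failure mode the poison values are designed to expose.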