Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source code: kernel/locking/mutex.c    Create date: 2022-07-27 10:47:41
Last modified: 2020-03-12 14:18:49    Copyright © Brick

Function name: mutex_unlock - release the mutex. @lock: the mutex to be released. Unlock a mutex that has been locked by this task previously. This function must not be used in interrupt context. Unlocking of a not-locked mutex is not allowed.

Function prototype: void __sched mutex_unlock(struct mutex *lock)

Return type: void

Parameters:

Type            Name
struct mutex *  lock
Line 740: __mutex_unlock_slowpath(lock, _RET_IP_)
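
As the header comment notes, the lock must be released by the owning task and never from interrupt context. A minimal usage sketch (hypothetical example code, not part of mutex.c):

#include <linux/mutex.h>

static DEFINE_MUTEX(demo_lock);  /* hypothetical mutex guarding demo_count */
static int demo_count;

static void demo_update(void)
{
	/* Sleeping context only: mutex_lock() may block. */
	mutex_lock(&demo_lock);
	demo_count++;
	/* Must be released by the task that locked it. The fast path is
	 * taken when there are no waiters; otherwise the line-740 call
	 * __mutex_unlock_slowpath(lock, _RET_IP_) hands off the lock and
	 * wakes a waiter. */
	mutex_unlock(&demo_lock);
}
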
Callers
Name - Description
kobject_uevent_env - send an uevent with environmental data. @kobj: struct kobject that the action is happening to. @action: action that is happening. @envp_ext: pointer to environmental data. Returns 0 if kobject_uevent_env() is completed with success or the ...
uevent_net_rcv_skb
uevent_net_init
uevent_net_exit
logic_pio_register_range - register a logical PIO range for a host. @new_range: pointer to the IO range to be registered. Returns 0 on success, the error code in case of failure. Registers a new IO range node in the IO range list.
logic_pio_unregister_range - unregister a logical PIO range for a host. @range: pointer to the IO range which has already been registered. Unregisters a previously registered IO range node.
rht_deferred_worker
rhashtable_free_and_destroy - free elements and destroy hash table. @ht: the hash table to destroy. @free_fn: callback to release resources of element. @arg: pointer passed to free_fn. Stops an eventual async resize. If defined, invokes free_fn for each ...
refcount_dec_and_mutex_lock - return holding mutex if able to decrement refcount to 0. @r: the refcount. @lock: the mutex to be locked. Similar to atomic_dec_and_mutex_lock(), it will WARN on underflow and fail to decrement when saturated at REFCOUNT_SATURATED.
test_fw_misc_read
test_release_all_firmware
reset_store
config_show
config_name_store
config_test_show_str - As per sysfs_kf_seq_show(), the buf is at most PAGE_SIZE.
test_dev_config_update_bool
test_dev_config_show_bool
test_dev_config_show_int
test_dev_config_update_u8
test_dev_config_show_u8
config_num_requests_store
trigger_request_store
trigger_async_request_store
trigger_custom_fallback_store
trigger_batched_requests_store - We use a kthread as otherwise the kernel serializes all our sync requests and we would not be able to mimic batched requests on a sync call. Batched requests on a sync call can for instance happen on a device driver when ...
trigger_batched_requests_async_store
read_firmware_show
test_firmware_exit
print_ht
run_request
tally_up_work - XXX: add result option to display if all errors did not match. For now we just keep any error code if one was found. If this ran it means *all* tasks were created fine and we are now just collecting results.
try_one_request
test_dev_kmod_stop_tests
config_show
trigger_config_run
kmod_config_free
config_test_driver_store
config_test_show_str - As per sysfs_kf_seq_show(), the buf is at most PAGE_SIZE.
config_test_fs_store
trigger_config_run_type
reset_store
test_dev_config_update_uint_sync
test_dev_config_update_uint_range
test_dev_config_update_int
test_dev_config_show_int
test_dev_config_show_uint
kmod_config_init
register_test_dev_kmod
unregister_test_dev_kmod
test_kmod_exit
expand_to_next_prime
free_primes
ww_test_normal
ww_test_edeadlk_normal
ww_test_edeadlk_normal_slow
ww_test_edeadlk_no_unlock
ww_test_edeadlk_no_unlock_slow
crc_t10dif_rehash
free_rs - release a control structure that is no longer in use.
init_rs_internal - Allocate rs control, find a matching codec or allocate a new one. @symsize: the symbol size (number of bits). @gfpoly: the extended Galois field generator polynomial coefficients, with the 0th coefficient in the low order bit ...
within_error_injection_list
populate_error_injection_list - Lookup and populate the error_injection_list. For safety reasons we only allow certain functions to be overridden with bpf_error_injection, so we need to populate the list of the symbols that have been marked as safe for overriding.
module_unload_ei_list
ei_seq_stop
ddebug_change - Search the tables for _ddebug's which match the given `query' and apply the `flags' and `mask' to them. Returns the number of matching callsites, normally the same as the number of changes. If verbose, logs the changes. Takes ddebug_lock.
ddebug_proc_stop - Seq_ops stop method. Called at the end of each read() call from userspace. Drops ddebug_lock.
ddebug_add_module - Allocate a new ddebug_table for the given module and add it to the global list.
ddebug_remove_module - Called in response to a module being unloaded. Removes any ddebug_tables which point at the module.
ddebug_remove_all_tables
ldt_dup_context - Called on fork from arch_dup_mmap(). Just copy the current LDT state; the new task is not running, so nothing can be installed.
arch_jump_label_transform
arch_jump_label_transform_apply
init_espfix_ap
cpu_bugs_smt_update
enable_c02_store
max_time_store
mce_inject_log
set_ignore_ce
set_cmci_disabled
store_int_with_restart
mtrr_add_page - Add a memory type region. @base: Physical base address of region in pages (in units of 4 kB!). @size: Physical size of region in pages (4 kB). @type: Type of MTRR desired. @increment: If this is true do usage counting on the region. Memory ...
mtrr_del_page - delete a memory type region. @reg: Register returned by mtrr_add. @base: Physical base address. @size: Size of region. If register is supplied then base and size are ignored; this is how drivers should call it. Releases an MTRR region ...
reload_store
microcode_init
save_mc_for_early - Save this microcode patch. It will be loaded early when a CPU is hot-added or resumes.
resctrl_online_cpu
resctrl_offline_cpu
rdt_last_cmd_status_show
rdt_bit_usage_show - Display current usage of resources. A domain is a shared resource that can now be allocated differently ...
rdtgroup_kn_unlock
rdt_get_tree
rdt_kill_sb
rdtgroup_setup_root
cqm_handle_limbo - Handler to scan the limbo list and move to the free list the RMIDs whose occupancy < threshold_occupancy.
mbm_handle_overflow
pseudo_lock_measure_cycles - Trigger latency measure to pseudo-locked region. The measurement of latency to access a pseudo-locked region should be done from a cpu that is associated with that pseudo-locked region.
rdtgroup_pseudo_lock_create - Create a pseudo-locked region. @rdtgrp: resource group to which the pseudo-lock region belongs. Called when a resource group in the pseudo-locksetup mode receives a valid schemata that should be pseudo-locked.
pseudo_lock_dev_open
pseudo_lock_dev_release
pseudo_lock_dev_mmap
do_ioctl
mp_map_pin_to_irq
mp_unmap_irq
mp_alloc_timer_irq
__amd_smn_rw
amd_df_indirect_read - Data Fabric Indirect Access uses FICAA/FICAD.
sched_itmt_update_handler
sched_set_itmt_support - Indicate the platform supports ITMT. This function is used by the OS to indicate to the scheduler that the platform is capable of supporting the ITMT feature. The current scheme has the pstate driver detect if the system ...
sched_clear_itmt_support - Revoke the platform's support of ITMT. This function is used by the OS to indicate that it has revoked the platform's support of the ITMT feature.
unwind_module_init
__cpuhp_state_add_instance_cpuslocked
__cpuhp_setup_state_cpuslocked - Setup the callbacks for a hotplug machine state. @state: The state to setup. @invoke: If true, the startup function is invoked for cpus where cpu state >= @state. @startup: startup callback function. @teardown: teardown ...
__cpuhp_state_remove_instance
__cpuhp_remove_state_cpuslocked - Remove the callbacks for a hotplug machine state. @state: The state to remove. @invoke: If true, the teardown function is invoked for cpus where cpu state >= @state.
proc_do_static_key
ptrace_attach
fork_usermode_blob - fork a blob of bytes as a usermode process. @data: a blob of bytes that can be do_execv-ed as a file. @len: length of the blob. @info: information about the usermode process (shouldn't be NULL). If info->cmdline is set it will be used as ...
__exit_umh
worker_attach_to_pool() - attach a worker to a pool. @worker: worker to be attached. @pool: the target pool. Attach @worker to @pool. Once attached, the %WORKER_UNBOUND flag and cpu-binding of @worker are kept coordinated with the pool across ...
worker_detach_from_pool() - detach a worker from its pool. @worker: worker which is attached to its pool. Undo the attaching which had been done in worker_attach_to_pool(). The caller worker shouldn't access the pool after detaching unless it has ...
set_pf_worker
flush_workqueue - ensure that any scheduled work has run to completion. @wq: workqueue to flush. This function sleeps until all work items which were queued on entry have finished execution, but it is not livelocked by new incoming ones.
drain_workqueue - drain a workqueue. @wq: workqueue to drain. Wait until the workqueue becomes empty. While draining is in progress, only chain queueing is allowed. IOW, only currently pending or running ...
put_unbound_pool - put a worker_pool. @pool: worker_pool to put. Put @pool ...
pwq_unbound_release_workfn - Scheduled on system_wq by put_pwq() when an unbound pwq hits zero refcnt and needs to be destroyed.
apply_wqattrs_commit - set attrs and install prepared pwqs; @ctx points to old pwqs on return.
apply_wqattrs_unlock
apply_workqueue_attrs - apply new workqueue_attrs to an unbound workqueue. @wq: the target workqueue. @attrs: the workqueue_attrs to apply, allocated with alloc_workqueue_attrs(). Apply @attrs to an unbound workqueue @wq ...
wq_update_unbound_numa - update NUMA affinity of a wq for CPU hot[un]plug. @wq: the target workqueue. @cpu: the CPU coming up or going down. @online: whether @cpu is coming up or going down. This function is to be called from %CPU_DOWN_PREPARE, %CPU_ONLINE ...
alloc_and_link_pwqs
alloc_workqueue
destroy_workqueue - safely terminate a workqueue. @wq: target workqueue. Safely destroy a workqueue. All work currently pending will be done first.
workqueue_set_max_active - adjust max_active of a workqueue. @wq: target workqueue. @max_active: new max_active value. Set max_active of @wq to @max_active. CONTEXT: Don't call from IRQ context.
wq_worker_comm - Used to show worker information through /proc/PID/{comm,stat,status}.
workqueue_init_early - early init for workqueue subsystem. This is the first half of two-staged workqueue subsystem initialization and invoked as soon as the bare basics - memory allocation, cpumasks and idr - are up.
workqueue_init - bring workqueue subsystem fully online. This is the latter half of two-staged workqueue subsystem initialization and invoked as soon as kthreads can be created and scheduled.
SYSCALL_DEFINE4 - Reboot system call: for obvious reasons only root may call it, and even root needs to set up some magic numbers in the registers so that some mistake won't make this reboot the whole machine. You can also set the meaning of the ctrl-alt-del key here.
smpboot_create_threads
smpboot_unpark_threads
smpboot_park_threads
smpboot_register_percpu_thread - Register a per_cpu thread related to hotplug. @plug_thread: Hotplug thread descriptor. Creates and starts the threads on all online cpus.
smpboot_unregister_percpu_thread - Unregister a per_cpu thread related to hotplug. @plug_thread: Hotplug thread descriptor. Stops all threads on all possible cpus.
sched_rt_handler
sched_rr_handler
partition_sched_domains - Call with hotplug lock held.
sugov_work
sugov_init
sugov_exit
sugov_limits
psi_avgs_work
psi_poll_work
psi_show
psi_trigger_create
psi_trigger_destroy
psi_write
ww_mutex_unlock - release the w/w mutex. @lock: the mutex to be released. Unlock a mutex that has been locked by this task previously with any of the ww_mutex_lock* functions (with or without an acquire context). It is ...
atomic_dec_and_mutex_lock - return holding mutex if we dec to 0. @cnt: the atomic which we are to dec. @lock: the mutex to return holding if we dec to 0. Returns true and holds the lock if we dec to 0, returns false otherwise. (A usage sketch appears after this list.)
torture_mutex_unlock
pm_vt_switch_required - indicate VT switch at suspend requirements. @dev: device. @required: if true, caller needs VT switch at suspend/resume time. The different console drivers may or may not require VT switches across suspend/resume, depending on how ...
pm_vt_switch_unregister - stop tracking a device's VT switching needs. @dev: device. Remove @dev from the vt switch list.
pm_vt_switch - There are three cases when a VT switch on suspend/resume is required: 1) no driver has indicated a requirement one way or another, so preserve the old behavior; 2) console suspend is disabled, we want to see debug messages across suspend/resume; 3) ...
enter_state - Do common work needed to enter system sleep state. @state: System sleep state to enter. Make sure that no one else is trying to put the system into a sleep state. Fail if that's not the case. Otherwise, prepare for system suspend, make the ...
software_resume - Resume from a saved hibernation image. This routine is called as a late initcall, when all devices have been discovered and initialized already. The image reading code is called to see if there is a hibernation image ...
snapshot_ioctl
try_to_suspend
pm_autosleep_unlock
pm_autosleep_set_state
pm_show_wakelocks
pm_wake_lock
pm_wake_unlock
em_register_perf_domain - Register the Energy Model of a performance domain. @span: Mask of CPUs in the performance domain. @nr_states: Number of capacity states to register. @cb: Callback functions providing the data of the Energy Model. Create Energy ...
devkmsg_read
irq_mark_irq
irq_free_descs - free irq descriptors. @from: Start of descriptor range. @cnt: Number of consecutive irqs to free.
__irq_alloc_descs - allocate and initialize a range of irq descriptors. @irq: Allocate for specific irq number if irq >= 0. @from: Start the search from this irq number. @cnt: Number of consecutive irqs to allocate.
__setup_irq - register an interrupt.
__free_irq - Internal function to unregister an irqaction - used to free regular and special interrupts that are part of the architecture.
probe_irq_mask - scan a map of interrupt lines.
probe_irq_off - end an interrupt autodetect. @val: mask of potential interrupts (unused). Scans the unused interrupt lines and returns the line which appears to have triggered the interrupt. If no interrupt was found then zero is returned.
__irq_domain_add() - Allocate a new irq_domain data structure. @fwnode: firmware node for the interrupt controller. @size: Size of linear map; 0 for radix mapping only. @hwirq_max: Maximum number of interrupts supported by controller. @direct_max: Maximum ...
irq_domain_remove() - Remove an irq domain. @domain: domain to remove. This routine is used to remove an irq domain. The caller must ensure that all mappings within the domain have been disposed of prior to use, depending on the revmap type.
irq_domain_update_bus_token
irq_find_matching_fwspec() - Locates a domain for a given fwspec. @fwspec: FW specifier for an interrupt. @bus_token: domain-specific data.
irq_domain_check_msi_remap - Check whether all MSI irq domains implement IRQ remapping. Return: false if any MSI irq domain does not support IRQ remapping, true otherwise (including if there is no MSI irq domain).
irq_domain_clear_mapping
irq_domain_set_mapping
irq_domain_associate
register_irq_proc - register the interrupt in the proc filesystem.
srcu_gp_end - Note the end of an SRCU grace period. Initiates callback invocation and starts a new grace period if needed. The ->srcu_cb_mutex acquisition does not protect any data, but instead prevents more than one grace period from starting while we ...
srcu_barrier - Wait until all in-flight call_srcu() callbacks complete. @ssp: srcu_struct on which to wait for in-flight callbacks.
srcu_advance_state - Core SRCU state machine. Push state bits of ->srcu_gp_seq to SRCU_STATE_SCAN2, and invoke srcu_gp_end() when scan has completed in that state.
rcu_torture_boost
rcutorture_booster_cleanup
rcutorture_booster_init
rcu_barrier - Wait until all in-flight call_rcu() callbacks complete.
klp_find_object_module - sets obj->mod if object is not vmlinux and module is found.
klp_find_object_symbol
enabled_store
force_store
klp_init_object_loaded - parts of the initialization that are done only when the object is loaded.
klp_enable_patch() - enable the livepatch. @patch: patch to be enabled. Initializes the data structure associated with the patch, creates the sysfs interface, performs the needed symbol lookups and code relocations, ...
klp_module_coming
klp_module_going
klp_transition_work_fn - This work can be performed periodically to finish patching or unpatching any "straggler" tasks which failed to transition in the first attempt.
kcmp_unlock
kcmp_lock
clocksource_done_booting - Called near the end of core bootup. Hack to avoid lots of clocksource churn at boot time. We use fs_initcall because we want this to start before device_initcall but after subsys_initcall.
__clocksource_register_scale - Used to install new clocksources. @cs: clocksource to be registered. @scale: Scale factor multiplied against freq to get clocksource hz. @freq: clocksource frequency (cycles per second) divided by scale.
clocksource_change_rating - Change the rating of a registered clocksource. @cs: clocksource to be changed. @rating: new rating.
clocksource_unregister - remove a registered clocksource. @cs: clocksource to be unregistered.
boot_override_clocksource - boot clock override. @str: override name. Takes a clocksource= boot argument and uses it as the clocksource override name.
clockevents_unbind_device - Unbind a clockevents device.
udelay_test_show
udelay_test_write
udelay_test_init
udelay_test_exit
wait_for_owner_exiting - Block until the owner has exited. @ret: owner's current futex lock status. @exiting: Pointer to the exiting task. Caller must hold a refcount on @exiting.
futex_exit_recursive - Set the task's futex state to FUTEX_STATE_DEAD. @tsk: task to set the state on. Set the futex exit state of the task lockless. The futex waiter code observes that state when a task is exiting and loops until the task has ...
futex_cleanup_end
resolve_symbol - Resolve a symbol for this module, i.e. if we find one, record usage.
free_module - Free a module, remove from lists, etc.
finished_loading - Is this module of this name done loading? No locks held.
do_init_module - This is where the real work happens. Keep it uninlined to provide a reliable breakpoint target, e.g. for the gdb helper command 'lx-symbols'.
add_unformed_module - We try to place it in the list now to make sure it's unique before we dedicate too many resources. In particular, temporary percpu memory exhaustion.
complete_formation
load_module - Allocate and load the module: note that the size of section 0 is always zero, and we rely on this for optional sections.
m_stop
acct_get
acct_pin_kill
acct_on
SYSCALL_DEFINE1 - sys_acct - enable/disable process accounting. @name: file name for accounting records or NULL to shutdown accounting. Returns 0 for success or negative errno values for failure. sys_acct() is the only system call needed to implement process accounting.
slow_acct_process
__crash_kexec - No panic_cpu check version of crash_kexec(). This function is called only when panic_cpu holds the current CPU number; this is the only CPU which processes crash_kexec routines.
crash_get_memory_size
crash_shrink_memory
kernel_kexec - Move into place and start executing a preloaded standalone executable. If nothing was preloaded return an error.
SYSCALL_DEFINE4
COMPAT_SYSCALL_DEFINE4
SYSCALL_DEFINE5
cgroup_destroy_root
cgroup_kn_unlock - unlocking helper for cgroup kernfs methods. @kn: the kernfs_node being serviced. This helper undoes cgroup_kn_lock_live() and should be invoked before the method finishes if locking succeeded.
cgroup_do_get_tree
cgroup_path_ns
task_cgroup_path - cgroup path of a task in the first cgroup hierarchy. @task: target task. @buf: the buffer to write the path into. @buflen: the length of the buffer. Determine @task's cgroup on the first (the one with the lowest non-zero ...
cgroup_lock_and_drain_offline - lock cgroup_mutex and drain offlined csses. @cgrp: root of the target subtree. Because css offlining is asynchronous, userland may try to re-enable a controller while the previous css is still around. This function grabs ...
cgroup_rm_cftypes - remove an array of cftypes from a subsystem. @cfts: zero-length name terminated array of cftypes. Unregister @cfts. Files described by @cfts are removed from all existing cgroups and all future cgroups won't have them either. This ...
cgroup_add_cftypes - add an array of cftypes to a subsystem. @ss: target cgroup subsystem. @cfts: zero-length name terminated array of cftypes. Register @cfts to @ss ...
css_release_work_fn
css_killed_work_fn - This is called when the refcnt of a css is confirmed to be killed. css_tryget_online() is now guaranteed to fail. Tell the subsystem to initiate destruction and put the css ref from kill_css().
cgroup_init_subsys
cgroup_init - cgroup initialization. Register cgroup filesystem and /proc file, and initialize any subsystems that didn't request early init.
proc_cgroup_show - Print task's cgroup paths into seq_file, one line for each hierarchy. Used for /proc/<pid>/cgroup.
cgroup_get_from_path - lookup and get a cgroup from its default hierarchy path. @path: path on the default hierarchy. Find the cgroup at @path on the default hierarchy, increment its reference count and return it.
cgroup_bpf_attach - sock->sk_cgrp_data handling. For more info, see the sock_cgroup_data definition in cgroup-defs.h.
cgroup_bpf_detach
cgroup_bpf_query
cgroup_attach_task_all - attach task 'tsk' to all cgroups of task 'from'. @from: attach to all cgroups of a given task. @tsk: the task to be attached.
cgroup_transfer_tasks - move tasks from one cgroup to another. @to: cgroup to which the tasks will be moved. @from: cgroup in which the tasks currently reside. Locking rules between cgroup_post_fork() and the migration path guarantee that, if a task is ...
cgroup1_pidlist_destroy_all - Used to destroy all pidlists lingering waiting for destroy timer. None should be left afterwards.
cgroup_pidlist_destroy_work_fn
cgroup_pidlist_stop
proc_cgroupstats_show - Display information about each subsystem and each hierarchy.
cgroupstats_build - build and fill cgroupstats. @stats: cgroupstats to fill information into. @dentry: A dentry entry belonging to the cgroup for which stats have been requested. Build and fill cgroupstats so that taskstats can export it to user space.
cgroup1_release_agent - Notify userspace when a cgroup is released, by running the configured release agent with the name of the cgroup (path relative to the root of the cgroup file system) as the argument.
cgroup1_rename - Only allow simple rename of directories in place.
cgroup1_reconfigure
cgroup1_get_tree
freezer_css_online - commit creation of a freezer css. @css: css being created. We're committing to creation of @css. Mark it online and inherit parent's freezing state while holding both parent's and our freezer->lock.
freezer_css_offline - initiate destruction of a freezer css. @css: css being destroyed. @css is going away. Mark it dead and decrement system_freezing_count if it was holding one.
freezer_attach - Tasks can be migrated into a different freezer anytime regardless of its current state. freezer_attach() is responsible for making new tasks conform to the current state. Freezer state changes and task migration are synchronized via @freezer->lock ...
freezer_fork - cgroup post fork callback. @task: a task which has just been forked. @task has just been created and should conform to the current state of the cgroup_freezer it belongs to. This function may race against freezer_attach().
freezer_read
freezer_change_state - change the freezing state of a cgroup_freezer. @freezer: freezer of interest. @freeze: whether to freeze or thaw. Freeze or thaw @freezer according to @freeze. The operations are recursive - all descendants of @freezer will be affected.
rdmacg_uncharge_hierarchy - hierarchically uncharge rdma resource count. @device: pointer to rdmacg device. @stop_cg: while traversing the hierarchy, stop uncharging when the stop_cg cgroup is met. @index: index of the resource to uncharge in cg in the given resource ...
rdmacg_try_charge - hierarchically try to charge the rdma resource. @rdmacg: pointer to rdma cgroup which will own this resource. @device: pointer to rdmacg device. @index: index of the resource to charge in cgroup (resource pool).
rdmacg_register_device - register rdmacg device to rdma controller.
rdmacg_unregister_device - unregister rdmacg device from rdma controller.
rdmacg_resource_set_max
rdmacg_resource_read
rdmacg_css_offline - cgroup css_offline callback. @css: css of interest. This function is called when @css is about to go away and is responsible for shooting down all rdmacg associated with @css.
create_user_ns - Create a new user namespace, deriving the creator from the user in the passed credentials, and replacing that user with the new root user for the new namespace. This is called by copy_creds(), which will finish setting the target task's credentials.
map_write
proc_setgroups_write
userns_may_setgroups
create_pid_cachep - creates the kmem cache to allocate pids from. @level: pid namespace level.
stop_cpus - stop multiple cpus. @cpumask: cpus to stop. @fn: function to execute. @arg: argument to @fn. Execute @fn(@arg) on online cpus in @cpumask. On each target cpu, @fn is run in a process context with the highest priority ...
try_stop_cpus - try to stop multiple cpus. @cpumask: cpus to stop. @fn: function to execute. @arg: argument to @fn. Identical to stop_cpus() except that it fails with -EAGAIN if someone else is already using the facility.
stop_machine_from_inactive_cpu - stop_machine() from inactive CPU. @fn: the function to run. @data: the data ptr for the @fn(). @cpus: the cpus to run the @fn() on (NULL = any online cpu). This is identical to stop_machine() but can be called from a CPU which ...
audit_ctl_unlock - Drop the audit control lock.
audit_add_rule - Add rule to given filterlist if not a duplicate.
audit_del_rule - Remove an existing rule from filterlist.
audit_list_rules_send - list the audit rules. @request_skb: skb of request we are replying to (used to target the reply). @seq: netlink audit message sequence (serial) number.
audit_update_lsm_rules - This function will re-initialize the lsm_rule field of all applicable rules.
audit_update_watch - Update inode info in audit rules based on filesystem event.
audit_remove_parent_watches - Remove all watches & rules associated with a parent that is going away.
audit_add_watch - Find a matching watch entry, or add this one. Caller must hold audit_filter_mutex.
untag_chunk
create_chunk - Call with group->mark_mutex held, releases it.
tag_chunk - The first tagged inode becomes root of tree.
trim_marked - Trim the uncommitted chunks from tree.
audit_trim_trees
prune_tree_thread - That gets run when evict_chunk() ends up needing to kill audit_tree. Runs from a separate thread.
audit_add_tree_rule - Called with audit_filter_mutex.
audit_tag_tree
audit_kill_trees - ... and that one is done if evict_chunk() decides to delay until the end of syscall. Runs synchronously.
evict_chunk - Here comes the stuff asynchronous to auditctl operations.
audit_tree_freeing_mark
gcov_enable_events - enable event reporting through gcov_event(). Turn on reporting of profiling data load/unload events through the gcov_event() callback.
gcov_module_notifier - Update list and generate events when modules are unloaded.
gcov_seq_open - open() implementation for gcov data files. Create a copy of the profiling data set and initialize the iterator and seq_file interface.
gcov_seq_write - write() implementation for gcov data files. Reset profiling data for the corresponding file. If all associated object files have been unloaded, remove the debugfs node as well.
reset_write - write() implementation for reset file. Reset all profiling data to zero and remove nodes for which all associated object files are unloaded.
gcov_event - Callback to create/remove profiling files when code compiled with -fprofile-arcs is loaded/unloaded.
__gcov_init - called by gcc-generated constructor code for each object file compiled with -fprofile-arcs.
llvm_gcov_init
__get_insn_slot() - Find a slot on an executable page for an instruction. We allocate an executable page if there's no room on existing ones.
__free_insn_slot
kprobe_optimizer
wait_for_kprobe_optimizer - Wait for completing optimization and unoptimization.
try_to_optimize_kprobe - Prepare an optimized_kprobe and optimize it. NOTE: p must be a normal registered kprobe.
optimize_all_kprobes
unoptimize_all_kprobes
proc_kprobes_optimization_handler
arm_kprobe - Arm a kprobe with text_mutex.
disarm_kprobe - Disarm a kprobe with text_mutex.
register_aggr_kprobe - This is the second or subsequent kprobe at the address - handle the intricacies.
check_kprobe_rereg - Return error if the kprobe is being re-registered.
register_kprobe
unregister_kprobes
disable_kprobe - Disable one kprobe.
enable_kprobe - Enable one kprobe.
kprobes_module_callback - Module notifier callback, checking kprobes on the module.
fei_retval_set
fei_retval_get
fei_seq_stop
fei_write
lockup_detector_cleanup - Cleanup after cpu hotplug or sysctl changes. Caller must not hold the cpu hotplug rwsem.
proc_watchdog_common - common function for the watchdog, nmi_watchdog and soft_watchdog parameters; a table maps each caller (e.g. proc_watchdog) to what table->data points to (e.g. watchdog_user_enabled) and its 'which' bits (e.g. NMI_WATCHDOG_ENABLED | ...).
proc_watchdog_thresh - /proc/sys/kernel/watchdog_thresh.
proc_watchdog_cpumask - The cpumask is the mask of possible cpus that the watchdog can run on, not the mask of cpus it is actually running on. This allows the user to specify a mask that will include cpus that have not yet been brought online, if desired.
relay_reset - reset the channel. @chan: the channel. This has the effect of erasing all data from all channel buffers and restarting the channel in its initial state. The buffers are not freed, so any mappings are still in effect. NOTE ...
relay_prepare_cpu
relay_open - create a new relay channel. @base_filename: base name of files to create, %NULL for buffering only. @parent: dentry of parent directory, %NULL for root directory or buffer. @subbuf_size: size of sub-buffers. @n_subbufs: number of sub-buffers. @cb: ...
relay_late_setup_files - triggers file creation. @chan: channel to operate on. @base_filename: base name of files to create. @parent: dentry of parent directory, %NULL for root directory. Returns 0 if successful, non-zero otherwise.
relay_close - close the channel. @chan: the channel. Closes all channel buffers and frees the channel.
relay_flush - close the channel. @chan: the channel. Flushes all channel buffers, i.e. forces buffer switch.
tracepoint_probe_register_prio - Connect a probe to a tracepoint with priority. @tp: tracepoint. @probe: probe handler. @data: tracepoint data. @prio: priority of this function over other registered functions. Returns 0 if ok, error value on error.
tracepoint_probe_unregister - Disconnect a probe from a tracepoint. @tp: tracepoint. @probe: probe function pointer. @data: tracepoint data. Returns 0 if ok, error value on error.
register_tracepoint_module_notifier - register tracepoint coming/going notifier. @nb: notifier block. Notifiers registered with this function are called on module coming/going with the tracepoint_module_list_mutex held.
unregister_tracepoint_module_notifier - unregister tracepoint coming/going notifier. @nb: notifier block. The notifier block callback should expect a "struct tp_module" data pointer.
tracepoint_module_coming
tracepoint_module_going
ring_buffer_resize - resize the ring buffer. @buffer: the buffer to resize. @size: the new size. @cpu_id: the cpu buffer to resize. Minimum size is 2 * BUF_PAGE_SIZE. Returns 0 on success and < 0 on failure.
ring_buffer_change_overwrite
trace_array_get
trace_array_put - Decrement the reference counter for this trace array. NOTE: Use this when we no longer need the trace array returned by trace_array_get_by_name(). This ensures the trace array can be later destroyed.
trace_access_unlock
register_tracer - register a tracer with the ftrace system. @type: the plugin for the tracer. Register a new plugin tracer.
tracepoint_printk_sysctl
register_ftrace_export
unregister_ftrace_export
s_start - The current tracer is copied to avoid global locking all around.
__tracing_open
tracing_release
t_stop
tracing_trace_options_show
trace_set_options
tracing_set_trace_read
tracing_resize_ring_buffer
tracing_update_buffers - used by the tracing facility to expand ring buffers, to save on memory when tracing is never used on a system with it configured in.
tracing_set_tracer
tracing_thresh_write
tracing_open_pipe
tracing_release_pipe
tracing_wait_pipe - Must be called with iter->mutex held.
tracing_read_pipe - Consumer reader.
tracing_splice_read_pipe
tracing_entries_read
tracing_total_entries_read
tracing_set_clock
tracing_time_stamp_mode_show
tracing_set_time_stamp_abs
tracing_log_errracing_log_err - write an error to the tracing error log*@tr: The associated trace array for the error (NULL for top level array)*@loc: A string describing where the error occurred*@cmd: The tracing command that caused the error*@errs: The array of
clear_tracing_err_log
tracing_err_log_seq_stop
tracing_buffers_open
tracing_buffers_release
trace_options_write
trace_options_core_write
rb_simple_write
update_tracer_options
instance_mkdir
trace_array_get_by_name - Create/Lookup a trace array, given its name.
trace_array_destroy
instance_rmdir
reset_stat_session
stat_seq_init - Initialize the stat rbtree at each trace_stat file opening. All of these copies and sortings are required on every opening since the stats could have changed between two file sessions.
stat_seq_stop
register_stat_tracer
unregister_stat_tracer
hold_module_trace_bprintk_format
format_mod_stop
tracing_start_sched_switch
tracing_stop_sched_switch
kthread_fn - The CPU time sampling/hardware latency detection kernel thread. Used to periodically sample the CPU TSC via a call to get_sample. We disable interrupts, which does (intentionally) introduce latency since we ...
hwlat_width_write - Write function for the "width" entry. @filp: The active open file structure. @ubuf: The user buffer that contains the value to write. @cnt: The maximum number of bytes to write to "file". @ppos: The current position in @file. This function ...
hwlat_window_write - Write function for the "window" entry. @filp: The active open file structure. @ubuf: The user buffer that contains the value to write. @cnt: The maximum number of bytes to write to "file". @ppos: The current position in @file. This function ...
stack_trace_sysctl
register_ftrace_graph
unregister_ftrace_graph
ftrace_clear_events
ftrace_clear_event_pids
put_system
__ftrace_set_clr_event
t_stop
p_stop
event_enable_read
event_enable_write
system_enable_read
f_stop
event_filter_read
event_filter_write
subsystem_open
ftrace_event_pid_write
trace_add_event_call - Add an additional event_call dynamically.
trace_remove_event_call - Remove an event_call.
trace_module_notify
early_event_add_tracer - The top trace array already had its file descriptors created. Now the files themselves need to be created.
reg_event_syscall_enter
unreg_event_syscall_enter
reg_event_syscall_exit
unreg_event_syscall_exit
perf_trace_init
perf_trace_destroy
print_subsystem_event_filter
apply_subsystem_event_filter
trigger_stop
trigger_show
event_trigger_regex_open
trigger_process_regex
event_trigger_regex_write
event_trigger_regex_release
register_event_command - Currently we only register event commands from __init, so mark this __init too.
unregister_event_command - Currently we only unregister event commands from __init, so mark this __init too.
event_inject_write
__create_synth_event
create_or_delete_synth_event
hist_show
bpf_get_raw_tracepoint_module
perf_event_attach_bpf_prog
perf_event_detach_bpf_prog
perf_event_query_prog_array
bpf_event_notify
trace_kprobe_module_exist
register_trace_kprobe - Register a trace_probe and probe_event.
trace_kprobe_module_callback - Module notifier callback, checking events on the module.
enable_boot_kprobe_events
dyn_event_register
dyn_event_release
create_dyn_event
dyn_event_seq_stop
dyn_events_release_all - Release all specific events. @type: the dyn_event_operations * which filters releasing events. This releases all events whose ->ops matches @type.
register_trace_uprobe - Register a trace_uprobe and probe_event.
uprobe_buffer_put
ftrace_clear_pids
ftrace_pid_reset
fpid_stop
ftrace_pid_write
register_ftrace_function - register a function for profiling. @ops: ops structure that holds the function for profiling.
unregister_ftrace_function - unregister a function for profiling. @ops: ops structure that holds the function to unregister. Unregister a function that was added to be called by ftrace profiling.
ftrace_enable_sysctl
bpf_map_mmap_open - called for any extra memory-mapped regions (except initial).
bpf_map_mmap_close - called for all unmapped memory regions (including initial).
bpf_map_mmap
map_freeze
check_attach_btf_id
bpf_check
bpf_fd_array_map_update_elem - only called from syscall.
fd_array_map_delete_elem
prog_array_map_poke_track
prog_array_map_poke_untrack
bpf_trampoline_lookup
bpf_trampoline_link_prog
bpf_trampoline_unlink_prog - bpf_trampoline_unlink_prog() should never fail.
bpf_trampoline_put
cgroup_bpf_release() - put references of all bpf programs and release all cgroup bpf data. @work: work structure embedded into the cgroup to modify.
perf_event_ctx_lock_nested - Because of perf_event::ctx migration in sys_perf_event_open::move_group and perf_pmu_migrate_context() we need some magic. Those places that change perf_event::ctx will hold both perf_event_ctx::mutex of the 'old' and 'new' ctx value.
perf_event_ctx_unlock
find_get_context - Returns a matching context with refcount and pincount.
perf_sched_delayed - perf_sched_events: >0 events exist; perf_cgroup_events: >0 per-cpu cgroup events exist on this cpu.
_free_event
perf_remove_from_owner - Remove user event from the owner task.
perf_event_release_kernel - Kill an event dead; while event::refcount will preserve the event object, it will not preserve its functionality. Once the last 'user' gives up the object, we'll destroy the thing.
__perf_event_read_value
perf_read_group
is_event_hup
perf_poll
perf_event_for_each_child - Holding the top-level event's child_mutex means that any descendant process that has inherited this event will block in perf_event_exit_event() if it goes to exit, thus satisfying the task existence requirements of perf_event_enable/disable.
perf_event_task_enable
perf_event_task_disable
perf_mmap_close - A buffer can be mmap()ed multiple times; either directly through the same event, or through other events by use of perf_event_set_output(). In order to undo the VM accounting done by perf_mmap() we need to destroy ...
perf_mmap
swevent_hlist_put_cpu
swevent_hlist_get_cpu
swevent_hlist_get
perf_event_mux_interval_ms_store
perf_pmu_register
perf_pmu_unregister
account_event
perf_event_set_output
__perf_event_ctx_lock_double - Variation on perf_event_ctx_lock_nested(), except we take two context mutexes.
perf_event_create_kernel_counter - @attr: attributes of the counter to create. @cpu: cpu in which the counter is bound. @task: task to profile (NULL for percpu).
perf_pmu_migrate_context
perf_event_exit_event
perf_event_exit_task_context
perf_event_exit_task - When a child task exits, feed back event values to parent events. Can be called with cred_guard_mutex held when called from install_exec_creds().
perf_free_event
perf_event_free_task - Free a context as created by inheritance by perf_event_init_task() below, used by fork() in case of fail. Even though the task has never lived, the context and events have been exposed through the child_list, so we must take care tearing it all down.
inherit_event - Inherit an event from parent task to child task. Returns: a valid pointer on success; NULL for orphaned events; IS_ERR() on error.
perf_event_init_context - Initialize the perf_event context in task_struct.
perf_swevent_init_cpu
perf_event_exit_cpu_context
perf_event_init_cpu
perf_event_sysfs_init
get_callchain_buffers
put_callchain_buffers
perf_event_max_stack_handler - Used for sysctl_perf_event_max_stack and sysctl_perf_event_max_contexts_per_stack.
reserve_bp_slot
release_bp_slot
modify_bp_slot
update_ref_ctr
put_uprobe
delayed_ref_ctr_inc - @vma contains the reference counter, not the probed instruction.
uprobe_mmap - Called from mmap_region/vma_adjust with mm->mmap_sem acquired. Currently we ignore all errors and always return 0; the callers can't handle the failure anyway.
uprobe_clear_state - Free the area allocated for slots.
padata_set_cpumask - Sets the cpumask specified by @cpumask_type to the value equivalent to @cpumask.
padata_start - start the parallel processing. @pinst: padata instance to start.
padata_stop - stop the parallel processing. @pinst: padata instance to stop.
show_cpumask
padata_alloc_shell - Allocate and initialize padata shell. @pinst: Parent padata_instance object.
padata_free_shell - free a padata shell. @ps: padata shell to free.
jump_label_unlock
torture_shuffle_task_register - Register a task to be shuffled. If there is no memory, just splat and don't bother registering.
torture_shuffle_task_unregister_all - Unregister all tasks, for example, at the end of the torture run.
torture_shuffle_tasks - Shuffle tasks such that we allow shuffle_idle_cpu to become idle. A special case is when shuffle_idle_cpu = -1, in which case we allow the tasks to run on all CPUs.
torture_shutdown_notify - Detect and respond to a system shutdown.
torture_init_begin - Initialize torture module.
torture_init_end - Tell the torture module that initialization is complete.
torture_cleanup_begin - Clean up torture module.
torture_cleanup_end
pagefault_out_of_memory - The pagefault handler calls here because it is out of memory, so kill a memory-hogging task. If oom_lock is held by somebody else, a parallel oom killing is already in progress so do nothing.
sysctl_vm_numa_stat_handler
pcpu_alloc - the percpu allocator. @size: size of area to allocate in bytes. @align: alignment of area (max PAGE_SIZE). @reserved: allocate from the reserved chunk if available. @gfp: allocation flags. Allocate a percpu area of @size bytes aligned at @align ...
pcpu_balance_workfn - Balance work is used to populate or destroy chunks asynchronously. We try to keep the number of populated free pages between PCPU_EMPTY_POP_PAGES_LOW and HIGH for atomic allocations and at most one empty chunk.
kmem_cache_create_usercopy - Create a cache with a region suitable for copying to userspace. @name: A string which is used in /proc/slabinfo to identify this cache. @size: The size of objects to be created in this cache.
slab_caches_to_rcu_destroy_workfn
kmem_cache_destroy - delete a cache.
kmem_cache_shrink_all - shrink a cache and all memcg caches for root cache. @s: The cache pointer.
slab_stop
dump_unreclaimable_slab
memcg_slab_stop
mm_drop_all_locks - The mmap_sem cannot be released by the caller until mm_drop_all_locks() returns.
try_purge_vmap_area_lazy - Kick off a purge of the outstanding lazy areas. Don't bother if somebody is already purging.
purge_vmap_area_lazy
_vm_unmap_aliases
s_stop
drain_all_pages - Spill all the per-cpu pages from all CPUs back into the buddy allocator. When the zone parameter is non-NULL, spill just the single zone's pages. Note that this can be extremely slow as the draining happens in a workqueue.
__alloc_pages_may_oom
percpu_pagelist_fraction_sysctl_handler - percpu_pagelist_fraction changes the pcp->high for each zone on each cpu. It is the fraction of total pages in each zone that a hot per-cpu pagelist can have before it gets flushed back to the buddy allocator.
zone_pcp_update - The zone indicated has a new number of managed_pages; batch sizes and percpu page high values need to be recalculated.
SYSCALL_DEFINE1
swap_stop
SYSCALL_DEFINE2
deactivate_swap_slots_cache
reactivate_swap_slots_cache
reenable_swap_slots_cache_unlock
alloc_swap_slot_cache
drain_slots_cache_cpu
free_slot_cache
enable_swap_slots_cache
get_swap_page
show_pools
dma_pool_create - Creates a pool of consistent memory blocks, for dma.
dma_pool_destroy - destroys a pool of dma memory blocks. @pool: dma pool that will be destroyed. Context: !in_interrupt(). Caller guarantees that no more memory from the pool is in use, and that nothing will try to use the pool after this call.
hugetlb_no_page
hugetlb_fault
ksm_scan_thread
slab_memory_callback
kmem_cache_init_late - late initialization of the slab allocator.
cache_reap - Reclaim memory from caches.
slabinfo_write - Tuning for the slab allocator. @file: unused. @buffer: user buffer. @count: data length. @ppos: unused. Return: %0 on success, negative error code otherwise.
slub_cpu_dead - Use the cpu notifier to ensure that the cpu slabs are flushed when necessary.
slab_mem_going_offline_callback
slab_mem_offline_callback
slab_mem_going_online_callback
start_stop_khugepaged
memcg_alloc_shrinker_maps
memcg_expand_shrinker_maps
mem_cgroup_out_of_memory
drain_all_stock - Drains all per-CPU charge caches for the given root_memcg resp. the subtree of the hierarchy under it.
mem_cgroup_resize_max
memcg_update_kmem_max
memcg_update_tcp_max
__mem_cgroup_usage_register_event
__mem_cgroup_usage_unregister_event
vmpressure_event
vmpressure_register_event() - Bind vmpressure notifications to an eventfd. @memcg: memcg that is interested in vmpressure notifications. @eventfd: eventfd context to link notifications with. @args: event arguments (pressure level threshold, optional mode).
vmpressure_unregister_event() - Unbind eventfd from vmpressure. @memcg: memcg handle. @eventfd: eventfd context that was used to link vmpressure with the @cg. This function does internal manipulations to detach the @eventfd from the vmpressure.
swap_cgroup_swapon
swap_cgroup_swapoff
hugetlb_cgroup_write
kmemleak_scan_thread - Thread function performing automatic memory scanning. Unreferenced objects at the end of a memory scan are reported, but only the first time.
kmemleak_seq_stop - Decrement the use_count of the last object required, if any.
kmemleak_write - File write operation to configure kmemleak at run-time.
kmemleak_do_cleanup - Stop the memory scanning thread and free the kmemleak internal objects if there is no previous scan thread (otherwise, kmemleak may still have some useful information on memory leaks).
kmemleak_late_init - Late initialization function.
cma_clear_bitmap
cma_alloc() - allocate pages from contiguous area. @cma: Contiguous memory region for which the allocation is performed.
cma_used_get
cma_maxchunk_get
__mcopy_atomic_hugetlb - __mcopy_atomic processing for HUGETLB vmas. Note that this routine is called with mmap_sem held; it will release mmap_sem before returning.
bio_put_slab
__elevator_exit
elv_attr_show
elv_attr_store
blk_cleanup_queue - release the request queue.
queue_attr_show
queue_attr_store
blk_register_queue - register a block layer queue with sysfs. @disk: Disk of which the request queue should be registered with sysfs.
blk_unregister_queue - counterpart of blk_register_queue(). @disk: Disk of which the request queue should be unregistered from sysfs. Note: the caller is responsible for guaranteeing that this function is called after blk_register_queue() has finished.
blk_freeze_queue_start
blk_mq_unfreeze_queue
blk_mq_del_queue_tag_set
blk_mq_add_queue_tag_set
blk_mq_realloc_hw_ctxs
blk_mq_elv_switch_none - Cache the elevator_type in the qe pair list and switch the io scheduler to 'none'.
blk_mq_elv_switch_back
blk_mq_update_nr_hw_queues
blk_mq_sysfs_show
blk_mq_sysfs_store
blk_mq_hw_sysfs_show
blk_mq_hw_sysfs_store
blk_mq_sysfs_unregister
blk_mq_sysfs_register
blkpg_ioctl
blkdev_reread_part
blkdev_show
register_blkdev - register a new block device. @major: the requested major device number [1..BLKDEV_MAJOR_MAX-1]. If @major = 0, try to allocate any unused major number. @name: the name of the new block device as a zero terminated string.
unregister_blkdev
disk_block_events - block and flush disk event checking. @disk: disk to block events for. On return from this function, it is guaranteed that event checking isn't in progress and won't happen until unblocked by disk_unblock_events().
disk_events_set_dfl_poll_msecs - The default polling interval can be specified by the kernel parameter block.events_dfl_poll_msecs, which defaults to 0 (disable). This can also be modified at runtime by writing to /sys/module/block/parameters/events_dfl_poll_msecs.
disk_add_events
disk_del_events
init_emergency_isa_pool - gets called "every" time someone inits a queue with BLK_BOUNCE_ISA as the max address, so check if the pool has already been created.
bsg_put_device
bsg_get_device
bsg_unregister_queue
bsg_register_queue
blkcg_reset_stats
blkcg_css_free
blkcg_css_alloc
blkcg_bind
blkcg_policy_register - register a blkcg policy. @pol: blkcg policy to register. Register @pol with blkcg core. Might sleep and @pol may be modified on successful registration. Returns 0 on success and -errno on failure.
blkcg_policy_unregister - unregister a blkcg policy. @pol: blkcg policy to unregister. Undo blkcg_policy_register(@pol). Might sleep.
hctx_tags_show
hctx_tags_bitmap_show
hctx_sched_tags_show
hctx_sched_tags_bitmap_show
check_opal_support
clean_opal_dev
opal_secure_erase_locking_range
opal_erase_locking_range
opal_enable_disable_shadow_mbr
opal_set_mbr_done
opal_write_shadow_mbr
opal_save
opal_add_user_to_lr
opal_reverttper
opal_lock_unlock
opal_take_ownership
opal_activate_lsp
opal_setup_locking_range
opal_set_new_pw
opal_activate_user
opal_unlock_from_suspend
opal_generic_read_write_table
key_reject_and_link - Negatively instantiate a key and link it into the keyring. @key: The key to instantiate. @timeout: The timeout on the negative key. @error: The error to return when the key is hit.
__key_link_end - Finish linking a key into a keyring. Must be called with __key_link_begin() having been called.
join_session_keyring - Join the named keyring as the session keyring if possible, else attempt to create a new one of that name and join that.
construct_alloc_key - Allocate a new key in under-construction state and attempt to link it in to the requested keyring. May return a key that's already under construction instead if there was a race between two threads calling request_key().
big_key_crypt - Encrypt/decrypt big_key data.
selinux_set_mnt_opts - Allow filesystems with binary mount data to explicitly set mount point labeling information.
selinux_sb_clone_mnt_opts
sel_open_policy
sel_write_load
sel_read_bool
sel_write_bool
sel_commit_bools_write
smk_ipv6_port_label - Smack port access table management. @sock: socket. @address: address. Create or update the port list entry.
smack_d_instantiate - Make sure the blob is correct on an inode. @opt_dentry: dentry where inode will be attached. @inode: the object. Set the inode's security blob if it hasn't been done already.
smk_import_entry - import a label, return the list entry. @string: a text string that might be a Smack label. @len: the maximum size, or zero if it is NULL terminated.
smk_set_access - add a rule to the rule list or replace an old rule. @srp: the rule to add or replace. @rule_list: the list of rules. @rule_lock: the rule list lock. Looks through the current subject/object/access list for the subject/object pair and ...
smk_set_cipso - do the work for write() for cipso and cipso2. @file: file pointer, not actually used. @buf: where to get the data from. @count: bytes sent. @ppos: where to start. @format: /smack/cipso or /smack/cipso2.
smk_write_net4addr - write() for /smack/netlabel. @file: file pointer, not actually used. @buf: where to get the data from. @count: bytes sent. @ppos: where to start. Accepts only one net4addr per write call.
smk_write_net6addr - write() for /smack/netlabel. @file: file pointer, not actually used. @buf: where to get the data from. @count: bytes sent. @ppos: where to start. Accepts only one net6addr per write call.
smk_write_direct - write() for /smack/direct. @file: file pointer, not actually used. @buf: where to get the data from. @count: bytes sent. @ppos: where to start. Returns the number of bytes written or an error code, as appropriate.
smk_write_mapped - write() for /smack/mapped. @file: file pointer, not actually used. @buf: where to get the data from. @count: bytes sent. @ppos: where to start. Returns the number of bytes written or an error code, as appropriate.
smk_read_ambient - read() for /smack/ambient. @filp: file pointer, not actually used. @buf: where to put the result. @cn: maximum to send along. @ppos: where to start. Returns the number of bytes read or an error code, as appropriate.
smk_write_ambient - write() for /smack/ambient. @file: file pointer, not actually used. @buf: where to get the data from. @count: bytes sent. @ppos: where to start. Returns the number of bytes written or an error code, as appropriate.
smk_write_onlycap - write() for smackfs/onlycap. @file: file pointer, not actually used. @buf: where to get the data from. @count: bytes sent. @ppos: where to start. Returns the number of bytes written or an error code, as appropriate.
smk_write_revoke_subj - write() for /smack/revoke-subject. @file: file pointer. @buf: data from user space. @count: bytes sent. @ppos: where to start - must be 0.
tomoyo_assign_profile - Create a new profile. @ns: Pointer to "struct tomoyo_policy_namespace". @profile: Profile number to create. Returns pointer to "struct tomoyo_profile" on success, NULL otherwise.
tomoyo_delete_domain - Delete a domain. @domainname: The name of the domain. Returns 0 on success, negative value otherwise. Caller holds tomoyo_read_lock().
tomoyo_read_control - read() for the /sys/kernel/security/tomoyo/ interface. @head: Pointer to "struct tomoyo_io_buffer". @buffer: Pointer to buffer to write to. @buffer_len: Size of @buffer. Returns bytes read on success, negative value otherwise.
tomoyo_write_control - write() for the /sys/kernel/security/tomoyo/ interface. @head: Pointer to "struct tomoyo_io_buffer". @buffer: Pointer to buffer to read from. @buffer_len: Size of @buffer. Returns @buffer_len on success, negative value otherwise.
tomoyo_commit_condition - Commit "struct tomoyo_condition". @entry: Pointer to "struct tomoyo_condition". Returns pointer to "struct tomoyo_condition" on success, NULL otherwise. This function merges duplicated entries. This function returns NULL if ...
tomoyo_update_policy - Update an entry for exception policy. @new_entry: Pointer to "struct tomoyo_acl_info". @size: Size of @new_entry in bytes. @param: Pointer to "struct tomoyo_acl_param". @check_duplicate: Callback function to find a duplicated entry.
tomoyo_update_domain - Update an entry for domain policy. @new_entry: Pointer to "struct tomoyo_acl_info". @size: Size of @new_entry in bytes. @param: Pointer to "struct tomoyo_acl_param". @check_duplicate: Callback function to find a duplicated entry.
tomoyo_assign_namespace - Create a new namespace. @domainname: Name of the namespace to create. Returns pointer to "struct tomoyo_policy_namespace" on success, NULL otherwise. Caller holds tomoyo_read_lock().
tomoyo_assign_domain - Create a domain or a namespace. @domainname: The name of the domain. @transit: True if transit to domain found or created. Returns pointer to "struct tomoyo_domain_info" on success, NULL otherwise. Caller holds tomoyo_read_lock().
tomoyo_struct_used_by_io_buffer - Check whether the list element is used by /sys/kernel/security/tomoyo/ users or not. @element: Pointer to "struct list_head". Returns true if @element is used by /sys/kernel/security/tomoyo/ users, false otherwise.
tomoyo_name_used_by_io_buffer - Check whether the string is used by /sys/kernel/security/tomoyo/ users or not. @string: String to check. Returns true if @string is used by /sys/kernel/security/tomoyo/ users, false otherwise.
tomoyo_try_to_gc - Try to kfree() an entry. @type: One of the values in "enum tomoyo_policy_id". @element: Pointer to "struct list_head". Returns nothing. Caller holds the tomoyo_policy_lock mutex.
tomoyo_collect_entry - Try to kfree() deleted elements. Returns nothing.
tomoyo_gc_thread - Garbage collector thread function. @unused: Unused. Returns 0.
tomoyo_get_group - Allocate memory for "struct tomoyo_path_group"/"struct tomoyo_number_group". @param: Pointer to "struct tomoyo_acl_param". @idx: Index number. Returns pointer to "struct tomoyo_group" on success, NULL otherwise.
tomoyo_get_name - Allocate permanent memory for string data. @name: The string to store into the permanent memory. Returns pointer to "struct tomoyo_path_info" on success, NULL otherwise.
ns_revision_read
ns_revision_poll
ns_mkdir_op
ns_rmdir_op
__aafs_ns_rmdir - Requires: @ns->lock held.
__aafs_ns_mkdir - Requires: @ns->lock held.
__next_ns - find the next namespace to list. @root: root namespace to stop search at (NOT NULL). @ns: current ns position (NOT NULL). Find the next namespace from @ns under @root and handle all locking needed while switching the current namespace.
p_stop - stop depth-first traversal. @f: seq_file we are filling. @p: the last profile written. Release all locking done by p_start/p_next on the namespace tree.
aa_create_aafs - create the apparmor security filesystem. Dentries created here are released by aa_destroy_aafs. Returns: error on failure.
aa_new_null_profile - create or find a null-X learning profile. @parent: profile that caused this profile to be created (NOT NULL). @hat: true if the null-learning profile is a hat. @base: name to base the null profile off of. @gfp: type of allocation.
aa_replace_profiles - replace profile(s) on the profile list. @policy_ns: namespace load is occurring on. @label: label that is attempting to load/replace policy. @mask: permission mask. @udata: serialized data stream (NOT NULL). Unpack and replace a profile.
aa_remove_profiles - remove profile(s) from the system. @policy_ns: namespace the remove is being done from. @subj: label attempting to remove policy. @fqname: name of the profile or namespace to remove (NOT NULL). @size: size of the name. Remove a profile or …
do_loaddata_free - needs to take the ns mutex lock, which is NOT safe in most places where put_loaddata is called, so freeing is delayed.
__aa_create_ns
aa_prepare_ns - find an existing or create a new namespace of @name. @parent: ns to treat as parent. @name: the namespace to find or add (NOT NULL). Returns: refcounted namespace or PTR_ERR if failed to create one.
destroy_ns - remove everything contained by @ns. @ns: namespace to have its contents removed (NOT NULL).
__aa_labelset_update_subtree - update all labels with a stale component. @ns: ns to start update at (NOT NULL). Requires: @ns lock be held. Invalidates labels based on @p in @ns and any child namespaces.
handle_policy_update
safesetid_file_read
devcgroup_online - initializes devcgroup's behavior and exceptions based on the parent's. @css: css getting online. Returns 0 in case of success, error code otherwise.
devcgroup_offline
devcgroup_access_write
ima_write_policy
ima_add_template_entry - Add template entry to the measurement list and hash table, and extend the PCR. On systems which support carrying the IMA measurement list across kexec, maintain the total memory size required for serializing the binary_runtime_measurements.
ima_restore_measurement_entry
ima_check_last_writer
process_measurement
init_desc
get_tree_bdev - Get a superblock based on a single block device. @fc: The filesystem context holding the parameters. @fill_super: Helper to initialise a new superblock.
mount_bdev
chrdev_show
__register_chrdev_region - Register a single major with a specified minor range. If major == 0 this function will dynamically allocate an unused major. If major > 0 this function will attempt to reserve the range of minors with the given major.
__unregister_chrdev_region
prepare_bprm_creds - Prepare credentials and lock ->cred_guard_mutex. install_exec_creds() commits the new creds and drops the lock. Or, if exec fails before that, free_bprm() should release ->cred and unlock (see the lock-across-functions sketch after this list).
free_bprm
install_exec_creds - install the new credentials for this executable.
pipe_unlock
__pipe_unlock
unlock_rename
__d_unalias - This helper attempts to cope with remotely renamed directories. It assumes that the caller is already holding … Note: if ever the locking in lock_rename() changes, then please remember to update this too.
__f_unlock_pos
SYSCALL_DEFINE3 - Create a kernel mount representation for a new, prepared superblock (specified by fs_fd) and attach it to an open_tree-like file descriptor.
seq_read - ->read() method for sequential files. @file: the file to read from. @buf: the buffer to read to. @size: the maximum number of bytes to read. @ppos: the current position in the file. Ready-made ->f_op->read().
seq_lseek - ->llseek() method for sequential files. @file: the file in question. @offset: new position. @whence: 0 for absolute, 1 for relative position. Ready-made ->f_op->llseek().
simple_attr_read - read from the buffer that is filled with the get function.
simple_attr_write - interpret the buffer as a number to call the set function with.
wait_sb_inodes - The @s_sync_lock is used to serialise concurrent sync operations, to avoid lock contention problems with concurrent wait_sb_inodes() calls: concurrent callers block on the s_sync_lock rather than doing contending walks (see the serialisation sketch after this list).
fscontext_read - Allow the user to read back any error, warning or informational messages.
freeze_bdev - lock a filesystem and force it into a consistent state. @bdev: blockdevice to lock. If a superblock is found on this device, we take the s_umount semaphore on it to make sure nobody unmounts until the snapshot creation is done.
thaw_bdev - unlock filesystem. @bdev: blockdevice to unlock. @sb: associated superblock. Unlocks the filesystem and marks it writeable again after freeze_bdev().
revalidate_disk - wrapper for a lower-level driver's revalidate_disk call-back. @disk: struct gendisk to be revalidated. This routine is a wrapper for lower-level drivers' revalidate_disk call-backs. It is used to do common pre and post operations needed …
__blkdev_get - bd_mutex locking: mutex_lock(part->bd_mutex), then mutex_lock_nested(whole->bd_mutex, 1) (see the mutex_lock_nested() sketch after this list).
blkdev_get - open a block device. @bdev: block_device to open. @mode: FMODE_* mask. @holder: exclusive holder identifier. Open @bdev with @mode. If @mode includes %FMODE_EXCL, @bdev is opened with exclusive access. Specifying %FMODE_EXCL with %NULL …
__blkdev_put
blkdev_put
iterate_bdevs
fsnotify_destroy_mark
fsnotify_add_mark
fsnotify_clear_marks_by_group - Clear any marks in a group with the given type mask.
show_fdinfo
dnotify_flush - Called every time a file is closed. Looks first for a dnotify mark on the inode; if one is found, run through all of the ->dn structures attached to that mark for one relevant to this process closing the file, and remove that dnotify_struct.
fcntl_dirnotify - When a process calls fcntl to attach a dnotify watch to a directory it ends up here. Allocate both a mark for fsnotify to add and a dnotify_struct to be attached to the fsnotify_mark.
inotify_update_watch
fanotify_remove_mark
fanotify_add_mark
ep_scan_ready_list - Scans the ready list in a way that makes it possible for the scan code to call f_op->poll(). Also allows for O(NumReady) performance. @ep: Pointer to the epoll private data structure. @sproc: Pointer to the scan callback.
ep_free
ep_show_fdinfo
eventpoll_release_file - Called from eventpoll_release() to unlink files from the eventpoll interface. This facility is needed to correctly clean up files that are closed without being removed from the eventpoll interface.
ep_loop_check_proc - Callback function to be passed to the @ep_call_nested() API, to verify that adding an epoll file inside another epoll structure does not violate the constraints in terms of closed loops or too-deep chains (which can …).
SYSCALL_DEFINE4 - Implements the controller interface for the eventpoll file that enables the insertion/removal/change of file descriptors inside the interest set.
aio_migratepage
ioctx_alloc - Allocates and initializes an ioctx. Returns an ERR_PTR if it failed.
aio_read_events_ring - Pull an event off of the ioctx's event ring. Returns the number of events fetched.
io_iopoll_reap_events - We can't just wait for polled events to come to us; we have to actively find and complete them.
io_iopoll_check
io_issue_sqe
io_sq_thread
io_ring_ctx_wait_and_kill
SYSCALL_DEFINE6
__io_uring_register
SYSCALL_DEFINE4
fscrypt_initialize - allocate major buffers for fs encryption.
add_master_key
wait_on_dquot - End of list functions needing dq_list_lock.
dquot_acquire - Read dquot from disk and allocate space for it.
dquot_commit - Write dquot to disk.
dquot_release - Release dquot.
get_dcookie - This is the main kernel-side routine that retrieves the cookie value for a dentry/vfsmnt pair.
do_lookup_dcookie - And here is where the userspace process can look up the cookie value to retrieve the path.
dcookie_register
dcookie_unregister
install_ldt
mm_access
oom_killer_disable - disable the OOM killer. @timeout: maximum timeout to wait for OOM victims, in jiffies. Forces all page allocations to fail rather than trigger the OOM killer.
exp_funnel_lock - Funnel-lock acquisition for expedited grace periods.
rcu_exp_wait_wake - Wait for the current expedited grace period to complete, and then wake up everyone who piggybacked on the just-completed expedited grace period. Also update all the ->exp_seq_rq counters as needed in order to avoid counter-wrap problems.
synchronize_rcu_expedited - Brute-force RCU grace period. Wait for an RCU grace period, but expedite it.
bio_find_or_create_slab
__key_instantiate_and_link - Instantiate a key and link it into the target keyring atomically. Must be called with the target keyring's semaphore write-locked. The target key's semaphore need not be locked, as instantiation is serialised by key_construction_mutex.
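Several rows above lean on the same basic pattern this report documents: a mutex serialising a critical section, released with mutex_unlock(). wait_sb_inodes(), for instance, funnels every concurrent sync through s_sync_lock. A minimal sketch of that shape; the lock and function names are illustrative stand-ins, not the kernel's own:

    #include <linux/mutex.h>

    static DEFINE_MUTEX(example_sync_lock);    /* stands in for s_sync_lock */

    /* Concurrent callers block on the mutex instead of doing contending
     * walks of the shared state, as described for wait_sb_inodes(). */
    static void example_sync(void)
    {
            mutex_lock(&example_sync_lock);
            /* ... walk the shared inode state while serialised ... */
            mutex_unlock(&example_sync_lock);
    }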
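Rows such as tomoyo_try_to_gc, __aafs_ns_rmdir and __aa_labelset_update_subtree state a "caller holds the lock" contract only in a comment. A minimal sketch of enforcing such a contract with lockdep_assert_held(); the entry type and lock are hypothetical, and the check only fires on a lockdep-enabled kernel:

    #include <linux/mutex.h>
    #include <linux/lockdep.h>
    #include <linux/list.h>
    #include <linux/slab.h>

    static DEFINE_MUTEX(example_policy_lock);  /* stands in for tomoyo_policy_lock */

    struct example_entry {
            struct list_head list;
    };

    /* Like tomoyo_try_to_gc(), this helper must only run with the policy
     * mutex held; lockdep_assert_held() warns on lockdep kernels if not. */
    static void example_try_to_gc(struct example_entry *e)
    {
            lockdep_assert_held(&example_policy_lock);
            list_del(&e->list);
            kfree(e);
    }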
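The prepare_bprm_creds / install_exec_creds / free_bprm rows show a mutex held across function boundaries: one function acquires and deliberately returns with the lock held, and exactly one of two later functions releases it, depending on whether exec succeeds. A minimal sketch of that shape with hypothetical names (the real lock is ->cred_guard_mutex):

    #include <linux/mutex.h>
    #include <linux/errno.h>

    struct example_ctx {
            struct mutex guard;        /* stands in for ->cred_guard_mutex */
    };

    /* Acquire and deliberately return with the mutex still held. */
    static int example_prepare(struct example_ctx *ctx)
    {
            if (mutex_lock_interruptible(&ctx->guard))
                    return -ERESTARTNOINTR;
            /* ... build the new credentials ... */
            return 0;
    }

    /* Success path: commit, then drop the lock taken in example_prepare(). */
    static void example_commit(struct example_ctx *ctx)
    {
            /* ... install the new credentials ... */
            mutex_unlock(&ctx->guard);
    }

    /* Failure path: undo, then drop the same lock. */
    static void example_abort(struct example_ctx *ctx)
    {
            /* ... free the half-built state ... */
            mutex_unlock(&ctx->guard);
    }

The asymmetry is the point: whichever of the two tails runs, the lock taken in example_prepare() is dropped exactly once.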
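The __blkdev_get row records a two-level bd_mutex order: the partition's mutex first, then the whole disk's, taken with mutex_lock_nested() so lockdep files the second same-class acquisition under a separate subclass instead of reporting a self-deadlock. A minimal sketch with a hypothetical device type:

    #include <linux/mutex.h>

    struct example_bdev {
            struct mutex bd_mutex;
    };

    /* Partition first, then the whole disk with subclass 1, mirroring
     * mutex_lock(part->bd_mutex) followed by
     * mutex_lock_nested(whole->bd_mutex, 1) in __blkdev_get(). */
    static void example_open(struct example_bdev *part, struct example_bdev *whole)
    {
            mutex_lock(&part->bd_mutex);
            mutex_lock_nested(&whole->bd_mutex, 1); /* == SINGLE_DEPTH_NESTING */
            /* ... operate on both levels ... */
            mutex_unlock(&whole->bd_mutex);
            mutex_unlock(&part->bd_mutex);
    }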