Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: lib/sbitmap.c    Create Date: 2022-07-28 07:22:57
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name: sbitmap_queue_show

Proto: void sbitmap_queue_show(struct sbitmap_queue *sbq, struct seq_file *m)

Return Type: void

Parameter:

Type                      Parameter Name
struct sbitmap_queue *    sbq
struct seq_file *         m
void sbitmap_queue_show(struct sbitmap_queue *sbq, struct seq_file *m)
{
	bool first;
	int i;

	sbitmap_show(&sbq->sb, m);	/* sb: the scalable bitmap itself */

	seq_puts(m, "alloc_hint={");
	first = true;
	for_each_possible_cpu(i) {
		if (!first)
			seq_puts(m, ", ");
		first = false;
		/* alloc_hint: per-cpu cache of the last successfully
		 * allocated or freed bit; per-cpu so multiple users stick
		 * to different cachelines until the map is exhausted */
		seq_printf(m, "%u", *per_cpu_ptr(sbq->alloc_hint, i));
	}
	seq_puts(m, "}\n");

	/* wake_batch: number of bits which must be freed before any
	 * waiters are woken up */
	seq_printf(m, "wake_batch=%u\n", sbq->wake_batch);
	/* wake_index: next wait queue in ws to wake up */
	seq_printf(m, "wake_index=%d\n", atomic_read(&sbq->wake_index));
	/* ws_active: count of currently active ws waitqueues */
	seq_printf(m, "ws_active=%d\n", atomic_read(&sbq->ws_active));

	seq_puts(m, "ws={\n");
	for (i = 0; i < SBQ_WAIT_QUEUES; i++) {
		struct sbq_wait_state *ws = &sbq->ws[i];

		/* wait_cnt: frees remaining before this queue is woken;
		 * waitqueue_active() locklessly tests for waiters and
		 * requires care, since incorrect usage can miss wakeups */
		seq_printf(m, "\t{.wait_cnt=%d, .wait=%s},\n",
			   atomic_read(&ws->wait_cnt),
			   waitqueue_active(&ws->wait) ? "active" : "inactive");
	}
	seq_puts(m, "}\n");

	/* round_robin: allocate bits in strict round-robin order */
	seq_printf(m, "round_robin=%d\n", sbq->round_robin);
	/* min_shallow_depth: minimum shallow depth which may be passed to
	 * sbitmap_queue_get_shallow() or __sbitmap_queue_get_shallow() */
	seq_printf(m, "min_shallow_depth=%u\n", sbq->min_shallow_depth);
}
Caller:

Name                        Describe
blk_mq_debugfs_tags_show