Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/slob.c  Create Date: 2022-07-28 15:36:03
Last Modify: 2022-05-20 09:26:42  Copyright © Brick

Name: slob_alloc - entry point into the slob allocator

Proto: static void *slob_alloc(size_t size, gfp_t gfp, int align, int node, int align_offset)

Type: void *

Parameter:

Type      Parameter Name
size_t    size
gfp_t     gfp
int       align
int       node
int       align_offset
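The align/align_offset pair means the address align_offset bytes into the returned block must be aligned, not necessarily the block start. A userspace sketch of that delta computation (illustrative names, not the kernel's; slob_page_alloc() performs the equivalent adjustment while scanning a page's free list):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* How many bytes must be skipped so that (addr + align_offset) becomes
 * align-aligned.  align must be a power of two. */
static size_t align_delta(uintptr_t addr, size_t align, size_t align_offset)
{
    uintptr_t target = addr + align_offset;
    uintptr_t aligned = (target + align - 1) & ~(uintptr_t)(align - 1);
    return (size_t)(aligned - target);
}
```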
306  slob_t *b = NULL
310  If size < SLOB_BREAK1 Then slob_list = free_slob_small (all partially free slob pages go on one of these size-classed lists)
312  Else if size < SLOB_BREAK2 Then slob_list = free_slob_medium
314  Else slob_list = free_slob_large
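The three-way dispatch above can be sketched in userspace C; SLOB_BREAK1 and SLOB_BREAK2 are 256 and 1024 bytes in mm/slob.c (stand-in definitions here, not the kernel headers):

```c
#include <assert.h>
#include <stddef.h>

#define SLOB_BREAK1 256
#define SLOB_BREAK2 1024

enum slob_list_id { SLOB_SMALL, SLOB_MEDIUM, SLOB_LARGE };

/* Pick the size-classed list that will be scanned for a partially
 * free page, mirroring the top of slob_alloc(). */
static enum slob_list_id pick_slob_list(size_t size)
{
    if (size < SLOB_BREAK1)
        return SLOB_SMALL;   /* free_slob_small */
    else if (size < SLOB_BREAK2)
        return SLOB_MEDIUM;  /* free_slob_medium */
    return SLOB_LARGE;       /* free_slob_large */
}
```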
317  spin_lock_irqsave(&slob_lock, flags) (slob_lock protects all slob allocator structures)
320  bool page_removed_from_list = false
326  For each page sp on slob_list : If node != NUMA_NO_NODE && page_to_nid(sp) != node Then Continue
330  If sp->units < SLOB_UNITS(size) Then Continue (not enough free units on this page)
333  b = slob_page_alloc(sp, size, align, align_offset, &page_removed_from_list) (allocate a slob block within the given slob page sp)
334  If Not b Then Continue
343  If Not page_removed_from_list Then
349  If Not list_is_first(&sp->slab_list, slob_list) Then list_rotate_to_front(&sp->slab_list, slob_list) (rotate the list so this partially used page becomes the new front and is found first on the next scan)
352  Break
354  spin_unlock_irqrestore(&slob_lock, flags)
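The sp->units < SLOB_UNITS(size) test above rejects a page without scanning its free list. SLOB accounts free space in units of sizeof(slob_t), which is 2 bytes with the default s16 slobidx_t; a sketch assuming that 2-byte unit:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for the kernel's SLOB unit size (sizeof(slob_t), 2 bytes
 * with the default s16 slobidx_t). */
#define SLOB_UNIT 2
/* Round a byte size up to whole SLOB units. */
#define SLOB_UNITS(size) (((size) + SLOB_UNIT - 1) / SLOB_UNIT)

/* A page is worth scanning only if its free units cover the request. */
static int page_may_fit(int free_units, size_t size)
{
    return free_units >= (int)SLOB_UNITS(size);
}
```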
357  If Not b Then
358  b = slob_new_pages(gfp & ~__GFP_ZERO, 0, node)
359  If Not b Then Return NULL
361  sp = virt_to_page(b) (virt_to_page(kaddr) returns a valid pointer if and only if virt_addr_valid(kaddr) returns true)
362  __SetPageSlab(sp)
364  spin_lock_irqsave(&slob_lock, flags)
365  sp->units = SLOB_UNITS(PAGE_SIZE)
366  sp->freelist = b (the whole page is one free object)
367  INIT_LIST_HEAD(&sp->slab_list)
368  set_slob(b, SLOB_UNITS(PAGE_SIZE), b + SLOB_UNITS(PAGE_SIZE)) (encode the size and next-free info into the free slob block)
369  set_slob_page_free(sp, slob_list)
370  b = slob_page_alloc(sp, size, align, align_offset, &_unused) (allocate the block from the freshly added page; it must succeed)
371  BUG_ON(!b)
372  spin_unlock_irqrestore(&slob_lock, flags)
374  If unlikely(gfp & __GFP_ZERO) Then memset(b, 0, size)
376  Return b
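The fallback path above (lines 357-372) can be sketched as a userspace mock (mock names and flags, not kernel APIs): when no partially free page fits, get a fresh page, treat it as one free block, and note that __GFP_ZERO is masked off before getting the page so that only the returned block of `size` bytes is zeroed afterwards.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

#define MOCK_PAGE_SIZE 4096
#define MOCK_GFP_ZERO  0x1u

/* Userspace sketch of slob_alloc()'s fallback and zeroing behavior. */
static void *mock_slob_alloc(size_t size, unsigned int gfp)
{
    if (size > MOCK_PAGE_SIZE)
        return NULL;
    void *b = malloc(MOCK_PAGE_SIZE);  /* stands in for slob_new_pages() */
    if (!b)
        return NULL;
    if (gfp & MOCK_GFP_ZERO)
        memset(b, 0, size);            /* zero only the requested size */
    return b;
}
```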
Caller
Name                 Describe
__do_kmalloc_node    End of slob allocator proper. Begin kmem_cache_alloc and kmalloc frontend.
slob_alloc_node