Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: include/linux/mm.h    Create Date: 2022-07-28 05:43:30
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name: page_to_nid

Proto: static inline int page_to_nid(const struct page *page)

Type: int

Parameter:

Type                   Name
const struct page *    page
static inline int page_to_nid(const struct page *page)
{
	struct page *p = (struct page *)page;

	/* PF_POISONED_CHECK() asserts the page is not poisoned; the node
	 * id is then read from the upper bits of page->flags. */
	return (PF_POISONED_CHECK(p)->flags >> NODES_PGSHIFT) & NODES_MASK;
}
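The node id is recovered from page->flags with a plain shift and mask. The standalone sketch below (ordinary user-space C, not kernel code; the DEMO_* constants are made-up stand-ins for NODES_PGSHIFT and NODES_MASK, whose real values depend on the kernel configuration) demonstrates the same extraction technique:

#include <stdio.h>

/* Made-up layout: node id stored in the top DEMO_NODES_SHIFT bits,
 * mirroring how NODES_PGSHIFT/NODES_MASK carve up page->flags. */
#define DEMO_NODES_SHIFT   2	/* room for 4 nodes in this sketch */
#define DEMO_NODES_MASK    ((1UL << DEMO_NODES_SHIFT) - 1)
#define DEMO_NODES_PGSHIFT (64 - DEMO_NODES_SHIFT)

int main(void)
{
	/* Pretend the page sits on node 3: pack 3 into the top bits. */
	unsigned long flags = 3UL << DEMO_NODES_PGSHIFT;

	/* Same shift-and-mask as page_to_nid(). */
	unsigned long nid = (flags >> DEMO_NODES_PGSHIFT) & DEMO_NODES_MASK;

	printf("nid = %lu\n", nid);	/* prints: nid = 3 */
	return 0;
}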
Caller:

Name: Describe
reclaim_pages
list_lru_add
list_lru_del
new_non_cma_page
do_numa_page
change_pte_range
show_numa_info
move_freepages: Move the free pages in a range to the free lists of the requested type. Note that start_page and end_page are not aligned on a pageblock boundary. If alignment is required, use move_freepages_block().
enqueue_huge_page
update_and_free_page
__free_huge_page
alloc_fresh_huge_page: Common helper to allocate a fresh hugetlb page. All specific allocators should use this function to get new hugetlb pages.
dissolve_free_huge_page: Dissolve a given free hugepage into free buddy pages. Does nothing for in-use hugepages and non-hugepages. Returns -EBUSY if it fails to dissolve the free hugepage or the hugepage is in use.
alloc_surplus_huge_page: Allocate a fresh surplus page from the page allocator.
gather_bootmem_prealloc: Put bootmem huge pages into the standard lists after mem_map is up.
move_hugetlb_state
queue_pages_required: Check if the page's nid is in qp->nmask. If MPOL_MF_INVERT is set in qp->flags, check if the nid is in the inverse of qp->nmask.
lookup_node
alloc_page_interleave: Allocate a page in interleaved policy. Own path because it needs to do special accounting.
mpol_misplaced: Check whether the current page's node is valid in policy. Looks up the current policy node id for vma,addr and compares it to the page's node id.
slob_alloc: Entry point into the SLOB allocator.
unstable_tree_search_insert: Search for an identical page, else insert rmap_item into the unstable tree. Searches the unstable tree for a page identical to the page currently being scanned; if none is found, rmap_item is inserted as a new object.
cache_free_pfmemalloc
cache_free_alien
cache_grow_begin: Grow (by 1) the number of slabs within a cache. This is called by kmem_cache_alloc() when there are no active objs left in a cache.
cache_grow_end
fallback_alloc: Fallback function if there was no memory available and no objects on a certain node and fallback is permitted. First we scan all the available nodes for available objects; if that fails, we perform an allocation without specifying a node.
allocate_slab
discard_slab
deactivate_slab: Remove the cpu slab.
node_match: Check if the objects in a per-cpu structure fit NUMA locality expectations (a sketch of this caller pattern follows the list).
__slab_free: Slow path handling. This may still be called frequently since objects have a longer lifetime than the cpu slabs in most processing loads, so we still attempt to reduce cache line usage: just take the slab lock and free the item.
early_kmem_cache_node_alloc: No kmalloc_node yet, so do it by hand. We know that this is the first slab on the node for this slabcache, so no concurrent accesses are possible. Note that this function only works on the kmem_cache_node when allocating for the kmem_cache_node cache.
add_page_for_migration: Resolves the given address to a struct page, isolates it from the LRU and puts it on the given pagelist.
do_pages_stat_array: Determine the nodes of an array of pages and store them in an array of status.
get_deferred_split_queue
do_huge_pmd_wp_page_fallback
do_huge_pmd_numa_page: NUMA hinting page fault entry point for trans huge pmds.
split_huge_page_to_list: Splits a huge page into normal pages. @page can point to any subpage of the huge page to split; the split doesn't change the position of @page. The caller must hold a pin on @page, otherwise the split fails with -EBUSY. The huge page must be locked.
deferred_split_huge_page
khugepaged_scan_pmd
mem_cgroup_page_nodeinfo
soft_limit_tree_from_page
mem_cgroup_try_charge_delay
shake_page: When an unknown page type is encountered, drain as many buffers as possible in the hope of turning the page into an LRU or free page, which we can handle.
new_page
kmemleak_scan: Scan data sections and all referenced memory blocks allocated via the kernel's standard allocators. This function must be called with the scan_mutex held.
lookup_page_ext
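
As the node_match entry above suggests, most callers use page_to_nid() to test a page against a desired NUMA node. The fragment below is a minimal kernel-context sketch of that pattern; the helper name page_on_node() is hypothetical and not part of the kernel tree:

/* Hypothetical helper (illustration only, not in the kernel tree):
 * the common caller pattern of comparing page_to_nid() against a
 * requested node, as node_match() in mm/slub.c does.
 * NUMA_NO_NODE means "no node preference", so any page matches. */
static inline bool page_on_node(const struct page *page, int node)
{
	return node == NUMA_NO_NODE || page_to_nid(page) == node;
}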