Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: arch/x86/include/asm/pgtable.h    Create Date: 2022-07-27 06:58:52
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Function name: pte_none

Prototype: static inline int pte_none(pte_t pte)

Return type: int

Parameters:

Type      Parameter name
pte_t     pte
721  Returns: the logical NOT of pte bitwise-ANDed with the complement of _PAGE_KNL_ERRATUM_MASK, i.e. !(pte & ~_PAGE_KNL_ERRATUM_MASK); the PTE is considered empty when no bits outside the erratum mask are set.
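The return value described above can be read directly as code. Below is a minimal standalone sketch of that check; the pte_t layout and the _PAGE_KNL_ERRATUM_MASK value (_PAGE_ACCESSED | _PAGE_DIRTY, the bits the Knights Landing erratum may set spuriously) are simplified stand-ins assumed for illustration, not quotes of the kernel source.

/* Standalone model of the pte_none() check: a PTE counts as "none" when
 * no bits are set outside _PAGE_KNL_ERRATUM_MASK.  Types and mask value
 * are simplified stand-ins, assumed here for illustration. */
#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t pte; } pte_t;

/* assumed: _PAGE_ACCESSED (bit 5) | _PAGE_DIRTY (bit 6) */
#define _PAGE_KNL_ERRATUM_MASK ((1ULL << 5) | (1ULL << 6))

static inline int pte_none(pte_t pte)
{
        /* ignore the erratum bits; any other bit means the PTE is populated */
        return !(pte.pte & ~_PAGE_KNL_ERRATUM_MASK);
}

int main(void)
{
        pte_t cleared      = { .pte = 0 };
        pte_t erratum_only = { .pte = 1ULL << 5 };            /* only an ignored bit */
        pte_t mapped       = { .pte = 0x8000000012345067ULL };

        /* prints: 1 1 0 */
        printf("%d %d %d\n", pte_none(cleared), pte_none(erratum_only), pte_none(mapped));
        return 0;
}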
Callers (a usage sketch of the common caller pattern follows this list)
Name            Description
ioremap_pte_range
follow_page_pte
get_gate_page
copy_pte_range
zap_pte_range
insert_page: This is the old fallback for page remapping. For historical reasons, it only allows reserved pages. Only old drivers should use this, and they needed to mark their pages reserved for the old functions anyway.
insert_pfn
remap_pte_range: maps a range of physical memory into the requested pages. The old mappings are removed. Any references to nonexistent pages result in null mappings (currently treated as "copy-on-access")
apply_to_pte_range
do_anonymous_page: We enter with non-exclusive mmap_sem (to exclude vma changes, but allow concurrent faults), and pte mapped but not yet locked. We return with mmap_sem still held, but pte unmapped and unlocked.
alloc_set_pte: setup new PTE entry for given page and add reverse page mapping
do_fault_around: do_fault_around() tries to map a few pages around the fault address. The hope is that the pages will be needed soon and this will lower the number of faults to handle. It uses vm_ops->map_pages() to map the pages, which skips the page if it's
do_fault: We enter with non-exclusive mmap_sem (to exclude vma changes, but allow concurrent faults)
handle_pte_fault: These routines also need to handle stuff like marking pages dirty and/or accessed for architectures that don't do it in hardware (most RISC architectures)
mincore_pte_range
move_ptes
page_vma_mapped_walk: check if @pvmw->page is mapped in @pvmw->vma at @pvmw->address. @pvmw: pointer to struct page_vma_mapped_walk; page, vma, address and flags must be set, pmd, pte and ptl must be NULL. Returns true if the page is mapped in the vma
vunmap_pte_range: Page table manipulation functions
vmap_pte_range
madvise_cold_or_pageout_pte_range
madvise_free_pte_range
swap_vma_readahead: swap in pages in the hope we need them soon. @entry: swap entry of this memory; @gfp_mask: memory allocation flags; @vmf: fault information. Returns the struct page for entry and addr, after queueing swapin. Primitive swap readahead code
vmemmap_pte_populate
shadow_mapped
kasan_free_pte
do_huge_pmd_wp_page_fallback
__split_huge_zero_page_pmd
__split_huge_pmd_locked
release_pte_pages
__collapse_huge_page_isolate
__collapse_huge_page_copy
khugepaged_scan_pmd
get_mctgt_type: get target type of moving charge. @vma: the vma the pte to be checked belongs to; @addr: the address corresponding to the pte to be checked; @ptent: the pte to be checked; @target: the pointer where the target page or swap ent will be stored (can be
mcopy_atomic_pte
mfill_zeropage_pte
pte_to_hmm_pfn_flags
hmm_vma_handle_pte
userfaultfd_must_wait: Verify the pagetables are still not ok after having registered into the fault_pending_wqh to avoid userland having to UFFDIO_WAKE any userfault that has already been resolved, if userfaultfd_read and UFFDIO_COPY|ZEROPAGE are being run simultaneously on
huge_pte_none
is_swap_pte: check whether a pte points to a swap entry
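Most of the callers listed above use pte_none() the same way: walk a range of PTE slots and skip the empty ones before copying, unmapping or faulting them in. The sketch below models that pattern with a plain array instead of a real page table; pte_t and pte_none() are the simplified stand-ins from the earlier sketch, and count_present() is a hypothetical helper, not kernel code.

/* Sketch of the common caller pattern: skip slots that pte_none() reports
 * as empty.  pte_t, the mask and count_present() are illustrative
 * assumptions, not the kernel's definitions. */
#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t pte; } pte_t;

#define _PAGE_KNL_ERRATUM_MASK ((1ULL << 5) | (1ULL << 6))

static inline int pte_none(pte_t pte)
{
        return !(pte.pte & ~_PAGE_KNL_ERRATUM_MASK);
}

/* count how many of the nr slots actually map something */
static unsigned int count_present(const pte_t *table, unsigned int nr)
{
        unsigned int i, present = 0;

        for (i = 0; i < nr; i++) {
                if (pte_none(table[i]))
                        continue;       /* empty slot: nothing to copy/unmap */
                present++;
        }
        return present;
}

int main(void)
{
        pte_t table[4] = {
                { .pte = 0 },                   /* never populated          */
                { .pte = 0x12345067ULL },       /* present mapping          */
                { .pte = 1ULL << 5 },           /* erratum bit only: "none" */
                { .pte = 0x23456067ULL },       /* present mapping          */
        };

        printf("present slots: %u\n", count_present(table, 4));  /* prints 2 */
        return 0;
}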