Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/memory.c    Create Date: 2022-07-27 16:10:51
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Function name: handle_pte_fault

Description: These routines also need to handle stuff like marking pages dirty and/or accessed for architectures that don't do it in hardware (most RISC architectures).

Function prototype: static vm_fault_t handle_pte_fault(struct vm_fault *vmf)

Return type: vm_fault_t

Parameters:

Type                 Name
struct vm_fault *    vmf
3976  If, unlikely (branch-prediction hint for the compiler), pmd_none(*vmf->pmd) is true (vmf->pmd points to the pmd entry matching the faulting address), then
3983      vmf->pte = NULL (vmf->pte points to the pte entry matching the address; it is NULL if the pte page table has not been allocated)
3984  Else
3986      If pmd_devmap_trans_unstable(vmf->pmd), return 0. (The ordering of these checks is important for pmds with _PAGE_DEVMAP set: checking pmd_trans_unstable() first would trip the bad_pmd() check inside pmd_none_or_trans_huge_or_clear_bad().)
3994      vmf->pte = pte_offset_map(vmf->pmd, vmf->address) (map the pte entry matching the faulting virtual address)
3995      vmf->orig_pte = *vmf->pte (value of the PTE at the time of the fault)
4005      barrier() (the "volatile" is due to gcc bugs)
4012  If vmf->pte is NULL (the pte page table has not been allocated)
4013      If vma_is_anonymous(vmf->vma) (vmf->vma is the target VMA), return do_anonymous_page(vmf). We enter with non-exclusive mmap_sem (to exclude vma changes but allow concurrent faults) and pte mapped but not yet locked; we return with mmap_sem still held, but pte unmapped and unlocked.
4015      Else return do_fault(vmf). We enter with non-exclusive mmap_sem (to exclude vma changes but allow concurrent faults).
4019  If !pte_present(vmf->orig_pte), return do_swap_page(vmf). We enter with non-exclusive mmap_sem (to exclude vma changes but allow concurrent faults) and pte mapped but not yet locked; we return with pte unmapped and unlocked, and with mmap_sem locked or unlocked in the same cases.
4022  If pte_protnone(vmf->orig_pte) && vma_is_accessible(vmf->vma), return do_numa_page(vmf). (Technically a PTE can be PROTNONE even when not doing NUMA balancing, but the only case the kernel cares about is NUMA balancing, and that is only ever set when the VMA is accessible; for PROT_NONE VMAs, the PTEs are not marked.)
4025  vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd) (page table lock; protects the pte page table if 'pte' is not NULL, otherwise the pmd)
4026  spin_lock(vmf->ptl) (acquire the spinlock)
4027  entry = vmf->orig_pte
4028  If, unlikely (branch-prediction hint for the compiler), !pte_same(*vmf->pte, entry), goto unlock
4030  If vmf->flags & FAULT_FLAG_WRITE (the fault was a write access)
4031      If !pte_write(entry), return do_wp_page(vmf). This routine handles present pages when users try to write to a shared page; it copies the page to a new address and decrements the shared-page counter for the old page. Note that this routine assumes the protection checks have been done.
4033      entry = pte_mkdirty(entry)
4035  entry = pte_mkyoung(entry)
4036  If ptep_set_access_flags(vmf->vma, vmf->address, vmf->pte, entry, vmf->flags & FAULT_FLAG_WRITE), then
4038      update_mmu_cache(vmf->vma, vmf->address, vmf->pte) (the x86 doesn't have any external MMU info: the kernel page tables contain all the necessary information)
4039  Else
4046      If vmf->flags & FAULT_FLAG_WRITE, flush_tlb_fix_spurious_fault(vmf->vma, vmf->address)
4049  unlock:
4050  pte_unmap_unlock(vmf->pte, vmf->ptl)
4051  Return 0
Callers

Name               Description
__handle_mm_fault  By the time we get here, we already hold the mm semaphore. The mmap_sem may have been released depending on flags and our return value. See filemap_fault() and __lock_page_or_retry().