Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/memory.c    Create Date: 2022-07-28 14:42:37
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name: do_swap_page

Description: We enter with non-exclusive mmap_sem (to exclude vma changes, but allow concurrent faults), and pte mapped but not yet locked. We return with pte unmapped and unlocked. We return with the mmap_sem locked or unlocked in the same cases as does filemap_fault().

Proto: vm_fault_t do_swap_page(struct vm_fault *vmf)

Type: vm_fault_t

Parameter:

Type                 Parameter Name
struct vm_fault *    vmf
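
For reference, here is an abridged sketch of the struct vm_fault members that do_swap_page() actually touches, based on the v5.5 definition in include/linux/mm.h. The field comments are the same ones quoted throughout the walkthrough below; members used only by other fault handlers are omitted.

        struct vm_fault {
                struct vm_area_struct *vma;     /* Target VMA */
                unsigned int flags;             /* FAULT_FLAG_xxx flags */
                unsigned long address;          /* Faulting virtual address */
                pmd_t *pmd;                     /* Pointer to pmd entry matching the 'address' */
                pte_t orig_pte;                 /* Value of PTE at the time of fault */
                struct page *page;              /* Set by ->fault handlers, e.g. for device-private pages */
                pte_t *pte;                     /* Pointer to pte entry matching the 'address';
                                                 * NULL if the page table hasn't been allocated */
                spinlock_t *ptl;                /* Page table lock; protects the pte page table
                                                 * if 'pte' is not NULL, otherwise the pmd */
                /* ... other members omitted ... */
        };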
2904  vma = vmf->vma (the target VMA)
2905  page = NULL
2910  exclusive = 0
2911  ret = 0
2913  If Not pte_unmap_same() (the PTE, which was read non-atomically, no longer matches orig_pte) Then Go to out
2916  entry = pte_to_swp_entry(vmf->orig_pte) (convert the arch-dependent PTE representation into an arch-independent swp_entry_t)
2917  If unlikely(non_swap_entry(entry)) Then
2918  If is_migration_entry(entry) Then wait for the concurrent migration to finish (migration_entry_wait())
2921  Else if is_device_private_entry(entry) Then
2923  ret = vmf->page->pgmap->ops->migrate_to_ram(vmf) (ask the owning driver to migrate the device-private page back to RAM)
2924  Else if is_hwpoison_entry(entry) Then ret = VM_FAULT_HWPOISON
2926  Else
2928  ret = VM_FAULT_SIGBUS
2930  Go to out
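
Lines 2916-2930 dispatch on swap entries that are not real swap-outs. A rough sketch of this branch as it reads in v5.5 mm/memory.c (reconstructed rather than quoted; migration_entry_wait(), device_private_entry_to_page() and print_bad_pte() are recalled from the source, not from the annotation above):

        entry = pte_to_swp_entry(vmf->orig_pte);
        if (unlikely(non_swap_entry(entry))) {
                if (is_migration_entry(entry)) {
                        /* another CPU is migrating this page: wait, then retry the fault */
                        migration_entry_wait(vma->vm_mm, vmf->pmd, vmf->address);
                } else if (is_device_private_entry(entry)) {
                        /* device-private memory: ask the owning driver to move it back to RAM */
                        vmf->page = device_private_entry_to_page(entry);
                        ret = vmf->page->pgmap->ops->migrate_to_ram(vmf);
                } else if (is_hwpoison_entry(entry)) {
                        ret = VM_FAULT_HWPOISON;
                } else {
                        print_bad_pte(vma, vmf->address, vmf->orig_pte, NULL);
                        ret = VM_FAULT_SIGBUS;
                }
                goto out;
        }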
2934  delayacct_set_flag(DELAYACCT_PF_SWAPIN)
2935  page = lookup_swap_cache(entry, vma, vmf->address)
2936  swapcache = page
2938  If Not page Then
2939  si = swp_swap_info(entry)
2946  If page (freshly allocated for the skip-swapcache fast path) Then initialize it and read it in synchronously
2953  Else read the page in through the swap cache with swapin_readahead(), as sketched below
2956  swapcache = page
2959  If Not page Then
2969  Go to unlock
2973  ret = VM_FAULT_MAJOR
2974  count_vm_event(PGMAJFAULT)
2975  count_memcg_event_mm(vma->vm_mm, PGMAJFAULT)
2976  Else if PageHWPoison(page) Then
2981  ret = VM_FAULT_HWPOISON
2982  delayacct_clear_flag(DELAYACCT_PF_SWAPIN)
2983  Go to out_release
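
Lines 2934-2983 find or read in the swapped-out page. The annotation skips the body of the "If Not page" branch; roughly, v5.5 distinguishes a skip-swapcache fast path for SWP_SYNCHRONOUS_IO devices from the ordinary readahead path. A sketch, assuming the v5.5 behavior (alloc_page_vma(), __swap_count(), swap_readpage() and lru_cache_add_anon() are not in the annotation above and are recalled from the source):

        delayacct_set_flag(DELAYACCT_PF_SWAPIN);
        page = lookup_swap_cache(entry, vma, vmf->address);
        swapcache = page;

        if (!page) {
                struct swap_info_struct *si = swp_swap_info(entry);

                if (si->flags & SWP_SYNCHRONOUS_IO && __swap_count(entry) == 1) {
                        /* fast path: single user, synchronous device, bypass the swap cache */
                        page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vmf->address);
                        if (page) {
                                __SetPageLocked(page);
                                __SetPageSwapBacked(page);
                                set_page_private(page, entry.val);
                                lru_cache_add_anon(page);
                                swap_readpage(page, true);
                        }
                } else {
                        /* ordinary path: read through the swap cache with readahead */
                        page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, vmf);
                        swapcache = page;
                }

                if (!page)
                        goto unlock;    /* simplified; the real code first retakes the pte lock, see lines 2960-2969 */

                /* had to read the page from the swap area: account a major fault */
                ret = VM_FAULT_MAJOR;
                count_vm_event(PGMAJFAULT);
                count_memcg_event_mm(vma->vm_mm, PGMAJFAULT);
        }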
2986  locked = lock_page_or_retry(page, vma->vm_mm, vmf->flags) (lock the page unless that would block and the caller indicated it can handle a retry; return value and mmap_sem implications depend on flags, see __lock_page_or_retry())
2988  delayacct_clear_flag(DELAYACCT_PF_SWAPIN)
2989  If Not locked Then
2990  ret |= VM_FAULT_RETRY
2991  Go to out_release
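
Before the page state can be examined the page must be locked, and lock_page_or_retry() may instead drop mmap_sem and ask the caller to retry the whole fault. A minimal sketch of lines 2986-2991:

        locked = lock_page_or_retry(page, vma->vm_mm, vmf->flags);

        delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
        if (!locked) {
                /* the caller allows dropping mmap_sem: report a retry instead of blocking */
                ret |= VM_FAULT_RETRY;
                goto out_release;
        }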
3000  If unlikely((!PageSwapCache(page) || page_private(page) != entry.val)) && swapcache Then Go to out_page
3004  page = ksm_might_need_to_copy(page, vma, vmf->address)
3005  If unlikely(!page) Then
3006  ret = VM_FAULT_OOM
3007  page = swapcache
3008  Go to out_page
3011  If mem_cgroup_try_charge_delay(page, vma->vm_mm, GFP_KERNEL, &memcg, false) Then
3013  ret = VM_FAULT_OOM
3014  Go to out_page
3020  vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address, &vmf->ptl) (retake the page table lock)
3022  If unlikely(!pte_same(*vmf->pte, vmf->orig_pte)) Then Go to out_nomap (somebody else already faulted in this pte)
3025  If unlikely(!PageUptodate(page)) Then
3026  ret = VM_FAULT_SIGBUS
3027  Go to out_nomap
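
Lines 3000-3027 re-validate everything that may have changed while the task slept on swap I/O: the swap slot behind the page, a possible KSM copy, the memcg charge, and finally the PTE itself under the page table lock. A sketch of this back-out section (kernel comments paraphrased):

        /* make sure try_to_free_swap(), reuse_swap_page() or swapoff did not
         * release the swapcache from under us; the page pin and the pte_same()
         * test below are not enough to exclude that */
        if (unlikely((!PageSwapCache(page) ||
                        page_private(page) != entry.val)) && swapcache)
                goto out_page;

        /* KSM may require a private copy before this page can be mapped here */
        page = ksm_might_need_to_copy(page, vma, vmf->address);
        if (unlikely(!page)) {
                ret = VM_FAULT_OOM;
                page = swapcache;
                goto out_page;
        }

        if (mem_cgroup_try_charge_delay(page, vma->vm_mm, GFP_KERNEL, &memcg, false)) {
                ret = VM_FAULT_OOM;
                goto out_page;
        }

        /* back out if somebody else already faulted in this pte */
        vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address, &vmf->ptl);
        if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte)))
                goto out_nomap;

        if (unlikely(!PageUptodate(page))) {
                ret = VM_FAULT_SIGBUS;
                goto out_nomap;
        }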
3040  inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES)
3041  dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS)
3042  pte = mk_pte(page, vma->vm_page_prot)
3043  If (vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL) Then
3044  pte = maybe_mkwrite(pte_mkdirty(pte), vma) (mark it writable only if the VMA allows VM_WRITE)
3045  vmf->flags &= ~FAULT_FLAG_WRITE
3046  ret |= VM_FAULT_WRITE
3047  exclusive = RMAP_EXCLUSIVE
3049  flush_icache_page(vma, page)
3050  If pte_swp_soft_dirty(vmf->orig_pte) Then pte = pte_mksoft_dirty(pte)
3052  set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte)
3053  arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte) (architectures that keep metadata with a page can restore it here)
3054  vmf->orig_pte = pte
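
Lines 3040-3054 build the new PTE and install it. When the fault is a write and reuse_swap_page() confirms we are the sole user, the page is mapped writable immediately and the later copy-on-write pass is skipped; reuse_swap_page() has to run while the page is locked to keep its accounting right. A sketch of these lines as recalled from v5.5 (not quoted verbatim):

        inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
        dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
        pte = mk_pte(page, vma->vm_page_prot);
        if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
                /* sole owner of the swap page: map it dirty and (if allowed) writable now */
                pte = maybe_mkwrite(pte_mkdirty(pte), vma);
                vmf->flags &= ~FAULT_FLAG_WRITE;
                ret |= VM_FAULT_WRITE;
                exclusive = RMAP_EXCLUSIVE;
        }
        flush_icache_page(vma, page);
        if (pte_swp_soft_dirty(vmf->orig_pte))
                pte = pte_mksoft_dirty(pte);
        set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
        arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
        vmf->orig_pte = pte;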
3057  If unlikely(page != swapcache && swapcache) Then (ksm created a completely new copy)
3058  page_add_new_anon_rmap(page, vma, vmf->address, false)
3059  mem_cgroup_commit_charge(page, memcg, false, false)
3060  lru_cache_add_active_or_unevictable(page, vma) (place the page on the active or unevictable LRU list, depending on its evictability)
3061  Else
3062  do_page_add_anon_rmap(page, vma, vmf->address, exclusive)
3063  mem_cgroup_commit_charge(page, memcg, true, false)
3064  activate_page(page)
3067  swap_free(entry)
3068  If mem_cgroup_swap_full(page) || (vma->vm_flags & VM_LOCKED) || PageMlocked(page) Then try_to_free_swap(page)
3071  unlock_page(page)
3072  If page != swapcache && swapcache Then
3081  unlock_page(swapcache)
3082  put_page(swapcache)
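
Lines 3057-3082 hook the page into the reverse map and the LRU, release the swap slot, and drop the extra reference on the original swapcache page when KSM handed back a fresh copy. A sketch:

        if (unlikely(page != swapcache && swapcache)) {
                /* ksm created a completely new copy: treat it as a new anonymous page */
                page_add_new_anon_rmap(page, vma, vmf->address, false);
                mem_cgroup_commit_charge(page, memcg, false, false);
                lru_cache_add_active_or_unevictable(page, vma);
        } else {
                do_page_add_anon_rmap(page, vma, vmf->address, exclusive);
                mem_cgroup_commit_charge(page, memcg, true, false);
                activate_page(page);
        }

        swap_free(entry);
        if (mem_cgroup_swap_full(page) ||
            (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
                try_to_free_swap(page);
        unlock_page(page);
        if (page != swapcache && swapcache) {
                /* keep the swapcache page locked across swap_free() so its swap
                 * count cannot change under a parallel locked swapcache */
                unlock_page(swapcache);
                put_page(swapcache);
        }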
3085  If vmf->flags & FAULT_FLAG_WRITE Then
3086  ret |= do_wp_page(vmf) (the swap page could not be reused above, so finish as a copy-on-write fault)
3087  If ret & VM_FAULT_ERROR Then ret &= VM_FAULT_ERROR
3089  Go to out
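
If the fault was a write but the page could not be mapped writable above, do_wp_page() completes the job as an ordinary copy-on-write fault; it drops the page table lock itself, which is why this path jumps straight to out instead of unlock. A sketch of lines 3085-3089:

        if (vmf->flags & FAULT_FLAG_WRITE) {
                ret |= do_wp_page(vmf);
                if (ret & VM_FAULT_ERROR)
                        ret &= VM_FAULT_ERROR;  /* keep only the error bits on failure */
                goto out;
        }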
3093  update_mmu_cache(vma, vmf->address, vmf->pte) (no need to invalidate, the PTE was non-present before; a no-op on x86, whose kernel page tables contain all the necessary MMU information)
3094  unlock:
3095  pte_unmap_unlock(vmf->pte, vmf->ptl)
3096  out:
3097  Return ret
3098  out_nomap:
3099  mem_cgroup_cancel_charge(page, memcg, false)
3100  pte_unmap_unlock(vmf->pte, vmf->ptl)
3101  out_page:
3102  unlock_page(page)
3103  out_release:
3104  put_page(page)
3105  If page != swapcache && swapcache Then
3106  unlock_page(swapcache)
3107  put_page(swapcache)
3109  Return ret
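
The error labels unwind in reverse order of what has been set up, and they fall through into one another: out_nomap drops the memcg charge and the page table lock, out_page unlocks the page, out_release drops the page reference, and a separate KSM swapcache page is unlocked and released last. A sketch of the exit paths at lines 3094-3109:

        unlock:
                pte_unmap_unlock(vmf->pte, vmf->ptl);
        out:
                return ret;
        out_nomap:
                mem_cgroup_cancel_charge(page, memcg, false);
                pte_unmap_unlock(vmf->pte, vmf->ptl);
        out_page:
                unlock_page(page);
        out_release:
                put_page(page);
                if (page != swapcache && swapcache) {
                        unlock_page(swapcache);
                        put_page(swapcache);
                }
                return ret;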
Caller
Name                           Description
handle_pte_fault               These routines also need to handle stuff like marking pages dirty and/or accessed for architectures that don't do it in hardware (most RISC architectures).
__collapse_huge_page_swapin    Bring missing pages in from swap, to complete THP collapse. Only done if khugepaged_scan_pmd believes it is worthwhile. Called and returns without pte mapped or spinlocks held, but with mmap_sem held to protect against vma changes.
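
For context, a minimal sketch of how handle_pte_fault() reaches this function in v5.5, reconstructed from mm/memory.c rather than quoted: a present PTE never gets here, and a non-present, non-none PTE is treated as a swap entry.

        /* inside handle_pte_fault(), after vmf->orig_pte has been sampled */
        if (!vmf->pte) {
                if (vma_is_anonymous(vmf->vma))
                        return do_anonymous_page(vmf);
                else
                        return do_fault(vmf);
        }

        if (!pte_present(vmf->orig_pte))
                return do_swap_page(vmf);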