Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/hugetlb.c  Create Date: 2022-07-28 15:29:03
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name:hugetlb_fault

Proto:vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma, unsigned long address, unsigned int flags)

Type:vm_fault_t

Parameter:

Type                      Parameter Name
struct mm_struct *        mm
struct vm_area_struct *   vma
unsigned long             address
unsigned int              flags
4006  struct page * page = NULL
4007  struct page * pagecache_page = NULL
4008  h = hstate_vma(vma)
4010  need_wait_lock = 0
4011  haddr = address & huge_page_mask(h)
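Line 4011 rounds the faulting address down to its huge-page boundary. A minimal user-space sketch of that masking, assuming a 2 MiB huge page (HPAGE_SIZE and huge_page_base() are illustrative stand-ins, not the kernel's huge_page_size()/huge_page_mask()):

```c
#include <stdint.h>

/* Stand-in for huge_page_size(h): assume a 2 MiB huge page. */
#define HPAGE_SIZE (2UL * 1024 * 1024)
/* Stand-in for huge_page_mask(h): clears the offset bits within the page. */
#define HPAGE_MASK (~(HPAGE_SIZE - 1))

/* Round an address down to the base of its huge page, as hugetlb_fault
 * does with `address & huge_page_mask(h)`. */
static uintptr_t huge_page_base(uintptr_t address)
{
    return address & HPAGE_MASK;
}
```

Every address inside the same 2 MiB region maps to the same base, so the rest of the fault path can work with one canonical address per huge page.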
4013  ptep = huge_pte_offset(mm, haddr, huge_page_size(h))
4014  If ptep Then
4015  entry = huge_ptep_get(ptep)
4016  If unlikely(is_hugetlb_entry_migration(entry)) Then migration_entry_wait_huge(vma, mm, ptep) - wait for the page migration to complete
4018  Return 0
4019  Else if unlikely(is_hugetlb_entry_hwpoisoned(entry)) Then Return VM_FAULT_HWPOISON_LARGE | VM_FAULT_SET_HINDEX(hstate_index(h))
4022  Else
4023  ptep = huge_pte_alloc(mm, haddr, huge_page_size(h))
4024  If Not ptep Then Return VM_FAULT_OOM
4028  mapping = vma->vm_file->f_mapping
4029  idx = vma_hugecache_offset(h, vma, haddr) - the offset of haddr within the mapping, in huge-page units
4036  hash = hugetlb_fault_mutex_hash(mapping, idx) - on uniprocessor systems this always returns 0 to avoid the hashing overhead
4037  mutex_lock( & hugetlb_fault_mutex_table[hash])
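All faults on the same (mapping, index) pair must serialize on one mutex picked from hugetlb_fault_mutex_table. A hedged user-space sketch of that scheme, using pthreads and a splitmix64-style mixer as stand-ins for the kernel's jhash2() and num_fault_mutexes:

```c
#include <stdint.h>
#include <pthread.h>

/* Stand-in for num_fault_mutexes; must be a power of two for the mask. */
#define NUM_FAULT_MUTEXES 64

/* One mutex per hash bucket, like hugetlb_fault_mutex_table. */
static pthread_mutex_t fault_mutex_table[NUM_FAULT_MUTEXES];

static void fault_mutex_table_init(void)
{
    for (int i = 0; i < NUM_FAULT_MUTEXES; i++)
        pthread_mutex_init(&fault_mutex_table[i], NULL);
}

/* splitmix64-style mixer: an illustrative stand-in for jhash2(). */
static uint64_t mix64(uint64_t x)
{
    x ^= x >> 33; x *= 0xff51afd7ed558ccdULL;
    x ^= x >> 33; x *= 0xc4ceb9fe1a85ec53ULL;
    x ^= x >> 33;
    return x;
}

/* Map a (mapping, index) pair to a bucket, like hugetlb_fault_mutex_hash().
 * Equal pairs always hash to the same mutex, serializing their faults. */
static unsigned fault_mutex_hash(const void *mapping, uint64_t idx)
{
    uint64_t h = mix64((uint64_t)(uintptr_t)mapping ^ mix64(idx));
    return (unsigned)(h & (NUM_FAULT_MUTEXES - 1));
}
```

A fault path would then take `fault_mutex_table[fault_mutex_hash(mapping, idx)]`, mirroring `mutex_lock(&hugetlb_fault_mutex_table[hash])` at line 4037; unrelated faults usually land in different buckets and proceed in parallel.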
4039  entry = huge_ptep_get(ptep)
4040  If huge_pte_none(entry) Then
4041  ret = hugetlb_no_page(mm, vma, mapping, idx, address, ptep, flags)
4042  Go to out_mutex
4045  ret = 0
4054  If Not pte_present(entry) Then Go to out_mutex
4065  If flags & FAULT_FLAG_WRITE && Not huge_pte_write(entry) Then
4066  If vma_needs_reservation(h, vma, haddr) < 0 Then
4067  ret = VM_FAULT_OOM
4068  Go to out_mutex
4071  vma_end_reservation(h, vma, haddr)
4073  If Not (vma->vm_flags & VM_MAYSHARE) Then pagecache_page = hugetlbfs_pagecache_page(h, vma, haddr) - the pagecache page at haddr, if any
4078  ptl = huge_pte_lock(h, mm, ptep)
4081  If unlikely(Not pte_same(entry, huge_ptep_get(ptep))) Then Go to out_ptl - the entry changed while we were unlocked, so retry
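Line 4081 is the classic check/recheck pattern: the entry was sampled before the page-table lock was taken, so after huge_pte_lock() it is re-read and compared, and the fault bails out if another thread changed it in the meantime. A minimal user-space sketch of the pattern (a plain int and a pthread mutex stand in for the PTE and huge_pte_lock()):

```c
#include <pthread.h>

static pthread_mutex_t ptl = PTHREAD_MUTEX_INITIALIZER; /* stand-in page-table lock */
static int pte = 42;                                    /* stand-in page-table entry */

/* Re-validate a previously sampled value under the lock, as the
 * `!pte_same(entry, huge_ptep_get(ptep))` test does at line 4081.
 * Returns 1 if the snapshot was still current (update done under the lock),
 * 0 if a concurrent change raced us and the caller must bail out. */
static int update_if_unchanged(int snapshot, int new_value)
{
    int ok;
    pthread_mutex_lock(&ptl);
    ok = (pte == snapshot);      /* the pte_same() recheck */
    if (ok)
        pte = new_value;         /* safe: nobody changed it since we looked */
    pthread_mutex_unlock(&ptl);
    return ok;
}
```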
4089  page = pte_page(entry)
4090  If page != pagecache_page Then If Not trylock_page(page) Then
4092  need_wait_lock = 1
4093  Go to out_ptl
4096  get_page(page)
4098  If flags & FAULT_FLAG_WRITE Then
4099  If Not huge_pte_write(entry) Then
4100  ret = hugetlb_cow(mm, vma, address, ptep, pagecache_page, ptl)
4102  Go to out_put_page
4104  entry = huge_pte_mkdirty(entry)
4106  entry = pte_mkyoung(entry)
4107  If huge_ptep_set_access_flags(vma, haddr, ptep, entry, flags & FAULT_FLAG_WRITE) Then update_mmu_cache(vma, haddr, ptep)
4110  out_put_page :
4111  If page != pagecache_page Then unlock_page(page)
4113  put_page(page) - drop the reference, freeing the page (and any associated swap cache) if this was the last user
4114  out_ptl :
4115  spin_unlock(ptl)
4117  If pagecache_page Then
4118  unlock_page(pagecache_page)
4119  put_page(pagecache_page)
4121  out_mutex :
4122  mutex_unlock( & hugetlb_fault_mutex_table[hash])
4130  If need_wait_lock Then wait_on_page_locked(page) - defer the retried fault until the page is unlocked, avoiding a busy loop
4132  Return ret
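The need_wait_lock dance above (lines 4090-4093 and 4130) avoids sleeping on a page lock while the page-table lock is held: the code only trylocks, and on failure drops every lock before waiting. A hedged pthreads sketch of that ordering (fault_step() and both mutexes are illustrative, not kernel API):

```c
#include <pthread.h>

static pthread_mutex_t page_lock = PTHREAD_MUTEX_INITIALIZER; /* stand-in page lock */
static pthread_mutex_t pt_lock   = PTHREAD_MUTEX_INITIALIZER; /* stand-in huge_pte_lock */

/* Returns 1 if the fault had to defer (the need_wait_lock path), 0 otherwise. */
static int fault_step(void)
{
    int need_wait_lock = 0;

    pthread_mutex_lock(&pt_lock);
    if (pthread_mutex_trylock(&page_lock) != 0) {
        need_wait_lock = 1;              /* never sleep with pt_lock held */
    } else {
        /* ... fault work with both locks held ... */
        pthread_mutex_unlock(&page_lock);
    }
    pthread_mutex_unlock(&pt_lock);      /* all locks dropped first */

    if (need_wait_lock) {
        /* analogue of wait_on_page_locked(): block only now, holding nothing */
        pthread_mutex_lock(&page_lock);
        pthread_mutex_unlock(&page_lock);
    }
    return need_wait_lock;
}
```

Blocking only after both locks are released is what keeps the retried fault from deadlocking against whoever holds the page lock.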
Caller
Name                 Description
follow_hugetlb_page
handle_mm_fault      By the time we get here, we already hold the mm semaphore. The mmap_sem may have been released depending on flags and our return value. See filemap_fault() and __lock_page_or_retry().