Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/filemap.c Create Date: 2022-07-28 14:02:44
Last Modify: 2020-03-12 14:18:49 Copyright © Brick

Name: filemap_map_pages

Proto: void filemap_map_pages(struct vm_fault *vmf, unsigned long start_pgoff, unsigned long end_pgoff)

Type: void

Parameter:

Type                 Name
struct vm_fault *    vmf
unsigned long        start_pgoff
unsigned long        end_pgoff
2605  file = vmf->vma->vm_file (the file we map to; can be NULL)
2606  mapping = file->f_mapping
2607  last_pgoff = start_pgoff
2609  XA_STATE(xas, &mapping->i_pages, start_pgoff) — declare and initialise an XArray operation state on the stack, positioned at index start_pgoff of the mapping's i_pages XArray
2612  rcu_read_lock() — mark the beginning of an RCU read-side critical section; the XArray walk below runs locklessly under RCU
2613  xas_for_each(&xas, page, end_pgoff) — iterate over each present page-cache entry from start_pgoff up to end_pgoff
2614  If xas_retry(&xas, page) Then Continue — the advanced XArray functions may return an internal entry such as a retry entry; xas_retry() sets up the xas to restart the walk
2616  If xa_is_value(page) Then Go to next — the entry is a value (e.g. a shadow entry), not a page pointer, so it cannot be mapped
2623  If PageLocked(page) Then Go to next — check for a locked page first, as a speculative reference may adversely influence page migration
2625  If Not page_cache_get_speculative(page) Then Go to next — could not take a speculative reference on the page
2629  If unlikely(page != xas_reload(&xas)) Then Go to skip — the page moved or was split after we took the reference
2631  page = find_subpage(page, xas.xa_index)
2633  If Not PageUptodate(page) || PageReadahead(page) || PageHWPoison(page) Then Go to skip
2637  If Not trylock_page(page) Then Go to skip — trylock_page() returns true only if the page lock was acquired without sleeping
2640  If page->mapping != mapping || Not PageUptodate(page) Then Go to unlock — the page was truncated or is no longer up to date
2643  max_idx = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE) — the number of pages the file covers; i_size_read() reads i_size atomically even on a 32-bit preemptible kernel
2644  If page->index (our offset within the mapping) >= max_idx Then Go to unlock — the page lies beyond end of file
2647  If file->f_ra.mmap_miss (cache-miss stat for mmap accesses) > 0 Then file->f_ra.mmap_miss --
2650  vmf->address (the faulting virtual address) += (xas.xa_index - last_pgoff) << PAGE_SHIFT, where PAGE_SHIFT determines the page size
2651  If vmf->pte (pointer to the pte entry matching the address; NULL if the page table has not been allocated) Then vmf->pte += xas.xa_index - last_pgoff
2653  last_pgoff = xas.xa_index
2654  If alloc_set_pte(vmf, NULL, page) Then Go to unlock — installing the pte failed
2656  unlock_page(page) — unlock the page and wake up sleepers waiting on its lock or writeback
2657  Go to next
unlock :
2659  unlock_page(page)
skip :
2661  put_page(page)
next :
2664  If pmd_trans_huge(*vmf->pmd) Then Break — a huge page is already mapped; no need to proceed
2667  rcu_read_unlock() — mark the end of the RCU read-side critical section