Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/migrate.c  Create Date: 2022-07-28 15:58:28
Last Modify: 2022-05-20 09:53:13  Copyright © Brick

Name:Replace the page in the mapping. The number of remaining references must be: 1 for anonymous pages without a mapping; 2 for pages with a mapping; 3 for pages with a mapping and PagePrivate/PagePrivate2 set.

Proto:int migrate_page_move_mapping(struct address_space *mapping, struct page *newpage, struct page *page, int extra_count)

Type:int

Parameter:

Type                      Parameter Name
struct address_space *    mapping
struct page *             newpage
struct page *             page
int                       extra_count
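Typical usage: a ->migratepage callback calls migrate_page_move_mapping() first to replace the pagecache entry, gives up unless it returns MIGRATEPAGE_SUCCESS (the usual failure is -EAGAIN, meaning someone still holds an unexpected reference), and only then copies page contents and state over to newpage. extra_count is 0 for ordinary callers; a caller that holds an extra pin of its own on the old page passes that pin count so the reference check still balances. The sketch below is a simplified, hypothetical callback (example_migrate_page is not a real kernel function), written against the v5.5 API:

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/migrate.h>

/* Hypothetical ->migratepage callback illustrating the calling pattern. */
static int example_migrate_page(struct address_space *mapping,
				struct page *newpage, struct page *page,
				enum migrate_mode mode)
{
	int rc;

	/* Replace the pagecache entry; this caller holds no extra pins. */
	rc = migrate_page_move_mapping(mapping, newpage, page, 0);
	if (rc != MIGRATEPAGE_SUCCESS)
		return rc;		/* typically -EAGAIN: caller retries */

	/* Only after the mapping has moved, copy contents and flags. */
	migrate_page_copy(newpage, page);
	return MIGRATEPAGE_SUCCESS;
}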
400  XA_STATE(xas, &mapping->i_pages, page_index(page)): declare and initialise an XArray operation state on the stack. page_index() returns the pagecache index of the page (regular pagecache pages use ->index whereas swapcache pages use swp_offset(->private)).
403  expected_count = expected_page_refs(mapping, page) + extra_count
405  If Not mapping Then
407  If page_count(page) != expected_count Then Return -EAGAIN
411  newpage->index = page->index (our offset within the mapping)
412  newpage->mapping = page->mapping (see page-flags.h for PAGE_MAPPING_FLAGS)
413  If PageSwapBacked(page) Then __SetPageSwapBacked(newpage)
416  Return MIGRATEPAGE_SUCCESS (address_space_operations.migratepage() returns zero on success, a negative errno on failure)
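expected_page_refs(), used at line 403, is a static helper in mm/migrate.c. The sketch below reflects its v5.5 form (one reference for the caller, plus one per subpage and one for PagePrivate when there is a mapping, plus one for ZONE_DEVICE private pages); details may vary across releases:

static int expected_page_refs(struct address_space *mapping, struct page *page)
{
	int expected_count = 1;

	/* Device private (ZONE_DEVICE) pages carry one extra reference. */
	expected_count += is_device_private_page(page);
	if (mapping)
		expected_count += hpage_nr_pages(page) + page_has_private(page);

	return expected_count;
}

For an anonymous page with no mapping, the fast path above (lines 405-416) therefore only checks the reference count, carries index, mapping and PG_swapbacked over to newpage, and returns MIGRATEPAGE_SUCCESS without touching any XArray.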
419  oldzone = page_zone(page)
420  newzone = page_zone(newpage)
422  xas_lock_irq( & xas)
423  If page_count(page) != expected_count || xas_load(&xas) != page Then (xas_load() loads the entry currently stored at the XArray index, walking the xa_state there; it does nothing and returns NULL if the xa_state is in an error state)
424  xas_unlock_irq( & xas)
425  Return -EAGAIN
428  If Not page_ref_freeze(page, expected_count) Then
429  xas_unlock_irq( & xas)
430  Return -EAGAIN
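page_ref_freeze() is what makes the update race-free: it atomically replaces the page's reference count with 0, but only if the count still equals expected_count, so any reference taken concurrently since the check at line 423 makes the migration back off with -EAGAIN. A minimal sketch of the helper from include/linux/page_ref.h (tracepoint hook omitted):

static inline int page_ref_freeze(struct page *page, int count)
{
	/*
	 * Succeeds (returns 1) only if nobody else holds an unexpected
	 * reference: the refcount becomes 0, so no new references can be
	 * taken while the mapping is rewritten.
	 */
	return likely(atomic_cmpxchg(&page->_refcount, count, 0) == count);
}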
437  newpage->index = page->index (our offset within the mapping)
438  newpage->mapping = page->mapping (see page-flags.h for PAGE_MAPPING_FLAGS)
439  page_ref_add(newpage, hpage_nr_pages(page))
440  If PageSwapBacked(page) Then
441  __SetPageSwapBacked(newpage)
442  If PageSwapCache(page) Then
446  Else
447  VM_BUG_ON_PAGE(PageSwapCache(page), page)
451  dirty = PageDirty(page)
452  If dirty Then
453  ClearPageDirty(page)
454  SetPageDirty(newpage)
457  xas_store(&xas, newpage): store the new page in this XArray slot
458  If PageTransHuge(page) Then (PageTransHuge() returns true for both transparent huge and hugetlbfs pages, but not for normal pages; PageHuge() only returns true for hugetlbfs pages)
461  When i < HPAGE_PMD_NR cycle (repeat xas_next() and xas_store() so every tail index of the huge page also points at newpage)
472  page_ref_unfreeze(page, expected_count - hpage_nr_pages(page))
474  xas_unlock( & xas)
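Several source lines in this stretch (443-446 and 459-470) are elided by the report. Pulled together, the whole section executed under xas_lock_irq() looks roughly like the condensed v5.5 sketch below (not verbatim source): re-check the page, freeze its refcount, transfer index/mapping/swapcache/dirty state to newpage, store newpage at the head index and, for a transparent huge page, at every tail index, then unfreeze the old page minus the pagecache references it just lost:

	xas_lock_irq(&xas);
	if (page_count(page) != expected_count || xas_load(&xas) != page) {
		xas_unlock_irq(&xas);
		return -EAGAIN;
	}
	if (!page_ref_freeze(page, expected_count)) {
		xas_unlock_irq(&xas);
		return -EAGAIN;
	}

	/* No one else can look up the page any more: transfer its state. */
	newpage->index = page->index;
	newpage->mapping = page->mapping;
	page_ref_add(newpage, hpage_nr_pages(page));	/* pagecache references */
	if (PageSwapBacked(page)) {
		__SetPageSwapBacked(newpage);
		if (PageSwapCache(page)) {
			SetPageSwapCache(newpage);
			set_page_private(newpage, page_private(page));
		}
	} else {
		VM_BUG_ON_PAGE(PageSwapCache(page), page);
	}

	/* Move dirty while the refcount is frozen so accounting stays sane. */
	dirty = PageDirty(page);
	if (dirty) {
		ClearPageDirty(page);
		SetPageDirty(newpage);
	}

	xas_store(&xas, newpage);
	if (PageTransHuge(page)) {
		int i;

		for (i = 1; i < HPAGE_PMD_NR; i++) {
			xas_next(&xas);
			xas_store(&xas, newpage);
		}
	}

	/* Old page loses its pagecache references; unfreeze to what remains. */
	page_ref_unfreeze(page, expected_count - hpage_nr_pages(page));

	/* Unlock, but leave interrupts disabled for the stats update below. */
	xas_unlock(&xas);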
487  If newzone != oldzone Then
488  __dec_node_state(oldzone->zone_pgdat, NR_FILE_PAGES)
489  __inc_node_state(newzone->zone_pgdat, NR_FILE_PAGES)
490  If PageSwapBacked(page) && Not PageSwapCache(page) Then
501  local_irq_enable() (the local_irq_*() APIs are equal to the raw_local_irq_*() ones if !TRACE_IRQFLAGS)
503  Return MIGRATEPAGE_SUCCESS (address_space_operations.migratepage() returns zero on success, a negative errno on failure)
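The tail of the function (lines 487-503, only partially shown above) rebalances per-node and per-zone vmstat counters when the page changed zone, then re-enables interrupts. A sketch of that block, assuming the v5.5 counter names:

	if (newzone != oldzone) {
		__dec_node_state(oldzone->zone_pgdat, NR_FILE_PAGES);
		__inc_node_state(newzone->zone_pgdat, NR_FILE_PAGES);
		if (PageSwapBacked(page) && !PageSwapCache(page)) {
			__dec_node_state(oldzone->zone_pgdat, NR_SHMEM);
			__inc_node_state(newzone->zone_pgdat, NR_SHMEM);
		}
		if (dirty && mapping_cap_account_dirty(mapping)) {
			__dec_node_state(oldzone->zone_pgdat, NR_FILE_DIRTY);
			__dec_zone_state(oldzone, NR_ZONE_WRITE_PENDING);
			__inc_node_state(newzone->zone_pgdat, NR_FILE_DIRTY);
			__inc_zone_state(newzone, NR_ZONE_WRITE_PENDING);
		}
	}
	local_irq_enable();

	return MIGRATEPAGE_SUCCESS;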
Caller
Name                      Description
migrate_page              Common logic to directly migrate a single LRU page, suitable for pages that do not use PagePrivate/PagePrivate2. Pages are locked upon entry and exit.
__buffer_migrate_page
iomap_migrate_page
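For reference, the first caller in the table, migrate_page(), is the common helper for filesystems whose pages carry no private data. Condensed from the v5.5 source, it shows how the return value of migrate_page_move_mapping() is consumed (pages are locked on entry and exit, and writeback must already be complete):

int migrate_page(struct address_space *mapping,
		struct page *newpage, struct page *page,
		enum migrate_mode mode)
{
	int rc;

	BUG_ON(PageWriteback(page));	/* Writeback must be complete */

	rc = migrate_page_move_mapping(mapping, newpage, page, 0);
	if (rc != MIGRATEPAGE_SUCCESS)
		return rc;

	/* Mapping moved successfully: now copy contents, or just state. */
	if (mode != MIGRATE_SYNC_NO_COPY)
		migrate_page_copy(newpage, page);
	else
		migrate_page_states(newpage, page);
	return MIGRATEPAGE_SUCCESS;
}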