Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/swapfile.c    Create Date: 2022-07-28 15:18:36
Last Modify: 2020-03-17 22:19:49    Copyright © Brick

Name: try_to_unuse — If the boolean frontswap is true, only unuse pages_to_unuse pages; pages_to_unuse == 0 means all pages; ignored if frontswap is false.

Proto:int try_to_unuse(unsigned int type, bool frontswap, unsigned long pages_to_unuse)

Type:int

Parameter:

Type             Parameter Name
unsigned int     type
bool             frontswap
unsigned long    pages_to_unuse
2129  retval = 0
2130  si = swap_info[type]
2135  If the number of swap entries currently in use (si->inuse_pages) is zero Then Return 0
2138  If Not frontswap Then pages_to_unuse = 0
2141  retry :
2142  retval = shmem_unuse(type, frontswap, & pages_to_unuse)
2143  If retval Then Go to out
2146  prev_mm = &init_mm
2147  mmget(prev_mm): pin the address space so the mm_struct does not go away while we use it
2149  spin_lock( & mmlist_lock)
2150  p = &init_mm.mmlist (the list of maybe-swapped mm's, strung together off init_mm.mmlist and protected by mmlist_lock)
2151  When the number of swap entries in use is nonzero && Not signal_pending(current process) && (p = p->next) != &init_mm.mmlist cycle
2155  mm = list_entry(p, struct mm_struct, mmlist): get the mm_struct this list node is embedded in
2156  If Not mmget_not_zero(mm) Then Continue
2158  spin_unlock( & mmlist_lock)
2159  mmput(prev_mm): decrement the use count and release all resources of the previous mm
2160  prev_mm = mm
2161  retval = unuse_mm(mm, type, frontswap, & pages_to_unuse)
2163  If retval Then
2165  Go to out
2172  cond_resched()
2173  spin_lock( & mmlist_lock)
2175  spin_unlock( & mmlist_lock)
2177  mmput(prev_mm): drop the reference still held on the last mm visited
2179  i = 0
2180  When the number of swap entries in use is nonzero && Not signal_pending(current process) && (i = find_next_to_unuse(si, i, frontswap): scan swap_map (or frontswap_map if frontswap is true) from the current position to the next entry still in use, returning 0 if there are no in-use entries from there to the end of the map) != 0 cycle
2184  entry = swp_entry(type, i): store a type+offset pair into a swp_entry_t in an arch-independent format
2185  page = find_get_page(swap_address_space(entry), i): look up the page-cache slot at that mapping and offset; if a page cache page is there, it is returned with an increased refcount
2186  If Not page Then Continue
2195  lock_page(page) (may only be called if we have the page's inode pinned)
2196  wait_on_page_writeback(page): wait for the page to complete writeback
2197  try_to_free_swap(page): if swap is getting full, or there are no more mappings of this page, free its swap space
2198  unlock_page(page): unlock the page and wake up any sleepers waiting on it
2199  put_page(page): drop our reference, also freeing any swap cache associated with the page if we were its last user
2206  If pages_to_unuse && --pages_to_unuse == 0 Then Go to out
2222  If swap entries are still in use Then
2223  If Not signal_pending(current process) Then Go to retry
2225  retval = -EINTR
2227  out :
2228  Return If retval == FRONTSWAP_PAGES_UNUSED (the return code denoting that the requested number of frontswap pages were unused, i.e. moved back to the page cache; used in shmem_unuse and try_to_unuse) Then 0 Else retval
Caller
Name              Describe
SYSCALL_DEFINE1
frontswap_shrink  Frontswap, like a true swap device, may unnecessarily retain pages under certain circumstances; "shrink" frontswap is essentially a "partial swapoff" and works by calling try_to_unuse to attempt to unuse enough frontswap pages to attempt to -- subject