Function report

Linux Kernel

v5.5.9

Source Code: kernel/events/uprobes.c

Name:__replace_page - replace page in vma by new page

Proto:static int __replace_page(struct vm_area_struct *vma, unsigned long addr, struct page *old_page, struct page *new_page)

Type:int

Parameter:

Type                       Name
struct vm_area_struct *    vma
unsigned long              addr
struct page *              old_page
struct page *              new_page
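Before the line-by-line annotation below, the overall shape of the function is easier to see condensed. The sketch that follows is a simplified paraphrase of the v5.5 source, not a drop-in implementation: memcg charging, counter updates and error paths are omitted. The PTE swap is bracketed by an MMU-notifier invalidation range so that secondary MMUs (KVM, for example) drop their cached translation for the page, and page_vma_mapped_walk() locates and locks the PTE that maps old_page.

    /* Condensed shape of __replace_page(); error handling omitted (sketch) */
    struct mm_struct *mm = vma->vm_mm;
    struct page_vma_mapped_walk pvmw = {
            .page    = compound_head(old_page),
            .vma     = vma,
            .address = addr,
    };
    struct mmu_notifier_range range;

    mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm,
                            addr, addr + PAGE_SIZE);
    lock_page(old_page);               /* for try_to_free_swap() below */
    mmu_notifier_invalidate_range_start(&range);

    if (page_vma_mapped_walk(&pvmw)) { /* finds and locks old_page's PTE */
            /* ... swap old_page's PTE for new_page's, fix up rmap ... */
            page_vma_mapped_walk_done(&pvmw);
    }

    mmu_notifier_invalidate_range_end(&range);
    unlock_page(old_page);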
157  mm = vma->vm_mm (the address space we belong to)
158  struct page_vma_mapped_walk pvmw = {.page = compound_head(old_page), .vma = vma, .address = addr}
167  mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm, addr, addr + PAGE_SIZE)
170  If new_page Then
171  err = mem_cgroup_try_charge(new_page, vma->vm_mm, GFP_KERNEL, &memcg, false)
173  If err Then Return err
178  lock_page(old_page) (held for the try_to_free_swap() and munlock_vma_page() calls below)
180  mmu_notifier_invalidate_range_start(&range)
181  err = -EAGAIN
182  If Not page_vma_mapped_walk(&pvmw) (returns true if pvmw.page is mapped in pvmw.vma at pvmw.address) Then
183  If new_page Then mem_cgroup_cancel_charge(new_page, memcg, false)
185  Go to unlock
187  VM_BUG_ON_PAGE(addr != pvmw.address, old_page)
189  If new_page Then
190  get_page(new_page)
191  page_add_new_anon_rmap(new_page, vma, addr, false)
192  mem_cgroup_commit_charge(new_page, memcg, false, false)
193  lru_cache_add_active_or_unevictable(new_page, vma)
194  Else dec_mm_counter(mm, MM_ANONPAGES)
198  If Not PageAnon(old_page) Then
199  dec_mm_counter(mm, mm_counter_file(old_page)) (the optimized variant, since the page is already known not to be PageAnon)
200  inc_mm_counter(mm, MM_ANONPAGES)
203  flush_cache_page(vma, addr, pte_pfn(*pvmw.pte))
204  ptep_clear_flush_notify(vma, addr, pvmw.pte)
205  If new_page Then set_pte_at_notify(mm, addr, pvmw.pte, mk_pte(new_page, vma->vm_page_prot)) (set_pte_at_notify() sets the pte after running the notifier; starting with the secondary MMUs is safe because the primary-MMU invalidate already happened in ptep_clear_flush_notify() above)
209  page_remove_rmap(old_page, false)
210  If Not page_mapped(old_page) Then try_to_free_swap(old_page)
212  page_vma_mapped_walk_done(&pvmw)
214  If vma->vm_flags & VM_LOCKED Then munlock_vma_page(old_page)
216  put_page(old_page)
218  err = 0
219  unlock:
220  mmu_notifier_invalidate_range_end(&range)
221  unlock_page(old_page)
222  Return err
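The heart of the function is the three-step PTE swap at lines 203-205, and the ordering is what makes it safe: the old translation is flushed from caches and TLBs (and secondary MMUs are notified) before the new PTE is written, so no CPU can observe a mix of the old and new mappings. Restated from the v5.5 source with comments:

    flush_cache_page(vma, addr, pte_pfn(*pvmw.pte)); /* virtual-cache flush */
    ptep_clear_flush_notify(vma, addr, pvmw.pte);    /* clear old PTE, flush
                                                        TLB, notify MMUs */
    if (new_page)                                    /* install replacement;
                                                        with no new_page the
                                                        PTE stays cleared and
                                                        the file page is
                                                        faulted back later */
            set_pte_at_notify(mm, addr, pvmw.pte,
                              mk_pte(new_page, vma->vm_page_prot));

A fault racing with the swap simply retries against whichever PTE is current, since the PTE lock taken by page_vma_mapped_walk() is held across all three steps.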
Caller
Name                   Description
uprobe_write_opcode    Expects the breakpoint instruction to be the smallest-size instruction for the architecture.
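For context, a condensed sketch of how the caller drives this function (paraphrased from uprobe_write_opcode() in the same file; the get_user_pages_remote() lookup, the verify/retry logic and most error handling are left out, and the allocation flags are simplified). A fresh anonymous page is filled with a copy of the original, the breakpoint opcode is patched in with the file-local copy_to_page() helper, and __replace_page() swaps it into the page tables:

    struct page *new_page;
    int ret;

    ret = anon_vma_prepare(vma);          /* make sure the vma has an anon_vma */
    if (ret)
            return ret;

    new_page = alloc_page_vma(GFP_HIGHUSER, vma, vaddr); /* flags simplified */
    if (!new_page)
            return -ENOMEM;

    __SetPageUptodate(new_page);
    copy_highpage(new_page, old_page);    /* clone the old page's contents */
    copy_to_page(new_page, vaddr, &opcode, UPROBE_SWBP_INSN_SIZE); /* patch */

    ret = __replace_page(vma, vaddr & PAGE_MASK, old_page, new_page);
    put_page(new_page);                   /* __replace_page() took its own
                                             reference on success */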