Page migration
--------------

Page migration allows the moving of the physical location of pages between
nodes in a numa system while the process is running. This means that the
virtual addresses that the process sees do not change. However, the
system rearranges the physical location of those pages.

The main intent of page migration is to reduce the latency of memory access
by moving pages near to the processor where the process accessing that memory
is running.

Page migration allows a process to manually relocate the node on which its
pages are located through the MPOL_MF_MOVE and MPOL_MF_MOVE_ALL options while
setting a new memory policy via mbind(). The pages of a process can also be
relocated from another process using the sys_migrate_pages() function call.
The migrate_pages() function call takes two sets of nodes and moves the pages
of a process that are located on the from nodes to the destination nodes.
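
For illustration, here is a minimal userspace sketch of the mbind() path.
It assumes libnuma's <numaif.h> wrapper and a system that has a node 1;
the region size and node number are arbitrary choices for this example:

#define _GNU_SOURCE
#include <numaif.h>
#include <sys/mman.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	size_t len = 4 * 1024 * 1024;
	unsigned long nodemask = 1UL << 1;	/* only node 1 set */
	char *buf;

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	memset(buf, 0, len);	/* fault the pages in, possibly on other nodes */

	/* Rebind the region to node 1 and migrate the pages already there. */
	if (mbind(buf, len, MPOL_BIND, &nodemask,
		  sizeof(nodemask) * 8, MPOL_MF_MOVE) != 0)
		perror("mbind");

	munmap(buf, len);
	return 0;
}

Link with -lnuma.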

Page migration functions are provided by the numactl package by Andi Kleen
(a version later than 0.9.3 is required; get it from
ftp://ftp.suse.com/pub/people/ak). numactl provides libnuma, which offers an
interface similar to the other numa functionality for page migration.
cat /proc/<pid>/numa_maps allows an easy review of where the pages of
a process are located. See also the numa_maps manpage in the numactl package.
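
As a hedged illustration of the library interface, the following sketch uses
numa_migrate_pages() as found in later libnuma releases (the 0.9.x interface
mentioned above may differ) to move all pages of a given process from node 0
to node 1:

#include <numa.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	struct bitmask *from, *to;
	int pid;

	if (argc != 2 || numa_available() < 0) {
		fprintf(stderr, "usage: %s <pid> (on a NUMA system)\n", argv[0]);
		return 1;
	}
	pid = atoi(argv[1]);

	from = numa_allocate_nodemask();
	to = numa_allocate_nodemask();
	numa_bitmask_setbit(from, 0);	/* move pages away from node 0 ... */
	numa_bitmask_setbit(to, 1);	/* ... to node 1 */

	if (numa_migrate_pages(pid, from, to) < 0)
		perror("numa_migrate_pages");

	numa_free_nodemask(from);
	numa_free_nodemask(to);
	return 0;
}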

Manual migration is useful if for example the scheduler has relocated
a process to a processor on a distant node. A batch scheduler or an
administrator may detect the situation and move the pages of the process
nearer to the new processor. At some point in the future we may have
some mechanism in the scheduler that will automatically move the pages.

Larger installations usually partition the system using cpusets into
sections of nodes. Paul Jackson has equipped cpusets with the ability to
move pages when a task is moved to another cpuset (see ../cpusets.txt).

Cpusets allow the automation of process locality. If a task is moved to
a new cpuset then all of its pages are moved with it so that the
performance of the process does not sink dramatically. Pages of processes
in a cpuset are also moved if the allowed memory nodes of the cpuset are
changed.

All migration techniques preserve the relative location of pages within a
group of nodes: the particular memory allocation pattern that a process has
generated is kept even after the process has been migrated. This is
necessary in order to preserve the memory latencies. Processes will run
with similar performance after migration.

Page migration occurs in several steps. First a high level description
for those trying to use migrate_pages() from the kernel (for userspace
usage see Andi Kleen's numactl package mentioned above) and then a low
level description of how the details work.

A. In kernel use of migrate_pages()
-----------------------------------

1. Remove pages from the LRU.

   Lists of pages to be migrated are generated by scanning over
   pages and moving them into lists. This is done by
   calling isolate_lru_page().
   Calling isolate_lru_page() increases the references to the page
   so that it cannot vanish while the page migration occurs.
   It also prevents the swapper or other scans from encountering
   the page.

2. Generate a list of newly allocated pages. These pages will contain the
   contents of the pages from the first list after page migration is
   complete.

3. The migrate_pages() function is called, which attempts
   to do the migration. It returns the moved pages in the
   list specified as the third parameter and the failed
   migrations in the fourth parameter. The first parameter
   will contain the pages that could still be retried.

4. The leftover pages of various types are returned
   to the LRU using putback_to_lru_pages() or otherwise
   disposed of. The pages will still have the refcount as
   increased by isolate_lru_page() if putback_to_lru_pages() is not
   used! The kernel may want to handle the various cases of failures
   in different ways. (A sketch of this calling sequence follows the
   list.)
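
The following is a minimal sketch of that calling sequence. The helper name
migrate_my_pages() is hypothetical, and the exact prototypes of
isolate_lru_page(), migrate_pages() and putback_to_lru_pages() are assumed
from the description above and may differ between kernel versions; treat
this as pseudocode rather than a drop-in function.

#include <linux/mm.h>
#include <linux/list.h>

static int migrate_my_pages(struct page **pages, int nr,
			    struct list_head *newlist)
{
	LIST_HEAD(from);	/* pages taken off the LRU (step 1) */
	LIST_HEAD(moved);	/* successfully migrated pages (step 3) */
	LIST_HEAD(failed);	/* pages whose migration failed (step 3) */
	int i, ret;

	/*
	 * Step 1: isolate the candidate pages from the LRU. This takes
	 * an extra reference on each page (prototype assumed).
	 */
	for (i = 0; i < nr; i++)
		if (isolate_lru_page(pages[i]))
			list_add_tail(&pages[i]->lru, &from);

	/*
	 * Step 2: 'newlist' is assumed to already hold enough newly
	 * allocated target pages.
	 */

	/* Step 3: attempt the migration (four-list prototype assumed). */
	ret = migrate_pages(&from, newlist, &moved, &failed);

	/*
	 * Step 4: put everything that was not migrated back on the LRU,
	 * dropping the extra reference taken in step 1.
	 */
	putback_to_lru_pages(&from);
	putback_to_lru_pages(&failed);

	return ret;
}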

B. How migrate_pages() works
----------------------------

migrate_pages() does several passes over its list of pages. A page is moved
if all references to the page are removable at that time. The page has
already been removed from the LRU via isolate_lru_page() and the refcount
has been increased so that the page cannot be freed while page migration
occurs.

Steps:

1. Lock the page to be migrated.

2. Ensure that writeback is complete.

3. Make sure that the page has an assigned swap cache entry if
   it is an anonymous page. The swap cache reference is necessary
   to preserve the information contained in the page table maps while
   page migration occurs.

4. Prep the new page that we want to move to. It is locked
   and set to not being uptodate so that all accesses to the new
   page immediately block while the move is in progress.

5. All the page table references to the page are either dropped (file
   backed pages) or converted to swap references (anonymous pages).
   This should decrease the reference count.

6. The radix tree lock is taken. This will cause all processes trying
   to reestablish a pte to block on the radix tree spinlock.
   (Steps 6-13 are sketched in code after this list.)

7. The refcount of the page is examined and we back out if references
   remain; otherwise we know that we are the only one referencing this
   page.

8. The radix tree is checked and if it does not contain the pointer to this
   page then we back out because someone else modified the mapping first.

9. The mapping is checked. If the mapping is gone then a truncate action may
   be in progress and we back out.

10. The new page is prepped with some settings from the old page so that
    accesses to the new page will be discovered to have the correct settings.

11. The radix tree is changed to point to the new page.

12. The reference count of the old page is dropped because the radix tree
    reference is gone.

13. The radix tree lock is dropped. With that, lookups become possible again
    and other processes will move from spinning on the tree lock to sleeping
    on the locked new page.

14. The page contents are copied to the new page.

15. The remaining page flags are copied to the new page.

16. The old page flags are cleared to indicate that the page does
    not carry any information anymore.

17. Queued up writeback on the new page is triggered.

18. If swap ptes were generated for the page then replace them with real
    ptes. This will re-enable access for processes not blocked by the page
    lock.

19. The page locks are dropped from the old and new page.
    Processes waiting on the page lock can continue.

20. The new page is moved to the LRU and can be scanned by the swapper
    etc. again.
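
To make steps 6-13 more concrete, here is an illustrative sketch of the
radix tree slot replacement. The helper name replace_radix_tree_slot() is
hypothetical; the sketch assumes the address_space tree_lock rwlock and
page_tree radix tree of kernels of this era and that the caller has computed
the expected reference count. It is a simplification, not the actual
migrate_pages() code.

#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/radix-tree.h>

static int replace_radix_tree_slot(struct address_space *mapping,
				   struct page *page, struct page *newpage,
				   int expected_refs)
{
	struct page **slot;

	/* Step 6: block everyone trying to reestablish a pte. */
	write_lock_irq(&mapping->tree_lock);

	slot = (struct page **)radix_tree_lookup_slot(&mapping->page_tree,
						      page_index(page));

	/*
	 * Steps 7-9: back out if the page is still referenced elsewhere,
	 * the radix tree no longer points at it, or the mapping is gone.
	 */
	if (!page_mapping(page) || page_count(page) != expected_refs ||
	    !slot || *slot != page) {
		write_unlock_irq(&mapping->tree_lock);
		return -EAGAIN;
	}

	/*
	 * Steps 10-12: make the tree point at the new page and move the
	 * radix tree reference from the old page to the new one. The old
	 * page cannot be freed here since the caller still holds the
	 * reference taken by isolate_lru_page().
	 */
	get_page(newpage);
	newpage->index = page->index;
	newpage->mapping = page->mapping;
	*slot = newpage;
	put_page(page);

	/* Step 13: drop the lock; lookups now find the (locked) new page. */
	write_unlock_irq(&mapping->tree_lock);
	return 0;
}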

TODO list
---------

- Page migration requires the use of swap handles to preserve the
  information of the anonymous page table entries. This means that swap
  space is reserved but never used. The maximum number of swap handles used
  is determined by CHUNK_SIZE (see mm/mempolicy.c) per ongoing migration.
  Reservation of swap space could be avoided by having a special type of
  swap handle that does not require swap space and that would only track
  the page references. Something like that was proposed by Marcelo Tosatti
  in the past (search for migration cache on lkml or linux-mm@kvack.org).

- Page migration unmaps ptes for file backed pages and requires page
  faults to reestablish these ptes. This could be optimized by somehow
  recording the references before migration and then reestablishing them
  later. However, there are several locking challenges that have to be
  overcome before this is possible.

- Page migration generates read ptes for anonymous pages. Dirty page
  faults are required to make the pages writable again. It may be possible
  to generate a pte marked dirty if it is known that the page is dirty and
  that this process has the only reference to that page.

Christoph Lameter, March 8, 2006.