
Documentation for /proc/sys/vm/*	kernel version 2.2.10
(c) 1998, 1999, Rik van Riel <riel@nl.linux.org>

For general info and legal blurb, please look in README.

==============================================================

This file contains the documentation for the sysctl files in
/proc/sys/vm and is valid for Linux kernel version 2.2.

The files in this directory can be used to tune the operation
of the virtual memory (VM) subsystem of the Linux kernel and
the writeout of dirty data to disk.

Default values and initialization routines for most of these
files can be found in mm/swap.c.

Currently, these files are in /proc/sys/vm:
- overcommit_memory
- overcommit_ratio
- page-cluster
- dirty_ratio
- dirty_background_ratio
- dirty_expire_centisecs
- dirty_writeback_centisecs
- max_map_count
- min_free_kbytes
- percpu_pagelist_fraction
- laptop_mode
- block_dump
- drop-caches
- zone_reclaim_mode
- zone_reclaim_interval
==============================================================

dirty_ratio, dirty_background_ratio, dirty_expire_centisecs,
dirty_writeback_centisecs, vfs_cache_pressure, laptop_mode,
block_dump, swap_token_timeout, drop-caches:

See Documentation/filesystems/proc.txt

==============================================================
overcommit_memory:

This value contains a flag that enables memory overcommitment.

When this flag is 0, the kernel attempts to estimate the amount
of free memory left when userspace requests more memory.

When this flag is 1, the kernel pretends there is always enough
memory until it actually runs out.

When this flag is 2, the kernel uses a "never overcommit"
policy that attempts to prevent any overcommit of memory.

This feature can be very useful because there are a lot of
programs that malloc() huge amounts of memory "just-in-case"
and don't use much of it.

The default value is 0.

See Documentation/vm/overcommit-accounting and
security/commoncap.c::cap_vm_enough_memory() for more information.
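As a sketch, the policy can be inspected and changed from a root
shell; the sysctl(8) utility shown last is a common convenience,
not something this interface requires:

```shell
# Read the current overcommit policy (0, 1, or 2).
cat /proc/sys/vm/overcommit_memory

# Switch to the "never overcommit" policy (requires root).
echo 2 > /proc/sys/vm/overcommit_memory

# Equivalent form using the sysctl(8) utility, if installed.
sysctl -w vm.overcommit_memory=2
```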
==============================================================

overcommit_ratio:

When overcommit_memory is set to 2, the committed address
space is not permitted to exceed swap plus this percentage
of physical RAM. See above.
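The mode-2 limit works out to swap plus overcommit_ratio percent of
RAM; a small shell sketch using hypothetical figures (2 GiB swap,
4 GiB RAM, ratio 50):

```shell
# Hypothetical machine: 2 GiB swap, 4 GiB RAM, overcommit_ratio=50.
swap_kb=$((2 * 1024 * 1024))
ram_kb=$((4 * 1024 * 1024))
ratio=50

# Committed address space may not exceed swap + ratio% of RAM.
limit_kb=$((swap_kb + ram_kb * ratio / 100))
echo "$limit_kb"    # 4194304 kB of committable address space
```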
==============================================================

page-cluster:

The Linux VM subsystem avoids excessive disk seeks by reading
multiple pages on a page fault. The number of pages it reads
is dependent on the amount of memory in your machine.

The number of pages the kernel reads in at once is equal to
2 ^ page-cluster. Values above 2 ^ 5 don't make much sense
for swap because we only cluster swap data in 32-page groups.
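The relationship is a simple power of two; for instance, with a
hypothetical page-cluster value of 3:

```shell
# Pages read in at once = 2 ^ page-cluster.
cluster=3                 # hypothetical setting
pages=$((1 << cluster))
echo "$pages"             # 8 pages per read
```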
==============================================================

max_map_count:

This file contains the maximum number of memory map areas a process
may have. Memory map areas are used as a side-effect of calling
malloc, directly by mmap and mprotect, and also when loading shared
libraries.

While most applications need less than a thousand maps, certain
programs, particularly malloc debuggers, may consume lots of them,
e.g., up to one or two maps per allocation.

The default value is 65536.
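A rough way to compare a process's current map count against the
limit from a shell (the raised value below is only an example):

```shell
# Map areas used by this shell, one per line in /proc/self/maps.
wc -l < /proc/self/maps

# Current system-wide per-process limit.
cat /proc/sys/vm/max_map_count

# Raise the limit (requires root), e.g. for a malloc debugger.
echo 131072 > /proc/sys/vm/max_map_count
```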
==============================================================

min_free_kbytes:

This is used to force the Linux VM to keep a minimum number
of kilobytes free. The VM uses this number to compute a pages_min
value for each lowmem zone in the system. Each lowmem zone gets
a number of reserved free pages proportional to its size.
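For example, to inspect and raise the floor (the 8192 kB figure is
arbitrary):

```shell
# Current minimum amount of memory kept free, in kilobytes.
cat /proc/sys/vm/min_free_kbytes

# Reserve roughly 8 MiB across the lowmem zones (requires root).
echo 8192 > /proc/sys/vm/min_free_kbytes
```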
==============================================================

percpu_pagelist_fraction:

This is the fraction of pages, at most (high mark pcp->high), in
each zone that are allocated for each per-cpu page list. The minimum
value for this is 8, which means that we don't allow more than 1/8th
of the pages in each zone to be allocated in any single per-cpu
pagelist. This entry only changes the value of hot per-cpu pagelists.
The user can specify a number like 100 to allocate 1/100th of each
zone to each per-cpu page list.

The batch value of each per-cpu pagelist is also updated as a result.
It is set to pcp->high/4. The upper limit of batch is (PAGE_SHIFT * 8).

The initial value is zero. The kernel does not use this value at boot
time to set the high water marks for each per-cpu page list.
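A small sketch of the arithmetic above, using a hypothetical zone of
262144 pages and a fraction of 100:

```shell
# Hypothetical zone of 262144 pages, percpu_pagelist_fraction=100.
zone_pages=262144
fraction=100

high=$((zone_pages / fraction))   # pcp->high: at most 1/100th of the zone
batch=$((high / 4))               # batch is set to pcp->high/4
echo "high=$high batch=$batch"    # high=2621 batch=655

# Apply the fraction system-wide (requires root).
echo 100 > /proc/sys/vm/percpu_pagelist_fraction
```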
===============================================================

zone_reclaim_mode:

zone_reclaim_mode allows one to set more or less aggressive
approaches to reclaiming memory when a zone runs out of memory. If
it is set to zero then no zone reclaim occurs. Allocations will be
satisfied from other zones / nodes in the system.

This is a value ORed together of

1	= Zone reclaim on
2	= Zone reclaim writes dirty pages out
4	= Zone reclaim swaps pages
8	= Also do a global slab reclaim pass

zone_reclaim_mode is set during bootup to 1 if it is determined that
pages from remote zones will cause a measurable performance
reduction. The page allocator will then reclaim easily reusable
pages (those page cache pages that are currently not used) before
allocating off-node pages.

It may be beneficial to switch off zone reclaim if the system is
used as a file server and all of memory should be used for caching
files from disk. In that case the caching effect is more important
than data locality.

Allowing zone reclaim to write out pages stops processes that are
writing large amounts of data from dirtying pages on other nodes.
Zone reclaim will write out dirty pages if a zone fills up, and so
effectively throttles the process. This may decrease the performance
of a single process, since it can no longer use all of system memory
to buffer the outgoing writes, but it preserves the memory on other
nodes so that the performance of other processes running on other
nodes will not be affected.

Allowing regular swap effectively restricts allocations to the local
node unless explicitly overridden by memory policies or cpuset
configurations.

It may be advisable to allow slab reclaim if the system makes heavy
use of files and builds up large slab caches. However, the slab
shrink operation is global, may take a long time, and frees slabs
in all nodes of the system.
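The bit values OR together; for instance, zone reclaim with
dirty-page writeout is 1 | 2 = 3:

```shell
# Combine bit 1 (zone reclaim on) with bit 2 (write dirty pages out).
mode=$((1 | 2))
echo "$mode"      # 3

# Apply the mode (requires root).
echo "$mode" > /proc/sys/vm/zone_reclaim_mode
```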
================================================================

zone_reclaim_interval:

The time allowed for off-node allocations after zone reclaim
has failed to reclaim enough pages to allow a local allocation.

Time is set in seconds and is set by default to 30 seconds.

Reduce the interval if undesired off-node allocations occur.
However, too frequent scans will have a negative impact on off-node
allocation performance.
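For example, to shorten the window from the 30-second default to 10
seconds (an arbitrary figure):

```shell
# Current off-node allocation window, in seconds (default 30).
cat /proc/sys/vm/zone_reclaim_interval

# Allow zone reclaim to be retried sooner (requires root).
echo 10 > /proc/sys/vm/zone_reclaim_interval
```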