
Introduction
============

dm-cache is a device mapper target written by Joe Thornber, Heinz
Mauelshagen, and Mike Snitzer.

It aims to improve performance of a block device (eg, a spindle) by
dynamically migrating some of its data to a faster, smaller device
(eg, an SSD).

This device-mapper solution allows us to insert this caching at
different levels of the dm stack, for instance above the data device for
a thin-provisioning pool.  Caching solutions that are integrated more
closely with the virtual memory system should give better performance.

The target reuses the metadata library used in the thin-provisioning
library.

The decision as to what data to migrate and when is left to a plug-in
policy module.  Several of these have been written as we experiment,
and we hope other people will contribute others for specific io
scenarios (eg. a vm image server).

Glossary
========

  Migration -  Movement of the primary copy of a logical block from one
               device to the other.
  Promotion -  Migration from slow device to fast device.
  Demotion  -  Migration from fast device to slow device.

The origin device always contains a copy of the logical block, which
may be out of date or kept in sync with the copy on the cache device
(depending on policy).

Design
======

Sub-devices
-----------

The target is constructed by passing three devices to it (along with
other parameters detailed later):

1. An origin device - the big, slow one.

2. A cache device - the small, fast one.

3. A small metadata device - records which blocks are in the cache,
   which are dirty, and extra hints for use by the policy object.
   This information could be put on the cache device, but having it
   separate allows the volume manager to configure it differently,
   e.g. as a mirror for extra robustness.  This metadata device may only
   be used by a single cache device.
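
As a sketch, the three sub-devices could be carved out of two physical
disks with linear targets before constructing the cache.  The device
names and sector counts below are only assumptions for illustration:

   # /dev/fast is the SSD, /dev/slow is the spindle (assumed names).
   # 8192 sectors (4MB) of the SSD are set aside for metadata, the rest
   # for cached data blocks; adjust the sizes to your devices.
   dmsetup create metadata --table '0 8192 linear /dev/fast 0'
   dmsetup create ssd      --table '0 10477568 linear /dev/fast 8192'
   dmsetup create origin   --table '0 41943040 linear /dev/slow 0'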

Fixed block size
----------------

The origin is divided up into blocks of a fixed size.  This block size
is configurable when you first create the cache.  Typically we've been
using block sizes of 256KB - 1024KB.  The block size must be between 64
(32KB) and 2097152 (1GB) and a multiple of 64 (32KB).

Having a fixed block size simplifies the target a lot.  But it is
something of a compromise.  For instance, a small part of a block may be
getting hit a lot, yet the whole block will be promoted to the cache.
So large block sizes are bad because they waste cache space.  And small
block sizes are bad because they increase the amount of metadata (both
in core and on disk).
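
The block size passed to the constructor is given in 512-byte sectors,
so a quick conversion helps when picking a value, e.g.:

   # 256KB and 1MB cache blocks expressed in 512-byte sectors:
   echo $((256 * 1024 / 512))    # 512 sectors  = 256KB
   echo $((1024 * 1024 / 512))   # 2048 sectors = 1MB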

Cache operating modes
---------------------

The cache has three operating modes: writeback, writethrough and
passthrough.

If writeback, the default, is selected then a write to a block that is
cached will go only to the cache and the block will be marked dirty in
the metadata.

If writethrough is selected then a write to a cached block will not
complete until it has hit both the origin and cache devices.  Clean
blocks should remain clean.

If passthrough is selected, useful when the cache contents are not known
to be coherent with the origin device, then all reads are served from
the origin device (all reads miss the cache) and all writes are
forwarded to the origin device; additionally, write hits cause cache
block invalidates.  Passthrough mode allows a cache device to be
activated without having to worry about coherency.  Coherency that
exists is maintained, although the cache will gradually cool as writes
take place.  If the coherency of the cache can later be verified, or
established, the cache device can be transitioned to writethrough or
writeback mode while still warm.  Otherwise, the cache contents can be
discarded prior to transitioning to the desired operating mode.

A simple cleaner policy is provided, which will clean (write back) all
dirty blocks in a cache.  Useful for decommissioning a cache.
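
A minimal sketch of such a transition, assuming the kernel accepts
'passthrough' as a feature argument and using the device names from the
examples below; the operating mode is changed by reloading the table:

   # Activate with an unverified cache in passthrough mode.
   dmsetup create my_cache --table '0 41943040 cache /dev/mapper/metadata /dev/mapper/ssd /dev/mapper/origin 512 1 passthrough default 0'

   # Once the cache contents are known to be coherent, switch to writeback.
   dmsetup reload my_cache --table '0 41943040 cache /dev/mapper/metadata /dev/mapper/ssd /dev/mapper/origin 512 1 writeback default 0'
   dmsetup suspend my_cache
   dmsetup resume my_cache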

Migration throttling
--------------------

Migrating data between the origin and cache device uses bandwidth.
The user can set a throttle to prevent more than a certain amount of
migration occurring at any one time.  Currently we're not taking any
account of normal io traffic going to the devices.  More work needs
doing here to avoid migrating during those peak io moments.

For the time being, a message "migration_threshold <#sectors>"
can be used to set the maximum number of sectors being migrated,
the default being 204800 sectors (or 100MB).
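
For example, to halve the default throttle on a cache device named
'my_cache' (the name is an assumption carried over from the examples
below):

   dmsetup message my_cache 0 migration_threshold 102400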

Updating on-disk metadata
-------------------------

On-disk metadata is committed every time a REQ_SYNC or REQ_FUA bio is
written.  If no such requests are made then commits will occur every
second.  This means the cache behaves like a physical disk that has a
write cache (the same is true of the thin-provisioning target).  If
power is lost you may lose some recent writes.  The metadata should
always be consistent in spite of any crash.

The 'dirty' state for a cache block changes far too frequently for us
to keep updating it on the fly.  So we treat it as a hint.  In normal
operation it will be written when the dm device is suspended.  If the
system crashes all cache blocks will be assumed dirty when restarted.
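
Since the dirty hints are only written out on suspend, they can be
forced to disk by suspending and resuming the device, e.g.:

   # Flush outstanding io and write out the per-block dirty/hint metadata.
   dmsetup suspend my_cache
   dmsetup resume my_cache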

Per-block policy hints
----------------------

Policy plug-ins can store a chunk of data per cache block.  It's up to
the policy how big this chunk is, but it should be kept small.  Like the
dirty flags this data is lost if there's a crash so a safe fallback
value should always be possible.

For instance, the 'mq' policy, which is currently the default policy,
uses this facility to store the hit count of the cache blocks.  If
there's a crash this information will be lost, which means the cache
may be less efficient until those hit counts are regenerated.

Policy hints affect performance, not correctness.

Policy messaging
----------------

Policies will have different tunables, specific to each one, so we
need a generic way of getting and setting these.  Device-mapper
messages are used.  Refer to cache-policies.txt.

Discard bitset resolution
-------------------------

We can avoid copying data during migration if we know the block has
been discarded.  A prime example of this is when mkfs discards the
whole block device.  We store a bitset tracking the discard state of
blocks.  However, we allow this bitset to have a different block size
from the cache blocks.  This is because we need to track the discard
state for all of the origin device (compare with the dirty bitset
which is just for the smaller cache device).
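
A minimal sketch of taking advantage of this, assuming the stacked
devices pass discards through and using the device name from the
examples below:

   # mkfs.ext4 typically discards the whole device before laying out the
   # filesystem, so those blocks can later be migrated without copying.
   mkfs.ext4 /dev/mapper/my_cache

   # Alternatively, the device can be discarded explicitly before use.
   blkdiscard /dev/mapper/my_cache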

Target interface
================

Constructor
-----------

 cache <metadata dev> <cache dev> <origin dev> <block size>
       <#feature args> [<feature arg>]*
       <policy> <#policy args> [policy args]*

 metadata dev    : fast device holding the persistent metadata
 cache dev       : fast device holding cached data blocks
 origin dev      : slow device holding original data blocks
 block size      : cache unit size in sectors

 #feature args   : number of feature arguments passed
 feature args    : writethrough.  (The default is writeback.)

 policy          : the replacement policy to use
 #policy args    : an even number of arguments corresponding to
                   key/value pairs passed to the policy
 policy args     : key/value pairs passed to the policy
                   E.g. 'sequential_threshold 1024'
                   See cache-policies.txt for details.

Optional feature arguments are:
   writethrough  : write through caching that prohibits cache block
                   content from being different from origin block content.
                   Without this argument, the default behaviour is to write
                   back cache block contents later for performance reasons,
                   so they may differ from the corresponding origin blocks.

A policy called 'default' is always registered.  This is an alias for
the policy we currently think is giving best all round performance.

As the default policy could vary between kernels, if you are relying on
the characteristics of a specific policy, always request it by name.

Status
------

<#used metadata blocks>/<#total metadata blocks> <#read hits> <#read misses>
<#write hits> <#write misses> <#demotions> <#promotions> <#blocks in cache>
<#dirty> <#features> <features>* <#core args> <core args>* <#policy args>
<policy args>*

#used metadata blocks    : Number of metadata blocks used
#total metadata blocks   : Total number of metadata blocks
#read hits               : Number of times a READ bio has been mapped
                           to the cache
#read misses             : Number of times a READ bio has been mapped
                           to the origin
#write hits              : Number of times a WRITE bio has been mapped
                           to the cache
#write misses            : Number of times a WRITE bio has been
                           mapped to the origin
#demotions               : Number of times a block has been removed
                           from the cache
#promotions              : Number of times a block has been moved to
                           the cache
#blocks in cache         : Number of blocks resident in the cache
#dirty                   : Number of blocks in the cache that differ
                           from the origin
#feature args            : Number of feature args to follow
feature args             : 'writethrough' (optional)
#core args               : Number of core arguments (must be even)
core args                : Key/value pairs for tuning the core
                           e.g. migration_threshold
#policy args             : Number of policy arguments to follow (must be even)
policy args              : Key/value pairs
                           e.g. 'sequential_threshold 1024'
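
As a sketch, assuming the field layout above preceded by the usual
'start length target' prefix that dmsetup prints, the number of dirty
blocks can be pulled out like this:

   # Fields 1-3 are the start sector, length and target name; the
   # twelfth field is then the #dirty counter.
   dmsetup status my_cache | awk '{ print $12 }'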

Messages
--------

Policies will have different tunables, specific to each one, so we
need a generic way of getting and setting these.  Device-mapper
messages are used.  (A sysfs interface would also be possible.)

The message format is:

   <key> <value>

E.g.
   dmsetup message my_cache 0 sequential_threshold 1024

Invalidation is removing an entry from the cache without writing it
back.  Cache blocks can be invalidated via the invalidate_cblocks
message, which takes an arbitrary number of cblock ranges.

   invalidate_cblocks [<cblock>|<cblock begin>-<cblock end>]*

E.g.
   dmsetup message my_cache 0 invalidate_cblocks 2345 3456-4567 5678-6789

Examples
========

The test suite can be found here:

 https://github.com/jthornber/device-mapper-test-suite

dmsetup create my_cache --table '0 41943040 cache /dev/mapper/metadata \
	/dev/mapper/ssd /dev/mapper/origin 512 1 writeback default 0'
dmsetup create my_cache --table '0 41943040 cache /dev/mapper/metadata \
	/dev/mapper/ssd /dev/mapper/origin 1024 1 writeback \
	mq 4 sequential_threshold 1024 random_threshold 8'
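
As a further sketch, decommissioning a cache with the cleaner policy
mentioned above might look like the following; the device names, sizes
and the status-polling loop are assumptions:

   # Switch to the cleaner policy so all dirty blocks get written back.
   dmsetup reload my_cache --table '0 41943040 cache /dev/mapper/metadata /dev/mapper/ssd /dev/mapper/origin 512 0 cleaner 0'
   dmsetup suspend my_cache
   dmsetup resume my_cache

   # Wait until the #dirty field of the status line drops to zero.
   while [ "$(dmsetup status my_cache | awk '{ print $12 }')" -ne 0 ]; do
           sleep 1
   done

   # Replace the cache with a plain linear mapping onto the origin.
   dmsetup reload my_cache --table '0 41943040 linear /dev/mapper/origin 0'
   dmsetup suspend my_cache
   dmsetup resume my_cache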