
/* -*- auto-fill -*- */

Overview of the Virtual File System

Richard Gooch <rgooch@atnf.csiro.au>

5-JUL-1999


Conventions used in this document <section>
=================================

Each section in this document will have the string "<section>" at the
right-hand side of the section title. Each subsection will have
"<subsection>" at the right-hand side. These strings are meant to make
it easier to search through the document.

NOTE that the master copy of this document is available online at:
http://www.atnf.csiro.au/~rgooch/linux/docs/vfs.txt
What is it? <section>
===========

The Virtual File System (otherwise known as the Virtual Filesystem
Switch) is the software layer in the kernel that provides the
filesystem interface to userspace programs. It also provides an
abstraction within the kernel which allows different filesystem
implementations to coexist.
A Quick Look At How It Works <section>
============================

In this section I'll briefly describe how things work, before
launching into the details. I'll start by describing what happens
when user programs open and manipulate files, and then look from the
other view, which is how a filesystem is supported and subsequently
mounted.
Opening a File <subsection>
--------------

The VFS implements the open(2), stat(2), chmod(2) and similar system
calls. The pathname argument is used by the VFS to search through the
directory entry cache (dentry cache or "dcache"). This provides a very
fast look-up mechanism to translate a pathname (filename) into a
specific dentry.

An individual dentry usually has a pointer to an inode. Inodes are the
things that live on disc drives, and can be regular files (you know:
those things that you write data into), directories, FIFOs and other
beasts. Dentries live in RAM and are never saved to disc: they exist
only for performance. Inodes live on disc and are copied into memory
when required. Later any changes are written back to disc. The inode
that lives in RAM is a VFS inode, and it is this which the dentry
points to. A single inode can be pointed to by multiple dentries
(think about hard links).
The dcache is meant to be a view into your entire filespace. Unlike
Linus, most of us losers can't fit enough dentries into RAM to cover
all of our filespace, so the dcache has bits missing. In order to
resolve your pathname into a dentry, the VFS may have to resort to
creating dentries along the way, and then loading the inode. This is
done by looking up the inode.

To look up an inode (usually read from disc) requires that the VFS
call the lookup() method of the parent directory inode. This method
is installed by the specific filesystem implementation that the inode
lives in. There will be more on this later.
Once the VFS has the required dentry (and hence the inode), we can do
all those boring things like open(2) the file, or stat(2) it to peek
at the inode data. The stat(2) operation is fairly simple: once the
VFS has the dentry, it peeks at the inode data and passes some of it
back to userspace.

Opening a file requires another operation: allocation of a file
structure (this is the kernel-side implementation of file
descriptors). The freshly allocated file structure is initialised with
a pointer to the dentry and a set of file operation member functions.
These are taken from the inode data. The open() file method is then
called so the specific filesystem implementation can do its work. You
can see that this is another switch performed by the VFS.

The file structure is placed into the file descriptor table for the
process.
Reading, writing and closing files (and other assorted VFS operations)
is done by using the userspace file descriptor to grab the appropriate
file structure, and then calling the required file structure method
function to do whatever is required.

For as long as the file is open, it keeps the dentry "open" (in use),
which in turn means that the VFS inode is still in use.

All VFS system calls (i.e. open(2), stat(2), read(2), write(2),
chmod(2) and so on) are called from a process context. You should
assume that these calls are made without any kernel locks being
held. This means that processes may be executing the same piece of
filesystem or driver code at the same time, on different
processors. You should ensure that access to shared resources is
protected by appropriate locks.
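
To make that concrete, here is a minimal sketch (not from the original
text; the "myfs" names are hypothetical) of guarding a module-global
counter with a spinlock, in the style of a 2.2/2.4-era module:

#include <linux/spinlock.h>

static spinlock_t myfs_stats_lock = SPIN_LOCK_UNLOCKED;
static unsigned long myfs_open_count;   /* shared between processors */

static void myfs_count_open(void)
{
        /* Two processes may run this filesystem code on different
         * processors at the same time, so the shared counter must
         * only be updated under the lock.
         */
        spin_lock(&myfs_stats_lock);
        myfs_open_count++;
        spin_unlock(&myfs_stats_lock);
}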
Registering and Mounting a Filesystem <subsection>
-------------------------------------

If you want to support a new kind of filesystem in the kernel, all you
need to do is call register_filesystem(). You pass a structure
describing the filesystem implementation (struct file_system_type),
which is then added to an internal table of supported filesystems. You
can do:

% cat /proc/filesystems

to see what filesystems are currently available on your system.
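
As a minimal sketch (using the 2.1.99-era API described below; the
"myfs" names and the myfs_read_super() helper are hypothetical), a
module would register and unregister its filesystem type like this:

#include <linux/fs.h>
#include <linux/module.h>

static struct super_block *myfs_read_super(struct super_block *sb,
                                           void *data, int silent);

static struct file_system_type myfs_type = {
        "myfs",                 /* name, as shown in /proc/filesystems */
        FS_REQUIRES_DEV,        /* fs_flags: we mount a block device */
        myfs_read_super,        /* called at mount time */
        NULL                    /* next: for internal VFS use */
};

int init_module(void)
{
        return register_filesystem(&myfs_type);
}

void cleanup_module(void)
{
        unregister_filesystem(&myfs_type);
}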
When a request is made to mount a block device onto a directory in
your filespace, the VFS will call the appropriate method for the
specific filesystem. The dentry for the mount point will then be
updated to point to the root inode for the new filesystem.

It's now time to look at things in more detail.
struct file_system_type <section>
=======================

This describes the filesystem. As of kernel 2.1.99, the following
members are defined:

struct file_system_type {
        const char *name;
        int fs_flags;
        struct super_block *(*read_super) (struct super_block *, void *, int);
        struct file_system_type * next;
};

  name: the name of the filesystem type, such as "ext2", "iso9660",
        "msdos" and so on

  fs_flags: various flags (i.e. FS_REQUIRES_DEV, FS_NO_DCACHE, etc.)

  read_super: the method to call when a new instance of this
        filesystem should be mounted

  next: for internal VFS use: you should initialise this to NULL
The read_super() method has the following arguments:

  struct super_block *sb: the superblock structure. This is partially
        initialised by the VFS and the rest must be initialised by the
        read_super() method

  void *data: arbitrary mount options, usually comes as an ASCII
        string

  int silent: whether or not to be silent on error

The read_super() method must determine if the block device specified
in the superblock contains a filesystem of the type the method
supports. On success the method returns the superblock pointer, on
failure it returns NULL.
The most interesting member of the superblock structure that the
read_super() method fills in is the "s_op" field. This is a pointer to
a "struct super_operations" which describes the next level of the
filesystem implementation.
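
As a hedged sketch of that flow (the myfs names and the
myfs_check_magic() helper are hypothetical, not part of the original
text), a read_super() method might look like this:

static struct super_operations myfs_sops;       /* described below */

extern int myfs_check_magic(struct super_block *sb);    /* hypothetical */

static struct super_block *myfs_read_super(struct super_block *sb,
                                           void *data, int silent)
{
        /* Read the on-disc superblock and verify that the device
         * really contains a filesystem of our type.
         */
        if (!myfs_check_magic(sb)) {
                if (!silent)
                        printk("myfs: no valid filesystem found\n");
                return NULL;            /* failure */
        }

        /* Finish initialising the VFS superblock; a real
         * implementation would also read the root inode and set
         * sb->s_root here.
         */
        sb->s_op = &myfs_sops;

        return sb;                      /* success */
}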
struct super_operations <section>
=======================

This describes how the VFS can manipulate the superblock of your
filesystem. As of kernel 2.1.99, the following members are defined:

struct super_operations {
        void (*read_inode) (struct inode *);
        int (*write_inode) (struct inode *, int);
        void (*put_inode) (struct inode *);
        void (*drop_inode) (struct inode *);
        void (*delete_inode) (struct inode *);
        int (*notify_change) (struct dentry *, struct iattr *);
        void (*put_super) (struct super_block *);
        void (*write_super) (struct super_block *);
        int (*statfs) (struct super_block *, struct statfs *, int);
        int (*remount_fs) (struct super_block *, int *, char *);
        void (*clear_inode) (struct inode *);
};

All methods are called without any locks being held, unless otherwise
noted. This means that most methods can block safely. All methods are
only called from a process context (i.e. not from an interrupt handler
or bottom half).
  read_inode: this method is called to read a specific inode from the
        mounted filesystem. The "i_ino" member in the "struct inode"
        will be initialised by the VFS to indicate which inode to
        read. Other members are filled in by this method

  write_inode: this method is called when the VFS needs to write an
        inode to disc. The second parameter indicates whether the write
        should be synchronous or not; not all filesystems check this
        flag

  put_inode: called when the VFS inode is removed from the inode
        cache. This method is optional

  drop_inode: called when the last access to the inode is dropped,
        with the inode_lock spinlock held.
        This method should be either NULL (normal unix filesystem
        semantics) or "generic_delete_inode" (for filesystems that do
        not want to cache inodes - causing "delete_inode" to always be
        called regardless of the value of i_nlink).
        The "generic_delete_inode()" behaviour is equivalent to the
        old practice of using "force_delete" in the put_inode() case,
        but does not have the races that the "force_delete()" approach
        had

  delete_inode: called when the VFS wants to delete an inode

  notify_change: called when VFS inode attributes are changed. If this
        is NULL the VFS falls back to the write_inode() method. This
        is called with the kernel lock held

  put_super: called when the VFS wishes to free the superblock
        (i.e. unmount). This is called with the superblock lock held

  write_super: called when the VFS superblock needs to be written to
        disc. This method is optional

  statfs: called when the VFS needs to get filesystem statistics. This
        is called with the kernel lock held

  remount_fs: called when the filesystem is remounted. This is called
        with the kernel lock held

  clear_inode: called when the VFS clears the inode. Optional
The read_inode() method is responsible for filling in the "i_op"
field. This is a pointer to a "struct inode_operations" which
describes the methods that can be performed on individual inodes.
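
A minimal sketch of a read_inode() method (again with hypothetical
myfs names; myfs_get_block_data() stands in for the real on-disc read)
could look like this:

static struct inode_operations myfs_inode_ops;  /* described below */

extern void myfs_get_block_data(struct inode *inode);   /* hypothetical */

static void myfs_read_inode(struct inode *inode)
{
        /* inode->i_ino was set by the VFS: fetch that inode from
         * disc and fill in the remaining members.
         */
        myfs_get_block_data(inode);

        /* Hook up the next level of the implementation */
        inode->i_op = &myfs_inode_ops;
}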
struct inode_operations <section>
=======================

This describes how the VFS can manipulate an inode in your
filesystem. As of kernel 2.1.99, the following members are defined:

struct inode_operations {
        struct file_operations * default_file_ops;
        int (*create) (struct inode *,struct dentry *,int);
        int (*lookup) (struct inode *,struct dentry *);
        int (*link) (struct dentry *,struct inode *,struct dentry *);
        int (*unlink) (struct inode *,struct dentry *);
        int (*symlink) (struct inode *,struct dentry *,const char *);
        int (*mkdir) (struct inode *,struct dentry *,int);
        int (*rmdir) (struct inode *,struct dentry *);
        int (*mknod) (struct inode *,struct dentry *,int,dev_t);
        int (*rename) (struct inode *, struct dentry *,
                       struct inode *, struct dentry *);
        int (*readlink) (struct dentry *, char *,int);
        struct dentry * (*follow_link) (struct dentry *, struct dentry *);
        int (*readpage) (struct file *, struct page *);
        int (*writepage) (struct file *, struct page *);
        int (*bmap) (struct inode *,int);
        void (*truncate) (struct inode *);
        int (*permission) (struct inode *, int);
        int (*smap) (struct inode *,int);
        int (*updatepage) (struct file *, struct page *, const char *,
                           unsigned long, unsigned int, int);
        int (*revalidate) (struct dentry *);
};
Again, all methods are called without any locks being held, unless
otherwise noted.

  default_file_ops: this is a pointer to a "struct file_operations"
        which describes how to open and then manipulate open files

  create: called by the open(2) and creat(2) system calls. Only
        required if you want to support regular files. The dentry you
        get should not have an inode (i.e. it should be a negative
        dentry). Here you will probably call d_instantiate() with the
        dentry and the newly created inode

  lookup: called when the VFS needs to look up an inode in a parent
        directory. The name to look for is found in the dentry. This
        method must call d_add() to insert the found inode into the
        dentry. The "i_count" field in the inode structure should be
        incremented. If the named inode does not exist a NULL inode
        should be inserted into the dentry (this is called a negative
        dentry). Returning an error code from this routine must only
        be done on a real error, otherwise creating inodes with system
        calls like creat(2), mknod(2), mkdir(2) and so on will fail.
        If you wish to overload the dentry methods then you should
        initialise the "d_op" field in the dentry; this is a pointer
        to a struct "dentry_operations".
        This method is called with the directory inode semaphore held.
        A sketch of a typical lookup() method follows this list

  link: called by the link(2) system call. Only required if you want
        to support hard links. You will probably need to call
        d_instantiate() just as you would in the create() method

  unlink: called by the unlink(2) system call. Only required if you
        want to support deleting inodes

  symlink: called by the symlink(2) system call. Only required if you
        want to support symlinks. You will probably need to call
        d_instantiate() just as you would in the create() method

  mkdir: called by the mkdir(2) system call. Only required if you want
        to support creating subdirectories. You will probably need to
        call d_instantiate() just as you would in the create() method

  rmdir: called by the rmdir(2) system call. Only required if you want
        to support deleting subdirectories

  mknod: called by the mknod(2) system call to create a device (char,
        block) inode or a named pipe (FIFO) or socket. Only required
        if you want to support creating these types of inodes. You
        will probably need to call d_instantiate() just as you would
        in the create() method

  readlink: called by the readlink(2) system call. Only required if
        you want to support reading symbolic links

  follow_link: called by the VFS to follow a symbolic link to the
        inode it points to. Only required if you want to support
        symbolic links
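
Here is a hedged sketch of a lookup() method built from the rules
above (the myfs names and the myfs_find_entry() helper are
hypothetical; iget() is the standard way to obtain a VFS inode by
number, and it increments "i_count" for you):

/* Hypothetical: returns the inode number for a name, or 0 if absent */
extern unsigned long myfs_find_entry(struct inode *dir, struct qstr *name);

static int myfs_lookup(struct inode *dir, struct dentry *dentry)
{
        struct inode *inode = NULL;
        unsigned long ino;

        /* Search the directory for the name in dentry->d_name */
        ino = myfs_find_entry(dir, &dentry->d_name);
        if (ino) {
                inode = iget(dir->i_sb, ino);
                if (!inode)
                        return -EIO;    /* a real error */
        }

        /* A NULL inode makes this a negative dentry: not an error */
        d_add(dentry, inode);
        return 0;
}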
struct file_operations <section>
======================

This describes how the VFS can manipulate an open file. As of kernel
2.1.99, the following members are defined:

struct file_operations {
        loff_t (*llseek) (struct file *, loff_t, int);
        ssize_t (*read) (struct file *, char *, size_t, loff_t *);
        ssize_t (*write) (struct file *, const char *, size_t, loff_t *);
        int (*readdir) (struct file *, void *, filldir_t);
        unsigned int (*poll) (struct file *, struct poll_table_struct *);
        int (*ioctl) (struct inode *, struct file *, unsigned int, unsigned long);
        int (*mmap) (struct file *, struct vm_area_struct *);
        int (*open) (struct inode *, struct file *);
        int (*release) (struct inode *, struct file *);
        int (*fsync) (struct file *, struct dentry *);
        int (*fasync) (struct file *, int);
        int (*check_media_change) (kdev_t dev);
        int (*revalidate) (kdev_t dev);
        int (*lock) (struct file *, int, struct file_lock *);
};

Again, all methods are called without any locks being held, unless
otherwise noted.
  llseek: called when the VFS needs to move the file position index

  read: called by read(2) and related system calls

  write: called by write(2) and related system calls

  readdir: called when the VFS needs to read the directory contents

  poll: called by the VFS when a process wants to check if there is
        activity on this file and (optionally) go to sleep until there
        is activity. Called by the select(2) and poll(2) system calls

  ioctl: called by the ioctl(2) system call

  mmap: called by the mmap(2) system call

  open: called by the VFS when an inode should be opened. When the VFS
        opens a file, it creates a new "struct file" and initialises
        the "f_op" file operations member with the "default_file_ops"
        field in the inode structure. It then calls the open method
        for the newly allocated file structure. You might think that
        the open method really belongs in "struct inode_operations",
        and you may be right. I think it's done the way it is because
        it makes filesystems simpler to implement. The open() method
        is a good place to initialise the "private_data" member in the
        file structure if you want to point to a device structure (see
        the sketch after this list)

  release: called when the last reference to an open file is closed

  fsync: called by the fsync(2) system call

  fasync: called by the fcntl(2) system call when asynchronous
        (non-blocking) mode is enabled for a file
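
A minimal sketch of an open() method that uses "private_data" (all
names are hypothetical; the private structure is an assumption made
for illustration):

#include <linux/fs.h>
#include <linux/slab.h>

struct myfs_private {
        int flags;      /* per-open-file state would live here */
};

static int myfs_open(struct inode *inode, struct file *file)
{
        struct myfs_private *p;

        p = kmalloc(sizeof(*p), GFP_KERNEL);
        if (!p)
                return -ENOMEM;
        p->flags = 0;

        /* Stash per-file state where later methods can find it */
        file->private_data = p;
        return 0;
}

static int myfs_release(struct inode *inode, struct file *file)
{
        kfree(file->private_data);      /* last reference closed */
        return 0;
}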
Note that the file operations are implemented by the specific
filesystem in which the inode resides. When opening a device node
(character or block special) most filesystems will call special
support routines in the VFS which will locate the required device
driver information. These support routines replace the filesystem file
operations with those for the device driver, and then proceed to call
the new open() method for the file. This is how opening a device file
in the filesystem eventually ends up calling the device driver open()
method. Note that devfs (the Device FileSystem) has a more direct path
from device node to device driver (this is an unofficial kernel
patch).
Directory Entry Cache (dcache) <section>
------------------------------

struct dentry_operations
========================

This describes how a filesystem can overload the standard dentry
operations. Dentries and the dcache are the domain of the VFS and the
individual filesystem implementations. Device drivers have no business
here. These methods may be set to NULL, as they are either optional or
the VFS uses a default. As of kernel 2.1.99, the following members are
defined:

struct dentry_operations {
        int (*d_revalidate)(struct dentry *);
        int (*d_hash) (struct dentry *, struct qstr *);
        int (*d_compare) (struct dentry *, struct qstr *, struct qstr *);
        void (*d_delete)(struct dentry *);
        void (*d_release)(struct dentry *);
        void (*d_iput)(struct dentry *, struct inode *);
};
  d_revalidate: called when the VFS needs to revalidate a dentry. This
        is called whenever a name look-up finds a dentry in the
        dcache. Most filesystems leave this as NULL, because all their
        dentries in the dcache are valid

  d_hash: called when the VFS adds a dentry to the hash table

  d_compare: called when a dentry should be compared with another

  d_delete: called when the last reference to a dentry is
        deleted. This means no-one is using the dentry, however it is
        still valid and in the dcache

  d_release: called when a dentry is really deallocated

  d_iput: called when a dentry loses its inode (just prior to its
        being deallocated). The default when this is NULL is that the
        VFS calls iput(). If you define this method, you must call
        iput() yourself
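
As an illustration (a hedged sketch, not from the original text), a
network filesystem that cannot always trust cached names might supply
a d_revalidate method roughly like this; myfs_still_valid() is a
hypothetical helper that asks the server:

extern int myfs_still_valid(struct dentry *dentry);     /* hypothetical */

static int myfs_d_revalidate(struct dentry *dentry)
{
        /* Return 1 if the cached dentry may still be trusted, 0 to
         * make the VFS discard it and call lookup() again.
         */
        return myfs_still_valid(dentry);
}

static struct dentry_operations myfs_dentry_ops = {
        myfs_d_revalidate,      /* d_revalidate */
        NULL,                   /* d_hash */
        NULL,                   /* d_compare */
        NULL,                   /* d_delete */
        NULL,                   /* d_release */
        NULL                    /* d_iput */
};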
Each dentry has a pointer to its parent dentry, as well as a hash list
of child dentries. Child dentries are basically like files in a
directory.

Directory Entry Cache APIs
--------------------------

There are a number of functions defined which permit a filesystem to
manipulate dentries:
  dget: open a new handle for an existing dentry (this just increments
        the usage count)

  dput: close a handle for a dentry (decrements the usage count). If
        the usage count drops to 0, the "d_delete" method is called
        and the dentry is placed on the unused list if the dentry is
        still in its parent's hash list. Putting the dentry on the
        unused list just means that if the system needs some RAM, it
        goes through the unused list of dentries and deallocates them.
        If the dentry has already been unhashed and the usage count
        drops to 0, the dentry is deallocated after the "d_delete"
        method is called

  d_drop: this unhashes a dentry from its parent's hash list. A
        subsequent call to dput() will deallocate the dentry if its
        usage count drops to 0

  d_delete: delete a dentry. If there are no other open references to
        the dentry then the dentry is turned into a negative dentry
        (the d_iput() method is called). If there are other
        references, then d_drop() is called instead

  d_add: add a dentry to its parent's hash list and then call
        d_instantiate()

  d_instantiate: add a dentry to the alias hash list for the inode and
        update the "d_inode" member. The "i_count" member in the
        inode structure should be set/incremented. If the inode
        pointer is NULL, the dentry is called a "negative
        dentry". This function is commonly called when an inode is
        created for an existing negative dentry

  d_lookup: look up a dentry given its parent and path name component.
        It looks up the child of that given name from the dcache
        hash table. If it is found, the reference count is incremented
        and the dentry is returned. The caller must use dput()
        to free the dentry when it finishes using it
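
These APIs come together in a typical create() method. The following
hedged sketch (hypothetical myfs names; myfs_new_inode() stands in for
the real on-disc allocation) turns the negative dentry passed in by
the VFS into a positive one:

/* Hypothetical: allocate and initialise a new on-disc inode */
extern struct inode *myfs_new_inode(struct inode *dir, int mode);

static int myfs_create(struct inode *dir, struct dentry *dentry,
                       int mode)
{
        struct inode *inode;

        inode = myfs_new_inode(dir, mode);
        if (!inode)
                return -ENOSPC;

        /* Bind the new inode to the negative dentry we were given;
         * the dentry is already hashed, so d_add() is not needed.
         */
        d_instantiate(dentry, inode);
        return 0;
}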
RCU-based dcache locking model
------------------------------

On many workloads, the most common operation on the dcache is
to look up a dentry, given a parent dentry and the name
of the child. Typically, for every open(), stat() etc.,
the dentry corresponding to the pathname will be looked
up by walking the tree starting with the first component
of the pathname and using that dentry along with the next
component to look up the next level, and so on. Since this
is a frequent operation for workloads like multiuser
environments and webservers, it is important to optimize
this path.

Prior to 2.5.10, dcache_lock was acquired in d_lookup and thus
in every component during path look-up. From 2.5.10 onwards,
the fastwalk algorithm changed this by holding the dcache_lock
at the beginning and walking as many cached path component
dentries as possible. This significantly decreased the number
of acquisitions of dcache_lock. However, it also increased the
lock hold time significantly and affected performance on large
SMP machines. Since the 2.5.62 kernel, the dcache has been using
a new locking model that uses RCU to make dcache look-up
lock-free.
The current dcache locking model is not very different from the
previous dcache locking model. Prior to the 2.5.62 kernel, dcache_lock
protected the hash chain, the d_child, d_alias and d_lru lists, as
well as d_inode and several other things like mount look-up. The
RCU-based changes affect only the way the hash chain is protected. For
everything else the dcache_lock must be taken for both traversing and
updating. Hash chain updates also take the dcache_lock.
The significant change is the way d_lookup traverses the hash chain:
it does not acquire the dcache_lock for this, and relies on RCU to
ensure that the dentry has not been *freed*.
Dcache locking details
----------------------

For many multi-user workloads, open() and stat() on files are
very frequently occurring operations. Both involve walking
path names to find the dentry corresponding to the
concerned file. In the 2.4 kernel, dcache_lock was held
during look-up of each path component. Contention and
cacheline bouncing of this global lock caused significant
scalability problems. With the introduction of RCU
into the Linux kernel, this was worked around by making
the look-up of path components during path walking lock-free.
Safe lock-free look-up of dcache hash table
===========================================

The dcache is a complex data structure with the hash table entries
also linked together in other lists. In the 2.4 kernel, dcache_lock
protected all the lists. We applied RCU only to hash chain
walking; the rest of the lists are still protected by dcache_lock.
Some of the important changes are:
1. Deletion from the hash chain is done using the hlist_del_rcu() macro,
   which doesn't initialize the next pointer of the deleted dentry; this
   allows us to walk the chain safely lock-free while a deletion is
   happening.

2. Insertion of a dentry into the hash table is done using
   hlist_add_head_rcu(), which takes care of ordering the writes -
   the writes to the dentry must be visible before the dentry
   is inserted. This works in conjunction with hlist_for_each_rcu()
   while walking the hash chain. The only requirement is that all
   initialization of the dentry must be done before hlist_add_head_rcu(),
   since we don't have dcache_lock protection while traversing
   the hash chain. This isn't different from the existing code.

3. A dentry looked up without holding dcache_lock cannot be
   returned for walking if it is unhashed. It may then have a NULL
   d_inode or other bogosity, since RCU doesn't protect the other
   fields in the dentry. We therefore use the flag DCACHE_UNHASHED to
   indicate unhashed dentries, and use this in conjunction with a
   per-dentry lock (d_lock). Once looked up without the dcache_lock,
   we acquire the per-dentry lock (d_lock) and check if the
   dentry is unhashed. If so, the look-up is failed. If not, the
   reference count of the dentry is increased and the dentry is
   returned.

4. Once a dentry is looked up, it must be ensured that it doesn't go
   away during the path walk for that component. In pre-2.5.10 code,
   this was done by holding a reference to the dentry. dcache_rcu does
   the same. In some sense, dcache_rcu path walking looks like
   the pre-2.5.10 version.

5. All dentry hash chain updates must take the dcache_lock as well as
   the per-dentry lock, in that order. dput() does this to ensure
   that a dentry that has just been looked up on another CPU
   doesn't get deleted before dget() can be done on it.

6. There are several ways to do reference counting of RCU-protected
   objects. One such example is the ipv4 route cache, where
   deferred freeing (using call_rcu()) is done as soon as
   the reference count goes to zero. This cannot be done in
   the case of dentries because tearing down a dentry
   requires blocking (dentry_iput()), which isn't supported in
   RCU callbacks. Instead, tearing down of dentries happens
   synchronously in dput(), but actual freeing happens later,
   when the RCU grace period is over. This allows safe lock-free
   walking of the hash chains, but a matched dentry may have
   been partially torn down. Checking the DCACHE_UNHASHED
   flag with d_lock held detects such dentries and prevents
   them from being returned from look-up. A sketch of this
   pattern follows the list.
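
To illustrate points 2, 3 and 6, here is a condensed sketch of the
lock-free hash chain walk (this is an illustration of the pattern,
not a copy of the kernel's actual __d_lookup()):

#include <linux/dcache.h>
#include <linux/rcupdate.h>
#include <linux/string.h>

static struct dentry *walk_chain(struct hlist_head *head,
                                 struct dentry *parent, struct qstr *name)
{
        struct dentry *dentry;
        struct hlist_node *node;

        rcu_read_lock();
        hlist_for_each_entry_rcu(dentry, node, head, d_hash) {
                spin_lock(&dentry->d_lock);
                /* Re-check under d_lock: an unhashed dentry may be
                 * partially torn down and must not be returned
                 * (the DCACHE_UNHASHED check from point 3 above).
                 */
                if (!(dentry->d_flags & DCACHE_UNHASHED) &&
                    dentry->d_parent == parent &&
                    dentry->d_name.len == name->len &&
                    memcmp(dentry->d_name.name, name->name,
                           name->len) == 0) {
                        atomic_inc(&dentry->d_count);   /* take a ref */
                        spin_unlock(&dentry->d_lock);
                        rcu_read_unlock();
                        return dentry;
                }
                spin_unlock(&dentry->d_lock);
        }
        rcu_read_unlock();
        return NULL;
}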
Maintaining POSIX rename semantics
==================================

Since look-up of dentries is lock-free, it can race against
a concurrent rename operation. For example, during rename
of file A to B, look-up of either A or B must succeed.
So, if look-up of B happens after A has been removed from the
hash chain but not yet added to the new hash chain, it may fail.
Also, a comparison while the name is being written concurrently
by a rename may result in false positive matches, violating
rename semantics. Issues related to races with rename are
handled as described below:
1. Look-up can be done in two ways - d_lookup(), which is safe
   from simultaneous renames, and __d_lookup(), which is not.
   If __d_lookup() fails, it must be followed up by a d_lookup()
   to correctly determine whether a dentry is in the hash table
   or not. d_lookup() protects look-ups using a sequence
   lock (rename_lock); a sketch of this retry pattern follows the
   list.

2. The name associated with a dentry (d_name) may be changed if
   a rename is allowed to happen simultaneously. To prevent the
   memcmp() in __d_lookup() from going out of bounds due to a rename,
   and to avoid false positive comparisons, the name comparison is
   done while holding the per-dentry lock. This prevents concurrent
   renames during this operation.

3. Hash table walking during look-up may move to a different bucket
   when the current dentry is moved to a different bucket due to a
   rename. But we use hlists in the dcache hash table, and they are
   null-terminated. So, even if a dentry moves to a different bucket,
   the hash chain walk will terminate. [With a list_head list, it may
   not, since termination is when the list_head in the original bucket
   is reached.] Since we redo the d_parent check and compare the name
   while holding d_lock, lock-free look-up will not race against
   d_move().

4. There can be a theoretical race when a dentry keeps coming back
   to its original bucket due to double moves. Due to this, the
   look-up may consider that it has never moved and can end up in an
   infinite loop. But this is not any worse than the theoretical
   livelocks we already have in the kernel.
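
The d_lookup()/rename_lock interaction in point 1 follows the standard
sequence lock pattern. A condensed sketch (an illustration of the
retry loop, not the kernel's actual d_lookup(); rename_lock is
internal to fs/dcache.c):

#include <linux/seqlock.h>

extern seqlock_t rename_lock;   /* private to fs/dcache.c */

static struct dentry *d_lookup_sketch(struct dentry *parent,
                                      struct qstr *name)
{
        struct dentry *dentry;
        unsigned seq;

        do {
                seq = read_seqbegin(&rename_lock);
                dentry = __d_lookup(parent, name);
                if (dentry)
                        break;  /* a hit is self-validating */
                /* A miss may be a false negative caused by a
                 * concurrent rename: retry if the sequence changed.
                 */
        } while (read_seqretry(&rename_lock, seq));
        return dentry;
}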
Important guidelines for filesystem developers related to dcache_rcu
====================================================================

1. Existing dcache interfaces (pre-2.5.62) exported to filesystems
   don't change. Only the dcache internal implementation changes.
   However, filesystems *must not* delete from the dentry hash chains
   directly using the list macros, as was allowed earlier. They must
   use dcache APIs like d_drop() or __d_drop(), depending on the
   situation.

2. d_flags is now protected by a per-dentry lock (d_lock). All
   access to d_flags must be protected by it.

3. For a hashed dentry, checking of d_count needs to be protected
   by d_lock.
Papers and other documentation on dcache locking
================================================

1. Scaling dcache with RCU (http://linuxjournal.com/article.php?sid=7124).

2. http://lse.sourceforge.net/locking/dcache/dcache.html