Overview of the Linux Virtual File System

Original author: Richard Gooch <rgooch@atnf.csiro.au>

Last updated on October 28, 2005

Copyright (C) 1999 Richard Gooch
Copyright (C) 2005 Pekka Enberg

This file is released under the GPLv2.
Introduction
============

The Virtual File System (also known as the Virtual Filesystem Switch)
is the software layer in the kernel that provides the filesystem
interface to userspace programs. It also provides an abstraction
within the kernel which allows different filesystem implementations to
coexist.

VFS system calls open(2), stat(2), read(2), write(2), chmod(2) and so
on are called from a process context. Filesystem locking is described
in the document Documentation/filesystems/Locking.
Directory Entry Cache (dcache)
------------------------------

The VFS implements the open(2), stat(2), chmod(2), and similar system
calls. The pathname argument that is passed to them is used by the VFS
to search through the directory entry cache (also known as the dentry
cache or dcache). This provides a very fast look-up mechanism to
translate a pathname (filename) into a specific dentry. Dentries live
in RAM and are never saved to disc: they exist only for performance.

The dentry cache is meant to be a view into your entire filespace. As
most computers cannot fit all dentries in RAM at the same time, some
bits of the cache are missing. In order to resolve your pathname into
a dentry, the VFS may have to resort to creating dentries along the
way, and then loading the inode. This is done by looking up the inode.
The Inode Object
----------------

An individual dentry usually has a pointer to an inode. Inodes are
filesystem objects such as regular files, directories, FIFOs and other
beasts. They live either on the disc (for block device filesystems)
or in memory (for pseudo filesystems). Inodes that live on the disc
are copied into memory when required, and changes to the inode are
written back to disc. A single inode can be pointed to by multiple
dentries (hard links, for example, do this).

Looking up an inode requires that the VFS call the lookup() method of
the parent directory inode. This method is installed by the specific
filesystem implementation that the inode lives in. Once the VFS has
the required dentry (and hence the inode), we can do all those boring
things like open(2) the file, or stat(2) it to peek at the inode
data. The stat(2) operation is fairly simple: once the VFS has the
dentry, it peeks at the inode data and passes some of it back to
userspace.
The File Object
---------------

Opening a file requires another operation: allocation of a file
structure (this is the kernel-side implementation of file
descriptors). The freshly allocated file structure is initialized with
a pointer to the dentry and a set of file operation member functions.
These are taken from the inode data. The open() file method is then
called so the specific filesystem implementation can do its work. You
can see that this is another switch performed by the VFS. The file
structure is placed into the file descriptor table for the process.

Reading, writing and closing files (and other assorted VFS operations)
is done by using the userspace file descriptor to grab the appropriate
file structure, and then calling the required file structure method to
do whatever is required. For as long as the file is open, it keeps the
dentry in use, which in turn means that the VFS inode is still in use.
Registering and Mounting a Filesystem
=====================================

To register and unregister a filesystem, use the following API
functions:

   #include <linux/fs.h>

   extern int register_filesystem(struct file_system_type *);
   extern int unregister_filesystem(struct file_system_type *);

The passed struct file_system_type describes your filesystem. When a
request is made to mount a device onto a directory in your filespace,
the VFS will call the appropriate get_sb() method for the specific
filesystem. The dentry for the mount point will then be updated to
point to the root inode for the new filesystem.

You can see all filesystems that are registered to the kernel in the
file /proc/filesystems.
struct file_system_type
-----------------------

This describes the filesystem. As of kernel 2.6.13, the following
members are defined:

struct file_system_type {
	const char *name;
	int fs_flags;
	struct super_block *(*get_sb) (struct file_system_type *, int,
				       const char *, void *);
	void (*kill_sb) (struct super_block *);
	struct module *owner;
	struct file_system_type *next;
	struct list_head fs_supers;
};

  name: the name of the filesystem type, such as "ext2", "iso9660",
	"msdos" and so on

  fs_flags: various flags (i.e. FS_REQUIRES_DEV, FS_NO_DCACHE, etc.)

  get_sb: the method to call when a new instance of this
	filesystem should be mounted

  kill_sb: the method to call when an instance of this filesystem
	should be unmounted

  owner: for internal VFS use: you should initialize this to THIS_MODULE
	in most cases.

  next: for internal VFS use: you should initialize this to NULL
The get_sb() method has the following arguments (matching the
prototype above):

  struct file_system_type *fs_type: describes the filesystem, partly
	initialized by the specific filesystem code

  int flags: mount flags

  const char *dev_name: the device name we are mounting

  void *data: arbitrary mount options, usually comes as an ASCII
	string

The get_sb() method must determine if the block device specified
in dev_name contains a filesystem of the type the method supports.
On success the method returns the superblock pointer, on failure it
returns NULL.
The most interesting member of the superblock structure that the
get_sb() method fills in is the "s_op" field. This is a pointer to
a "struct super_operations" which describes the next level of the
filesystem implementation.

Usually, a filesystem uses one of the generic get_sb()
implementations and provides a fill_super() method instead. The
generic methods are:

  get_sb_bdev: mount a filesystem residing on a block device

  get_sb_nodev: mount a filesystem that is not backed by a device

  get_sb_single: mount a filesystem which shares the instance between
	all mounts

A fill_super() method implementation has the following arguments:

  struct super_block *sb: the superblock structure. The method
	fill_super() must initialize this properly.

  void *data: arbitrary mount options, usually comes as an ASCII
	string

  int silent: whether or not to be silent on error
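Putting the last two sections together, a pseudo filesystem of this
era would typically register a file_system_type whose get_sb()
delegates to get_sb_nodev() with a fill_super() callback. The sketch
below is illustrative only: "myfs", MYFS_MAGIC and myfs_super_ops are
invented names, it uses the 2.6.13-era API (the get_sb() signature
changed in later kernels), and it needs the kernel build system, so it
cannot be compiled or run standalone.

```c
/* Illustrative kernel-side sketch (2.6.13-era API, not standalone).
 * "myfs", MYFS_MAGIC and myfs_super_ops are made up for the example. */
#include <linux/fs.h>
#include <linux/module.h>

#define MYFS_MAGIC 0x4d594653	/* made-up magic number */

static struct super_operations myfs_super_ops;	/* methods omitted */

static int myfs_fill_super(struct super_block *sb, void *data, int silent)
{
	struct inode *root;

	sb->s_magic = MYFS_MAGIC;
	sb->s_op = &myfs_super_ops;

	root = new_inode(sb);	/* in-memory root inode for this pseudo fs */
	if (!root)
		return -ENOMEM;
	root->i_mode = S_IFDIR | 0755;

	sb->s_root = d_alloc_root(root);	/* dentry for the mount point */
	if (!sb->s_root) {
		iput(root);
		return -ENOMEM;
	}
	return 0;
}

static struct super_block *myfs_get_sb(struct file_system_type *fs_type,
	int flags, const char *dev_name, void *data)
{
	/* not device-backed, so use the generic no-device helper */
	return get_sb_nodev(fs_type, flags, data, myfs_fill_super);
}

static struct file_system_type myfs_fs_type = {
	.owner	 = THIS_MODULE,
	.name	 = "myfs",
	.get_sb	 = myfs_get_sb,
	.kill_sb = kill_anon_super,
};

static int __init myfs_init(void)
{
	return register_filesystem(&myfs_fs_type);
}

static void __exit myfs_exit(void)
{
	unregister_filesystem(&myfs_fs_type);
}

module_init(myfs_init);
module_exit(myfs_exit);
MODULE_LICENSE("GPL");
```

After loading such a module, "myfs" would appear in /proc/filesystems
(tagged nodev) and could be mounted with mount -t myfs none /mnt.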
The Superblock Object
=====================

A superblock object represents a mounted filesystem.

struct super_operations
-----------------------

This describes how the VFS can manipulate the superblock of your
filesystem. As of kernel 2.6.13, the following members are defined:
struct super_operations {
	struct inode *(*alloc_inode)(struct super_block *sb);
	void (*destroy_inode)(struct inode *);
	void (*read_inode) (struct inode *);
	void (*dirty_inode) (struct inode *);
	int (*write_inode) (struct inode *, int);
	void (*put_inode) (struct inode *);
	void (*drop_inode) (struct inode *);
	void (*delete_inode) (struct inode *);
	void (*put_super) (struct super_block *);
	void (*write_super) (struct super_block *);
	int (*sync_fs)(struct super_block *sb, int wait);
	void (*write_super_lockfs) (struct super_block *);
	void (*unlockfs) (struct super_block *);
	int (*statfs) (struct super_block *, struct kstatfs *);
	int (*remount_fs) (struct super_block *, int *, char *);
	void (*clear_inode) (struct inode *);
	void (*umount_begin) (struct super_block *);
	void (*sync_inodes) (struct super_block *sb,
			     struct writeback_control *wbc);
	int (*show_options)(struct seq_file *, struct vfsmount *);
	ssize_t (*quota_read)(struct super_block *, int, char *, size_t, loff_t);
	ssize_t (*quota_write)(struct super_block *, int, const char *, size_t, loff_t);
};
All methods are called without any locks being held, unless otherwise
noted. This means that most methods can block safely. All methods are
only called from a process context (i.e. not from an interrupt handler
or bottom half).

  alloc_inode: this method is called by inode_alloc() to allocate memory
	for struct inode and initialize it.

  destroy_inode: this method is called by destroy_inode() to release
	resources allocated for struct inode.

  read_inode: this method is called to read a specific inode from the
	mounted filesystem. The i_ino member in the struct inode is
	initialized by the VFS to indicate which inode to read. Other
	members are filled in by this method.

	You can set this to NULL and use iget5_locked() instead of iget()
	to read inodes. This is necessary for filesystems for which the
	inode number is not sufficient to identify an inode.

  dirty_inode: this method is called by the VFS to mark an inode dirty.

  write_inode: this method is called when the VFS needs to write an
	inode to disc. The second parameter indicates whether the write
	should be synchronous or not; not all filesystems check this
	flag.

  put_inode: called when the VFS inode is removed from the inode
	cache.

  drop_inode: called when the last access to the inode is dropped,
	with the inode_lock spinlock held.

	This method should be either NULL (normal UNIX filesystem
	semantics) or "generic_delete_inode" (for filesystems that do not
	want to cache inodes - causing "delete_inode" to always be
	called regardless of the value of i_nlink).

	The "generic_delete_inode()" behavior is equivalent to the
	old practice of using "force_delete" in the put_inode() case,
	but does not have the races that the "force_delete()" approach
	had.

  delete_inode: called when the VFS wants to delete an inode

  put_super: called when the VFS wishes to free the superblock
	(i.e. unmount). This is called with the superblock lock held

  write_super: called when the VFS superblock needs to be written to
	disc. This method is optional

  sync_fs: called when the VFS is writing out all dirty data associated
	with a superblock. The second parameter indicates whether the
	method should wait until the write out has been completed.
	Optional.

  write_super_lockfs: called when the VFS is locking a filesystem and
	forcing it into a consistent state. This method is currently
	used by the Logical Volume Manager (LVM).

  unlockfs: called when the VFS is unlocking a filesystem and making it
	writable again.

  statfs: called when the VFS needs to get filesystem statistics. This
	is called with the kernel lock held

  remount_fs: called when the filesystem is remounted. This is called
	with the kernel lock held

  clear_inode: called when the VFS clears the inode. Optional

  umount_begin: called when the VFS is unmounting a filesystem.

  sync_inodes: called when the VFS is writing out dirty data associated
	with a superblock.

  show_options: called by the VFS to show mount options for
	/proc/<pid>/mounts.

  quota_read: called by the VFS to read from the filesystem quota file.

  quota_write: called by the VFS to write to the filesystem quota file.

The read_inode() method is responsible for filling in the "i_op"
field. This is a pointer to a "struct inode_operations" which
describes the methods that can be performed on individual inodes.
The Inode Object
================

An inode object represents an object within the filesystem.

struct inode_operations
-----------------------

This describes how the VFS can manipulate an inode in your
filesystem. As of kernel 2.6.13, the following members are defined:
struct inode_operations {
	int (*create) (struct inode *, struct dentry *, int, struct nameidata *);
	struct dentry * (*lookup) (struct inode *, struct dentry *, struct nameidata *);
	int (*link) (struct dentry *, struct inode *, struct dentry *);
	int (*unlink) (struct inode *, struct dentry *);
	int (*symlink) (struct inode *, struct dentry *, const char *);
	int (*mkdir) (struct inode *, struct dentry *, int);
	int (*rmdir) (struct inode *, struct dentry *);
	int (*mknod) (struct inode *, struct dentry *, int, dev_t);
	int (*rename) (struct inode *, struct dentry *,
		       struct inode *, struct dentry *);
	int (*readlink) (struct dentry *, char __user *, int);
	void * (*follow_link) (struct dentry *, struct nameidata *);
	void (*put_link) (struct dentry *, struct nameidata *, void *);
	void (*truncate) (struct inode *);
	int (*permission) (struct inode *, int, struct nameidata *);
	int (*setattr) (struct dentry *, struct iattr *);
	int (*getattr) (struct vfsmount *mnt, struct dentry *, struct kstat *);
	int (*setxattr) (struct dentry *, const char *, const void *, size_t, int);
	ssize_t (*getxattr) (struct dentry *, const char *, void *, size_t);
	ssize_t (*listxattr) (struct dentry *, char *, size_t);
	int (*removexattr) (struct dentry *, const char *);
};
Again, all methods are called without any locks being held, unless
otherwise noted.

  create: called by the open(2) and creat(2) system calls. Only
	required if you want to support regular files. The dentry you
	get should not have an inode (i.e. it should be a negative
	dentry). Here you will probably call d_instantiate() with the
	dentry and the newly created inode

  lookup: called when the VFS needs to look up an inode in a parent
	directory. The name to look for is found in the dentry. This
	method must call d_add() to insert the found inode into the
	dentry. The "i_count" field in the inode structure should be
	incremented. If the named inode does not exist a NULL inode
	should be inserted into the dentry (this is called a negative
	dentry). Returning an error code from this routine must only
	be done on a real error, otherwise creating inodes with system
	calls like creat(2), mknod(2), mkdir(2) and so on will fail.
	If you wish to overload the dentry methods then you should
	initialise the "d_op" field in the dentry; this is a pointer
	to a struct "dentry_operations".
	This method is called with the directory inode semaphore held

  link: called by the link(2) system call. Only required if you want
	to support hard links. You will probably need to call
	d_instantiate() just as you would in the create() method

  unlink: called by the unlink(2) system call. Only required if you
	want to support deleting inodes

  symlink: called by the symlink(2) system call. Only required if you
	want to support symlinks. You will probably need to call
	d_instantiate() just as you would in the create() method

  mkdir: called by the mkdir(2) system call. Only required if you want
	to support creating subdirectories. You will probably need to
	call d_instantiate() just as you would in the create() method

  rmdir: called by the rmdir(2) system call. Only required if you want
	to support deleting subdirectories

  mknod: called by the mknod(2) system call to create a device (char,
	block) inode or a named pipe (FIFO) or socket. Only required
	if you want to support creating these types of inodes. You
	will probably need to call d_instantiate() just as you would
	in the create() method

  rename: called by the rename(2) system call to rename the object to
	have the parent and name given by the second inode and dentry.

  readlink: called by the readlink(2) system call. Only required if
	you want to support reading symbolic links

  follow_link: called by the VFS to follow a symbolic link to the
	inode it points to. Only required if you want to support
	symbolic links. This method returns a void pointer cookie
	that is passed to put_link().

  put_link: called by the VFS to release resources allocated by
	follow_link(). The cookie returned by follow_link() is passed
	to this method as the last parameter. It is used by
	filesystems such as NFS where the page cache is not stable
	(i.e. the page that was installed when the symbolic link walk
	started might not be in the page cache at the end of the
	walk).

  truncate: called by the VFS to change the size of a file. The
	i_size field of the inode is set to the desired size by the
	VFS before this method is called. This method is called by
	the truncate(2) system call and related functionality.

  permission: called by the VFS to check for access rights on a
	POSIX-like filesystem.

  setattr: called by the VFS to set attributes for a file. This method
	is called by chmod(2) and related system calls.

  getattr: called by the VFS to get attributes of a file. This method
	is called by stat(2) and related system calls.

  setxattr: called by the VFS to set an extended attribute for a file.
	An extended attribute is a name:value pair associated with an
	inode. This method is called by the setxattr(2) system call.

  getxattr: called by the VFS to retrieve the value of an extended
	attribute name. This method is called by the getxattr(2)
	system call.

  listxattr: called by the VFS to list all extended attributes for a
	given file. This method is called by the listxattr(2) system
	call.

  removexattr: called by the VFS to remove an extended attribute from
	a file. This method is called by the removexattr(2) system
	call.
The Address Space Object
========================

The address space object is used to identify pages in the page cache.

struct address_space_operations
-------------------------------

This describes how the VFS can manipulate mapping of a file to page
cache in your filesystem. As of kernel 2.6.13, the following members
are defined:
struct address_space_operations {
	int (*writepage)(struct page *page, struct writeback_control *wbc);
	int (*readpage)(struct file *, struct page *);
	int (*sync_page)(struct page *);
	int (*writepages)(struct address_space *, struct writeback_control *);
	int (*set_page_dirty)(struct page *page);
	int (*readpages)(struct file *filp, struct address_space *mapping,
			 struct list_head *pages, unsigned nr_pages);
	int (*prepare_write)(struct file *, struct page *, unsigned, unsigned);
	int (*commit_write)(struct file *, struct page *, unsigned, unsigned);
	sector_t (*bmap)(struct address_space *, sector_t);
	int (*invalidatepage) (struct page *, unsigned long);
	int (*releasepage) (struct page *, int);
	ssize_t (*direct_IO)(int, struct kiocb *, const struct iovec *iov,
			     loff_t offset, unsigned long nr_segs);
	struct page *(*get_xip_page)(struct address_space *, sector_t, int);
};
  writepage: called by the VM to write a dirty page to backing store.

  readpage: called by the VM to read a page from backing store.

  sync_page: called by the VM to notify the backing store to perform
	all queued I/O operations for a page. I/O operations for other
	pages associated with this address_space object may also be
	performed.

  writepages: called by the VM to write out pages associated with the
	address_space object.

  set_page_dirty: called by the VM to set a page dirty.

  readpages: called by the VM to read pages associated with the
	address_space object.

  prepare_write: called by the generic write path in the VM to set up
	a write request for a page.

  commit_write: called by the generic write path in the VM to write a
	page to its backing store.

  bmap: called by the VFS to map a logical block offset within the
	object to a physical block number. This method is used by the
	legacy FIBMAP ioctl. Other uses are discouraged.

  invalidatepage: called by the VM on truncate to disassociate a page
	from its address_space mapping.

  releasepage: called by the VFS to release filesystem specific
	metadata from a page.

  direct_IO: called by the VM for direct I/O writes and reads.

  get_xip_page: called by the VM to translate a block number to a page.
	The page is valid until the corresponding filesystem is
	unmounted. Filesystems that want to use execute-in-place (XIP)
	need to implement it. An example implementation can be found in
	fs/ext2/xip.c.
The File Object
===============

A file object represents a file opened by a process.

struct file_operations
----------------------

This describes how the VFS can manipulate an open file. As of kernel
2.6.13, the following members are defined:
struct file_operations {
	loff_t (*llseek) (struct file *, loff_t, int);
	ssize_t (*read) (struct file *, char __user *, size_t, loff_t *);
	ssize_t (*aio_read) (struct kiocb *, char __user *, size_t, loff_t);
	ssize_t (*write) (struct file *, const char __user *, size_t, loff_t *);
	ssize_t (*aio_write) (struct kiocb *, const char __user *, size_t, loff_t);
	int (*readdir) (struct file *, void *, filldir_t);
	unsigned int (*poll) (struct file *, struct poll_table_struct *);
	int (*ioctl) (struct inode *, struct file *, unsigned int, unsigned long);
	long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
	long (*compat_ioctl) (struct file *, unsigned int, unsigned long);
	int (*mmap) (struct file *, struct vm_area_struct *);
	int (*open) (struct inode *, struct file *);
	int (*flush) (struct file *);
	int (*release) (struct inode *, struct file *);
	int (*fsync) (struct file *, struct dentry *, int datasync);
	int (*aio_fsync) (struct kiocb *, int datasync);
	int (*fasync) (int, struct file *, int);
	int (*lock) (struct file *, int, struct file_lock *);
	ssize_t (*readv) (struct file *, const struct iovec *, unsigned long, loff_t *);
	ssize_t (*writev) (struct file *, const struct iovec *, unsigned long, loff_t *);
	ssize_t (*sendfile) (struct file *, loff_t *, size_t, read_actor_t, void *);
	ssize_t (*sendpage) (struct file *, struct page *, int, size_t, loff_t *, int);
	unsigned long (*get_unmapped_area)(struct file *, unsigned long,
		unsigned long, unsigned long, unsigned long);
	int (*check_flags)(int);
	int (*dir_notify)(struct file *filp, unsigned long arg);
	int (*flock) (struct file *, int, struct file_lock *);
};
Again, all methods are called without any locks being held, unless
otherwise noted.

  llseek: called when the VFS needs to move the file position index

  read: called by read(2) and related system calls

  aio_read: called by io_submit(2) and other asynchronous I/O
	operations

  write: called by write(2) and related system calls

  aio_write: called by io_submit(2) and other asynchronous I/O
	operations

  readdir: called when the VFS needs to read the directory contents

  poll: called by the VFS when a process wants to check if there is
	activity on this file and (optionally) go to sleep until there
	is activity. Called by the select(2) and poll(2) system calls

  ioctl: called by the ioctl(2) system call

  unlocked_ioctl: called by the ioctl(2) system call. Filesystems that
	do not require the BKL should use this method instead of the
	ioctl() above.

  compat_ioctl: called by the ioctl(2) system call when 32 bit system
	calls are used on 64 bit kernels.

  mmap: called by the mmap(2) system call

  open: called by the VFS when an inode should be opened. When the VFS
	opens a file, it creates a new "struct file". It then calls the
	open method for the newly allocated file structure. You might
	think that the open method really belongs in
	"struct inode_operations", and you may be right. I think it's
	done the way it is because it makes filesystems simpler to
	implement. The open() method is a good place to initialize the
	"private_data" member in the file structure if you want to point
	to a device structure

  flush: called by the close(2) system call to flush a file

  release: called when the last reference to an open file is closed

  fsync: called by the fsync(2) system call

  fasync: called by the fcntl(2) system call when asynchronous
	(non-blocking) mode is enabled for a file

  lock: called by the fcntl(2) system call for F_GETLK, F_SETLK, and
	F_SETLKW commands

  readv: called by the readv(2) system call

  writev: called by the writev(2) system call

  sendfile: called by the sendfile(2) system call

  get_unmapped_area: called by the mmap(2) system call

  check_flags: called by the fcntl(2) system call for F_SETFL command

  dir_notify: called by the fcntl(2) system call for F_NOTIFY command

  flock: called by the flock(2) system call

Note that the file operations are implemented by the specific
filesystem in which the inode resides. When opening a device node
(character or block special) most filesystems will call special
support routines in the VFS which will locate the required device
driver information. These support routines replace the filesystem file
operations with those for the device driver, and then proceed to call
the new open() method for the file. This is how opening a device file
in the filesystem eventually ends up calling the device driver open()
method.
Directory Entry Cache (dcache)
==============================

struct dentry_operations
------------------------

This describes how a filesystem can overload the standard dentry
operations. Dentries and the dcache are the domain of the VFS and the
individual filesystem implementations. Device drivers have no business
here. These methods may be set to NULL, as they are either optional or
the VFS uses a default. As of kernel 2.6.13, the following members are
defined:

struct dentry_operations {
	int (*d_revalidate)(struct dentry *, struct nameidata *);
	int (*d_hash) (struct dentry *, struct qstr *);
	int (*d_compare) (struct dentry *, struct qstr *, struct qstr *);
	int (*d_delete)(struct dentry *);
	void (*d_release)(struct dentry *);
	void (*d_iput)(struct dentry *, struct inode *);
};
  d_revalidate: called when the VFS needs to revalidate a dentry. This
	is called whenever a name look-up finds a dentry in the
	dcache. Most filesystems leave this as NULL, because all their
	dentries in the dcache are valid

  d_hash: called when the VFS adds a dentry to the hash table

  d_compare: called when a dentry should be compared with another

  d_delete: called when the last reference to a dentry is
	deleted. This means no-one is using the dentry, however it is
	still valid and in the dcache

  d_release: called when a dentry is really deallocated

  d_iput: called when a dentry loses its inode (just prior to its
	being deallocated). The default when this is NULL is that the
	VFS calls iput(). If you define this method, you must call
	iput() yourself

Each dentry has a pointer to its parent dentry, as well as a hash list
of child dentries. Child dentries are basically like files in a
directory.
Directory Entry Cache API
-------------------------

There are a number of functions defined which permit a filesystem to
manipulate dentries:

  dget: open a new handle for an existing dentry (this just increments
	the usage count)

  dput: close a handle for a dentry (decrements the usage count). If
	the usage count drops to 0, the "d_delete" method is called
	and the dentry is placed on the unused list if the dentry is
	still in its parent's hash list. Putting the dentry on the
	unused list just means that if the system needs some RAM, it
	goes through the unused list of dentries and deallocates them.
	If the dentry has already been unhashed and the usage count
	drops to 0, the dentry is deallocated after the "d_delete"
	method is called.

  d_drop: this unhashes a dentry from its parent's hash list. A
	subsequent call to dput() will deallocate the dentry if its
	usage count drops to 0

  d_delete: delete a dentry. If there are no other open references to
	the dentry then the dentry is turned into a negative dentry
	(the d_iput() method is called). If there are other
	references, then d_drop() is called instead

  d_add: add a dentry to its parent's hash list and then call
	d_instantiate()

  d_instantiate: add a dentry to the alias hash list for the inode and
	update the "d_inode" member. The "i_count" member in the
	inode structure should be set/incremented. If the inode
	pointer is NULL, the dentry is called a "negative
	dentry". This function is commonly called when an inode is
	created for an existing negative dentry

  d_lookup: look up a dentry given its parent and path name component.
	It looks up the child of that given name from the dcache
	hash table. If it is found, the reference count is incremented
	and the dentry is returned. The caller must use dput()
	to free the dentry when it finishes using it.
RCU-based dcache locking model
------------------------------
On many workloads, the most common operation on dcache is
to look up a dentry, given a parent dentry and the name
of the child. Typically, for every open(), stat() etc.,
the dentry corresponding to the pathname will be looked
up by walking the tree starting with the first component
of the pathname and using that dentry along with the next
component to look up the next level and so on. Since it
is a frequent operation for workloads like multiuser
environments and web servers, it is important to optimize
this path.
Prior to 2.5.10, dcache_lock was acquired in d_lookup and thus
in every component during path look-up. From 2.5.10 onwards, the
fast-walk algorithm changed this by holding the dcache_lock
at the beginning and walking as many cached path component
dentries as possible. This significantly decreased the number
of acquisitions of dcache_lock. However, it also increased the
lock hold time significantly and affected performance on large
SMP machines. Since the 2.5.62 kernel, dcache has been using
a new locking model that uses RCU to make dcache look-up
lock-free.
The current dcache locking model is not very different from the
previous one. Prior to the 2.5.62 kernel, dcache_lock
protected the hash chain, the d_child, d_alias and d_lru lists, as well
as d_inode and several other things like mount look-up. The RCU-based
changes affect only the way the hash chain is protected. For everything
else the dcache_lock must be taken for both traversing as well as
updating. The hash chain updates too take the dcache_lock.
The significant change is the way d_lookup traverses the hash chain:
it doesn't acquire the dcache_lock for this and relies on RCU to
ensure that the dentry has not been *freed*.
Dcache locking details
----------------------
For many multi-user workloads, open() and stat() on files are
very frequently occurring operations. Both involve walking
path names to find the dentry corresponding to the
concerned file. In the 2.4 kernel, dcache_lock was held
during look-up of each path component. Contention and
cache-line bouncing of this global lock caused significant
scalability problems. With the introduction of RCU
into the Linux kernel, this was worked around by making
the look-up of path components during path walking lock-free.
Safe lock-free look-up of dcache hash table
===========================================
Dcache is a complex data structure with the hash table entries
also linked together in other lists. In the 2.4 kernel, dcache_lock
protected all the lists. We applied RCU only to hash chain
walking. The rest of the lists are still protected by dcache_lock.
Some of the important changes are:
 1. The deletion from the hash chain is done using the hlist_del_rcu()
    macro, which doesn't initialize the next pointer of the deleted
    dentry; this allows us to walk the chain safely and lock-free
    while a deletion is happening.

 2. Insertion of a dentry into the hash table is done using
    hlist_add_head_rcu(), which takes care of ordering the writes -
    the writes to the dentry must be visible before the dentry
    is inserted. This works in conjunction with hlist_for_each_rcu()
    while walking the hash chain. The only requirement is that
    all initialization of the dentry must be done before
    hlist_add_head_rcu(), since we don't have dcache_lock protection
    while traversing the hash chain. This isn't different from the
    existing code.

 3. A dentry looked up without holding dcache_lock cannot be
    returned if it is unhashed. It may then have a NULL
    d_inode or other bogosity, since RCU doesn't protect the other
    fields in the dentry. We therefore use a flag, DCACHE_UNHASHED, to
    indicate unhashed dentries and use this in conjunction with a
    per-dentry lock (d_lock). Once looked up without the dcache_lock,
    we acquire the per-dentry lock (d_lock) and check whether the
    dentry is unhashed. If so, the look-up fails. If not, the
    reference count of the dentry is incremented and the dentry is
    returned.

 4. Once a dentry is looked up, it must be ensured that it doesn't go
    away during the path walk for that component. In pre-2.5.10 code,
    this was done by holding a reference to the dentry. dcache_rcu does
    the same. In some sense, dcache_rcu path walking looks like
    the pre-2.5.10 version.

 5. All dentry hash chain updates must take the dcache_lock as well as
    the per-dentry lock, in that order. dput() does this to ensure
    that a dentry that has just been looked up on another CPU
    doesn't get deleted before dget() can be done on it.

 6. There are several ways to do reference counting of RCU-protected
    objects. One such example is the ipv4 route cache, where
    deferred freeing (using call_rcu()) is done as soon as
    the reference count goes to zero. This cannot be done in
    the case of dentries because tearing down a dentry
    requires blocking (dentry_iput()), which isn't supported from
    RCU callbacks. Instead, tearing down of dentries happens
    synchronously in dput(), but the actual freeing happens later,
    when the RCU grace period is over. This allows safe lock-free
    walking of the hash chains, but a matched dentry may have
    been partially torn down. Checking the DCACHE_UNHASHED
    flag with d_lock held detects such dentries and prevents
    them from being returned from look-up.
Maintaining POSIX rename semantics
==================================
Since look-up of dentries is lock-free, it can race against
a concurrent rename operation. For example, during rename
of file A to B, look-up of either A or B must succeed.
So, if look-up of B happens after A has been removed from the
hash chain but not yet added to the new hash chain, it may fail.
Also, a comparison while the name is being written concurrently
by a rename may result in false positive matches, violating
rename semantics. Issues related to races with rename are
handled as described below:
 1. Look-up can be done in two ways - d_lookup(), which is safe
    from simultaneous renames, and __d_lookup(), which is not.
    If __d_lookup() fails, it must be followed up by a d_lookup()
    to correctly determine whether a dentry is in the hash table
    or not. d_lookup() protects look-ups using a sequence
    lock (rename_lock).

 2. The name associated with a dentry (d_name) may be changed if
    a rename is allowed to happen simultaneously. To avoid the
    memcmp() in __d_lookup() going out of bounds due to a rename,
    and to avoid false positive comparisons, the name comparison
    is done while holding the per-dentry lock. This prevents
    concurrent renames during this operation.

 3. Hash table walking during look-up may move to a different bucket
    when the current dentry is moved to a different bucket due to
    rename. But we use hlists in the dcache hash table and they are
    null-terminated. So, even if a dentry moves to a different bucket,
    the hash chain walk will terminate. [With a list_head list, it may
    not, since termination is when the list_head in the original
    bucket is reached.] Since we redo the d_parent check and compare
    the name while holding d_lock, lock-free look-up will not race
    against d_move().

 4. There can be a theoretical race when a dentry keeps coming back
    to its original bucket due to double moves. Due to this, the
    look-up may consider that it has never moved and can end up in an
    infinite loop. But this is no worse than the theoretical livelocks
    we already have in the kernel.
Important guidelines for filesystem developers related to dcache_rcu
====================================================================
 1. Existing dcache interfaces (pre-2.5.62) exported to filesystems
    don't change. Only the dcache-internal implementation changes.
    However, filesystems *must not* delete from the dentry hash chains
    directly using the list macros as was allowed earlier. They must
    use dcache APIs like d_drop() or __d_drop() depending on the
    situation.

 2. d_flags is now protected by a per-dentry lock (d_lock). All
    access to d_flags must be protected by it.

 3. For a hashed dentry, checking of d_count needs to be protected
    by d_lock.
Papers and other documentation on dcache locking
================================================

1. Scaling dcache with RCU (http://linuxjournal.com/article.php?sid=7124).

2. http://lse.sourceforge.net/locking/dcache/dcache.html
Resources
=========

(Note that some of these resources are not up-to-date with the latest
kernel version.)

Creating Linux virtual filesystems. 2002
    <http://lwn.net/Articles/13325/>

The Linux Virtual File-system Layer by Neil Brown. 1999
    <http://www.cse.unsw.edu.au/~neilb/oss/linux-commentary/vfs.html>

A tour of the Linux VFS by Michael K. Johnson. 1996
    <http://www.tldp.org/LDP/khg/HyperNews/get/fs/vfstour.html>

A small trail through the Linux kernel by Andries Brouwer. 2001
    <http://www.win.tue.nl/~aeb/linux/vfs/trail.html>