
readahead: make mmap_miss an unsigned int

This makes the performance impact of a possible mmap_miss wraparound
temporary and tolerable: i.e. MMAP_LOTSAMISS=100 extra readarounds.

Otherwise, if mmap_miss ever wraps around to a negative value, it takes
INT_MAX cache misses to bring it back to its normal state.  During that
time, mmap readaround will be _enabled_ for whatever wild random workload
is running.  That's an almost permanent performance impact.
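
To make the arithmetic behind this concrete, here is a small standalone C
sketch.  It is illustrative only, not kernel code: MMAP_LOTSAMISS=100 is the
threshold quoted above, it assumes readaround is suppressed once mmap_miss
exceeds that threshold (as the changelog implies), and it assumes the
kernel's wrapping semantics for signed integer overflow.

	#include <limits.h>
	#include <stdio.h>

	#define MMAP_LOTSAMISS 100	/* threshold quoted in the changelog */

	int main(void)
	{
		/*
		 * A signed counter that wraps past INT_MAX lands at INT_MIN
		 * (with wrapping overflow semantics), so roughly INT_MAX
		 * further misses are needed before it exceeds MMAP_LOTSAMISS
		 * again and readaround is suppressed.
		 */
		long long after_signed_wrap =
			(long long)MMAP_LOTSAMISS - (long long)INT_MIN;

		/*
		 * An unsigned counter wraps from UINT_MAX to 0, so only about
		 * MMAP_LOTSAMISS further misses are needed.
		 */
		unsigned int after_unsigned_wrap = MMAP_LOTSAMISS;

		printf("misses to recover, signed wrap:   %lld\n",
		       after_signed_wrap);
		printf("misses to recover, unsigned wrap: %u\n",
		       after_unsigned_wrap);
		return 0;
	}

In other words, the unsigned field turns a wrapped counter's recovery time
from roughly two billion cache misses into roughly a hundred.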

Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Cc: Ying Han <yinghan@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Wu Fengguang, 16 years ago
commit 1ebf26a9b3
1 changed file with 1 addition and 1 deletion

 include/linux/fs.h | 2 +-

@@ -879,7 +879,7 @@ struct file_ra_state {
 					   there are only # of pages ahead */
 
 	unsigned int ra_pages;		/* Maximum readahead window */
-	int mmap_miss;			/* Cache miss stat for mmap accesses */
+	unsigned int mmap_miss;		/* Cache miss stat for mmap accesses */
 	loff_t prev_pos;		/* Cache last read() position */
 };