path: root/fs
2009-11-20  nilfs2: add norecovery mount option  (Ryusuke Konishi)
This adds a "norecovery" mount option which disables temporary write access to read-only mounts or snapshots during mount/recovery. Without this option, write access will be performed even for those types of mounts; the temporary write access is needed to mount the root file system read-only after an unclean shutdown. This option is helpful when the user wants to prevent any write access to the device. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp> Cc: Eric Sandeen <sandeen@redhat.com>
2009-11-20  nilfs2: add helper to get if volume is in a valid state  (Ryusuke Konishi)
This adds a helper function, nilfs_valid_fs(), which returns whether nilfs is in a valid state. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: move recovery completion into load_nilfs function  (Ryusuke Konishi)
Although mount recovery of nilfs is integrated into the load_nilfs() procedure, the completion of recovery was isolated from that procedure and performed at the end of the fill_super routine. This was somewhat confusing since the recovery is needed for the nilfs object, not for a super block instance. To resolve the inconsistency, this integrates the recovery completion into load_nilfs(). Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: apply readahead for recovery on mount  (Ryusuke Konishi)
This inserts readahead in the recovery code. The readahead request is issued per segment while searching for the latest super root block. This shortens mount time after an unclean unmount. A measurement shows the recovery time was reduced by more than 60 percent: e.g. real 0m11.586s -> 0m3.918s (x 2.96) Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: clean up get/put function of a segment usage  (Ryusuke Konishi)
This eliminates the obsolete nilfs_sufile_get_segment_usage() and nilfs_sufile_put_segment_usage() functions from sufile. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: move routine to set segment usage into sufile  (Ryusuke Konishi)
This adds the nilfs_sufile_set_segment_usage() function to sufile to replace direct access to the sufile metadata in the log writer code. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: move routine marking segment usage dirty into sufile  (Ryusuke Konishi)
This adds the nilfs_sufile_mark_dirty() function to sufile to replace the nilfs_touch_segusage() function in the log writer code. This is a preparation for further cleanup which will move low-level sufile operations out of the log writer. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: insert cache operation in palloc get block routines  (Ryusuke Konishi)
This implements cache operations in the get-block routines of the palloc code: nilfs_palloc_get_desc_block(), nilfs_palloc_get_bitmap_block(), and nilfs_palloc_get_entry_block(). This completes the palloc cache. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: add palloc cache to ifile  (Ryusuke Konishi)
This adds the palloc cache to ifile. The palloc cache is allocated in the extended region of the nilfs_mdt_info struct. The struct nilfs_ifile_info defines the extended in-memory structure of ifile. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: flush palloc cache before manipulating data pages of GC dat  (Ryusuke Konishi)
Data pages in the gcdat metadata file (i.e. the secondary DAT for GC) are cleared or even moved back to the normal DAT when a garbage collection pass has finished. Buffer heads held by the palloc cache of gcdat must be cleared before these page cache manipulations. This adds nilfs_palloc_clear_cache() to ensure this. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: add palloc cache to dat  (Ryusuke Konishi)
This adds the palloc cache to the DAT file. The palloc cache is allocated in the extended region of the nilfs_mdt_info struct. The struct nilfs_dat_info defines the extended in-memory structure of DAT. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: add cache framework for persistent object allocator  (Ryusuke Konishi)
This adds setup and cleanup routines for the persistent object allocator cache. According to ftrace analyses, accessing buffers of the DAT file repeatedly incurs considerable overhead. To mitigate the overhead, this introduces a cache framework for the persistent object allocator (palloc) which the DAT file and ifile are using. struct nilfs_palloc_cache represents the cache object per metadata file using palloc. The cache is initialized through nilfs_palloc_setup_cache() and destroyed by nilfs_palloc_destroy_cache(); callers of the former function will be added to the individual allocators of DAT and ifile in successive patches. nilfs_palloc_destroy_cache() will be called from nilfs_mdt_destroy() if the cache is attached to a metadata file. A companion function, nilfs_palloc_clear_cache(), is provided to allow releasing buffer head references independently of the cleanup task. This adjunctive function will be used before invalidating pages of a metadata file with the cache. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
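The setup/clear/destroy life cycle described above can be pictured with a small stand-alone C model. This is only an illustrative sketch: the struct and function names below (palloc_cache_model, cache_setup, cache_clear, cache_destroy) are invented for the example and model cached block references with plain heap pointers rather than buffer heads.

    #include <stdlib.h>
    #include <string.h>

    struct palloc_cache_model {
        void *desc_block;    /* cached group descriptor block, if any */
        void *bitmap_block;  /* cached bitmap block, if any */
    };

    static void cache_setup(struct palloc_cache_model *c)
    {
        memset(c, 0, sizeof(*c));            /* start with nothing cached */
    }

    static void cache_clear(struct palloc_cache_model *c)
    {
        free(c->desc_block);                 /* drop the held references */
        free(c->bitmap_block);
        memset(c, 0, sizeof(*c));
    }

    static void cache_destroy(struct palloc_cache_model *c)
    {
        cache_clear(c);                      /* destroying implies a final clear */
    }

    int main(void)
    {
        struct palloc_cache_model c;

        cache_setup(&c);
        c.desc_block = malloc(64);           /* pretend a block was looked up */
        cache_clear(&c);                     /* e.g. before invalidating pages */
        cache_destroy(&c);                   /* called again from file teardown */
        return 0;
    }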
2009-11-20  nilfs2: unfold nilfs_palloc_block_get_bitmap function  (Ryusuke Konishi)
This expands a trivial address calculation in the function into each of its call sites. This expansion improves the readability of the callers. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: eliminate nilfs_btnode_get function  (Ryusuke Konishi)
This removes the obsolete nilfs_btnode_get() function and makes nilfs_btree_get_block() call nilfs_btnode_submit_block() directly. This expansion provides a better opportunity for code optimization. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: remove newblk argument from nilfs_btnode_submit_block  (Ryusuke Konishi)
This removes the obsolete argument from nilfs_btnode_submit_block(). This completes the separation of the btree node creation function. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: use nilfs_btnode_create_block function  (Ryusuke Konishi)
This replaces the use of nilfs_btnode_get() for creating new btree node blocks with nilfs_btnode_create_block(). Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: separate function for creating new btree node block  (Ryusuke Konishi)
This adds a separate routine for creating a btree node block. It is a preparation for reducing the depth of function calls when submitting a btree node buffer. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: avoid readahead on metadata file for create mode  (Ryusuke Konishi)
This turns off the readahead action on metadata files if the nilfs_mdt_get_block() function is called with the create flag. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: simplify nilfs_sufile_get_ncleansegs function  (Ryusuke Konishi)
Previously, this function returned a status code so that it could report possible errors. The ("nilfs2: add local variable to cache the number of clean segments") patch removed the possibility of returning errors, so this simplifies the function definition to directly return the number of clean segments. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
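A tiny stand-alone C sketch of the interface change may help: the old shape returns a status code and hands the count back through a pointer, while the new shape returns the cached count directly. All names here (sufile_model, get_ncleansegs_old, get_ncleansegs) are illustrative, not the actual nilfs2 symbols.

    #include <stdio.h>

    struct sufile_model { unsigned long ncleansegs; /* cached by the previous patch */ };

    /* old calling convention: can no longer fail, so the int is pointless */
    static int get_ncleansegs_old(const struct sufile_model *s, unsigned long *nsegsp)
    {
        *nsegsp = s->ncleansegs;
        return 0;
    }

    /* new calling convention: return the number of clean segments directly */
    static unsigned long get_ncleansegs(const struct sufile_model *s)
    {
        return s->ncleansegs;
    }

    int main(void)
    {
        struct sufile_model s = { .ncleansegs = 42 };
        unsigned long n;

        get_ncleansegs_old(&s, &n);
        printf("old: %lu, new: %lu\n", n, get_ncleansegs(&s));
        return 0;
    }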
2009-11-20  nilfs2: add local variable to cache the number of clean segments  (Ryusuke Konishi)
This makes it possible for sufile to get the number of clean segments faster. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: unfold nilfs_sufile_block_get_header function  (Ryusuke Konishi)
This unfolds the nilfs_sufile_block_get_header() function for simplicity. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: hide nilfs_mdt_clear calls in nilfs_mdt_destroy  (Ryusuke Konishi)
This hides the call to nilfs_mdt_clear() inside nilfs_mdt_destroy(). This ensures that nilfs_mdt_destroy() performs the cleanup jobs included in nilfs_mdt_clear(). Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: eliminate inlines to directly read/write inode of metadata files  (Ryusuke Konishi)
Removes two inline functions: nilfs_mdt_read_inode_direct() and nilfs_mdt_write_inode_direct(). Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: separate read method of meta data files on super root block  (Ryusuke Konishi)
This replaces the nilfs_mdt_read_inode_direct() function with individual read methods: nilfs_dat_read(), nilfs_sufile_read(), and nilfs_cpfile_read(). This provides an opportunity to initialize local variables of each metadata file after reading its inode. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: separate constructor of metadata files  (Ryusuke Konishi)
This replaces the nilfs_mdt_new() constructor with individual metadata file constructors like nilfs_dat_new(), nilfs_sufile_new(), nilfs_cpfile_new(), and nilfs_ifile_new(). This makes it possible for each metadata file to have its own initialization code. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: add size option of private object to metadata file allocator  (Ryusuke Konishi)
This adds an optional "object size" argument to the nilfs_mdt_new_common() function; the argument specifies the size of a private object attached to a newly allocated metadata file inode. This affords space to keep local variables for metadata files. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: move out mark_inode_dirty calls from bmap routines  (Ryusuke Konishi)
Previously, nilfs_bmap_add_blocks() and nilfs_bmap_sub_blocks() called mark_inode_dirty() after they changed the number of data blocks. This moves these calls outside the outermost bmap functions such as nilfs_bmap_insert() or nilfs_bmap_truncate(). This mitigates overhead for truncate and delete operations since they repeatedly remove sets of blocks. A nearly 10 percent improvement was observed for removal of a large file: # dd if=/dev/zero of=/test/aaa bs=1M count=512 # time rm /test/aaa real 2.968s -> 2.705s Further optimization may be possible by eliminating these mark_inode_dirty() uses, though I avoid mixing separate changes here. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
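The batching idea can be modelled in a few lines of stand-alone C: the inner helper only adjusts the block counter, and the outermost operation marks the inode dirty once instead of once per pass. The names (inode_model, sub_blocks, truncate_blocks) are illustrative; this is a model of the optimization, not the nilfs2 implementation.

    #include <stdbool.h>
    #include <stdio.h>

    struct inode_model { unsigned long blocks; bool dirty; unsigned marks; };

    static void mark_dirty(struct inode_model *i) { i->dirty = true; i->marks++; }

    /* inner helper: now only adjusts the counter */
    static void sub_blocks(struct inode_model *i, unsigned n) { i->blocks -= n; }

    /* outermost truncate-like operation: marks dirty once at the end */
    static void truncate_blocks(struct inode_model *i, unsigned per_pass, unsigned passes)
    {
        for (unsigned p = 0; p < passes; p++)
            sub_blocks(i, per_pass);   /* previously each pass marked dirty */
        mark_dirty(i);                 /* now a single mark for the whole op */
    }

    int main(void)
    {
        struct inode_model i = { .blocks = 1024 };

        truncate_blocks(&i, 64, 16);
        printf("blocks=%lu dirty-marks=%u\n", i.blocks, i.marks); /* 0, 1 */
        return 0;
    }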
2009-11-20  nilfs2: stop marking metadata inode dirty within btree operations  (Ryusuke Konishi)
Since metadata file routines mark the inode dirty after they successfully changed bmap objects, nilfs_mdt_mark_dirty() calls in nilfs_bmap_add_blocks() and nilfs_bmap_sub_blocks() are redundant. This removes these overlapping calls from the bmap routines. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: remove buffer locking from btree code  (Ryusuke Konishi)
lock_buffer() and unlock_buffer() uses in btree.c are eliminable because btree functions gain buffer heads through nilfs_btnode_get(), which never returns an on-the-fly buffer. Although nilfs_clear_dirty_page() and nilfs_copy_back_pages() in nilfs_commit_gcdat_inode() juggle btree node buffers of DAT, this is safe because these operations are protected by a log writer lock or the metadata file semaphore of DAT. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: remove buffer locking in nilfs_mark_inode_dirty  (Ryusuke Konishi)
This lock is eliminable because inodes on the buffer can be updated independently. Although a log writer also fills in bmap data on the on-disk inodes, that update is done exclusively under the log writer lock. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: cleanup unused match_bool function  (Jiro SEKIBA)
The match_bool function is not used anymore. Signed-off-by: Jiro SEKIBA <jir@unicus.jp> Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: Using nobarrier option instead of barrier=off  (Jiro SEKIBA)
Since most filesystems use a "nofoobar" style of option, this changes the barrier=off option to nobarrier. Signed-off-by: Jiro SEKIBA <jir@unicus.jp> Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: move definition of struct nilfs_btree_node  (Jiro SEKIBA)
This is a trivial patch to expose struct nilfs_btree_node. The struct should be exposed outside of the kernel because it is part of the on-disk format. Signed-off-by: Jiro SEKIBA <jir@unicus.jp> Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-20  nilfs2: get rid of BUG_ON use in btree lookup routines  (Ryusuke Konishi)
The current btree lookup routines cause a kernel oops when they detect an inconsistency in btree blocks. These routines should instead return a proper error code because the inconsistency usually comes from corruption of on-disk metadata. This fixes the issue by converting the BUG_ON calls to proper error handling. Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
2009-11-19  Merge branch 'bugfixes' of git://git.linux-nfs.org/projects/trondmy/nfs-2.6  (Linus Torvalds)
* 'bugfixes' of git://git.linux-nfs.org/projects/trondmy/nfs-2.6: SUNRPC: Address buffer overrun in rpc_uaddr2sockaddr(); NFSv4: Fix a cache validation bug which causes getcwd() to return ENOENT
2009-11-19  ext4: make "norecovery" an alias for "noload"  (Eric Sandeen)
Users on the linux-ext4 list recently complained about differences across filesystems w.r.t. how to mount without a journal replay. In the discussion it was noted that xfs's "norecovery" option is perhaps more descriptively accurate than "noload," so let's make that an alias for ext4. Also show this status in /proc/mounts Signed-off-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
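The alias mechanism amounts to two option strings mapping onto the same internal flag. The following stand-alone C sketch models that idea; the table, flag, and function names are illustrative and this is not the ext4 option parser itself.

    #include <stdio.h>
    #include <string.h>

    #define OPT_NOLOAD 0x01u

    static const struct { const char *name; unsigned flag; } opt_table[] = {
        { "noload",     OPT_NOLOAD },
        { "norecovery", OPT_NOLOAD },   /* alias: same behaviour as noload */
    };

    static unsigned parse_option(const char *opt)
    {
        for (size_t i = 0; i < sizeof(opt_table) / sizeof(opt_table[0]); i++)
            if (strcmp(opt, opt_table[i].name) == 0)
                return opt_table[i].flag;
        return 0;
    }

    int main(void)
    {
        printf("noload -> %#x, norecovery -> %#x\n",
               parse_option("noload"), parse_option("norecovery"));
        return 0;
    }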
2009-11-19  ext4: make trim/discard optional (and off by default)  (Eric Sandeen)
It is anticipated that when sb_issue_discard starts doing real work on trim-capable devices, we may see issues. Make this mount-time optional, and default it to off until we know that things are working out OK. Signed-off-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2009-11-23  ext4: fix error handling in ext4_ind_get_blocks()  (Jan Kara)
When an error happened in ext4_splice_branch() we failed to notice it in ext4_ind_get_blocks() and mapped the buffer anyway. Fix the problem by checking for the error properly. Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Cc: stable@kernel.org
2009-11-23  ext4: avoid issuing unnecessary barriers  (Theodore Ts'o)
We don't need to issue an I/O barrier on an error, or if we force a commit because we are doing data journaling. Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Cc: Jan Kara <jack@suse.cz> Cc: stable@kernel.org
2009-11-19  CacheFiles: Don't log lookup/create failing with ENOBUFS  (David Howells)
Don't log the CacheFiles lookup/create object routines failing with ENOBUFS, as under high memory load or high cache load they can do this quite a lot. This error simply means that the requested object cannot be created on disk due to lack of space, or due to failure of the backing filesystem to find sufficient resources. Signed-off-by: David Howells <dhowells@redhat.com>
2009-11-19  CacheFiles: Catch an overly long wait for an old active object  (David Howells)
Catch an overly long wait for an old, dying active object when we want to replace it with a new one. The probability is that all the slow-work threads are hogged, and the delete can't get a look in. What we do instead is: (1) if there's nothing in the slow work queue, we sleep until either the dying object has finished dying or there is something in the slow work queue behind which we can queue our object. (2) if there is something in the slow work queue, we return ETIMEDOUT to fscache_lookup_object(), which then puts us back on the slow work queue, presumably behind the deletion that we're blocked by. We are then deferred for a while until we work our way back through the queue - without blocking a slow-work thread unnecessarily. A backtrace similar to the following may appear in the log without this patch: INFO: task kslowd004:5711 blocked for more than 120 seconds. "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. kslowd004 D 0000000000000000 0 5711 2 0x00000080 ffff88000340bb80 0000000000000046 ffff88002550d000 0000000000000000 ffff88002550d000 0000000000000007 ffff88000340bfd8 ffff88002550d2a8 000000000000ddf0 00000000000118c0 00000000000118c0 ffff88002550d2a8 Call Trace: [<ffffffff81058e21>] ? trace_hardirqs_on+0xd/0xf [<ffffffffa011c4d8>] ? cachefiles_wait_bit+0x0/0xd [cachefiles] [<ffffffffa011c4e1>] cachefiles_wait_bit+0x9/0xd [cachefiles] [<ffffffff81353153>] __wait_on_bit+0x43/0x76 [<ffffffff8111ae39>] ? ext3_xattr_get+0x1ec/0x270 [<ffffffff813531ef>] out_of_line_wait_on_bit+0x69/0x74 [<ffffffffa011c4d8>] ? cachefiles_wait_bit+0x0/0xd [cachefiles] [<ffffffff8104c125>] ? wake_bit_function+0x0/0x2e [<ffffffffa011bc79>] cachefiles_mark_object_active+0x203/0x23b [cachefiles] [<ffffffffa011c209>] cachefiles_walk_to_object+0x558/0x827 [cachefiles] [<ffffffffa011a429>] cachefiles_lookup_object+0xac/0x12a [cachefiles] [<ffffffffa00aa1e9>] fscache_lookup_object+0x1c7/0x214 [fscache] [<ffffffffa00aafc5>] fscache_object_state_machine+0xa5/0x52d [fscache] [<ffffffffa00ab4ac>] fscache_object_slow_work_execute+0x5f/0xa0 [fscache] [<ffffffff81082093>] slow_work_execute+0x18f/0x2d1 [<ffffffff8108239a>] slow_work_thread+0x1c5/0x308 [<ffffffff8104c0f1>] ? autoremove_wake_function+0x0/0x34 [<ffffffff810821d5>] ? slow_work_thread+0x0/0x308 [<ffffffff8104be91>] kthread+0x7a/0x82 [<ffffffff8100beda>] child_rip+0xa/0x20 [<ffffffff8100b87c>] ? restore_args+0x0/0x30 [<ffffffff8104be17>] ? kthread+0x0/0x82 [<ffffffff8100bed0>] ? child_rip+0x0/0x20 1 lock held by kslowd004/5711: #0: (&sb->s_type->i_mutex_key#7/1){+.+.+.}, at: [<ffffffffa011be64>] cachefiles_walk_to_object+0x1b3/0x827 [cachefiles] Signed-off-by: David Howells <dhowells@redhat.com>
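The wait-versus-requeue decision described in (1) and (2) can be modelled with a small stand-alone C program: if nothing else is queued we keep waiting for the dying object, otherwise we bail out with ETIMEDOUT so the caller can requeue the lookup behind the queued work. The state structure and function names are illustrative, not the CacheFiles code.

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct state { bool old_object_dead; unsigned queued_work; };

    /* Returns 0 once the old object is gone, or -ETIMEDOUT to ask the
     * caller to requeue the lookup instead of blocking a worker thread. */
    static int wait_for_old_object(struct state *s)
    {
        while (!s->old_object_dead) {
            if (s->queued_work > 0)
                return -ETIMEDOUT;       /* defer: requeue behind pending work */
            /* would sleep until woken; modelled here as the object dying */
            s->old_object_dead = true;
        }
        return 0;
    }

    int main(void)
    {
        struct state idle = { false, 0 }, busy = { false, 3 };

        printf("idle queue: %d, busy queue: %d\n",
               wait_for_old_object(&idle), wait_for_old_object(&busy));
        return 0;
    }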
2009-11-19  CacheFiles: Better showing of debugging information in active object problems  (David Howells)
Show more debugging information if cachefiles_mark_object_active() is asked to activate an active object. This may happen, for instance, if the netfs tries to register an object with the same key multiple times. The code is changed to (a) get the appropriate object lock to protect the cookie pointer whilst we dereference it, and (b) get and display the cookie key if available. Signed-off-by: David Howells <dhowells@redhat.com>
2009-11-19  CacheFiles: Mark parent directory locks as I_MUTEX_PARENT to keep lockdep happy  (David Howells)
Mark parent directory locks as I_MUTEX_PARENT in the callers of cachefiles_bury_object() so that lockdep doesn't complain when that invokes vfs_unlink(): ============================================= [ INFO: possible recursive locking detected ] 2.6.32-rc6-cachefs #47 --------------------------------------------- kslowd002/3089 is trying to acquire lock: (&sb->s_type->i_mutex_key#7){+.+.+.}, at: [<ffffffff810bbf72>] vfs_unlink+0x8b/0x128 but task is already holding lock: (&sb->s_type->i_mutex_key#7){+.+.+.}, at: [<ffffffffa00e4e61>] cachefiles_walk_to_object+0x1b0/0x831 [cachefiles] other info that might help us debug this: 1 lock held by kslowd002/3089: #0: (&sb->s_type->i_mutex_key#7){+.+.+.}, at: [<ffffffffa00e4e61>] cachefiles_walk_to_object+0x1b0/0x831 [cachefiles] stack backtrace: Pid: 3089, comm: kslowd002 Not tainted 2.6.32-rc6-cachefs #47 Call Trace: [<ffffffff8105ad7b>] __lock_acquire+0x1649/0x16e3 [<ffffffff8118170e>] ? inode_has_perm+0x5f/0x61 [<ffffffff8105ae6c>] lock_acquire+0x57/0x6d [<ffffffff810bbf72>] ? vfs_unlink+0x8b/0x128 [<ffffffff81353ac3>] mutex_lock_nested+0x54/0x292 [<ffffffff810bbf72>] ? vfs_unlink+0x8b/0x128 [<ffffffff8118179e>] ? selinux_inode_permission+0x8e/0x90 [<ffffffff8117e271>] ? security_inode_permission+0x1c/0x1e [<ffffffff810bb4fb>] ? inode_permission+0x99/0xa5 [<ffffffff810bbf72>] vfs_unlink+0x8b/0x128 [<ffffffff810adb19>] ? kfree+0xed/0xf9 [<ffffffffa00e3f00>] cachefiles_bury_object+0xb6/0x420 [cachefiles] [<ffffffff81058e21>] ? trace_hardirqs_on+0xd/0xf [<ffffffffa00e7e24>] ? cachefiles_check_object_xattr+0x233/0x293 [cachefiles] [<ffffffffa00e51b0>] cachefiles_walk_to_object+0x4ff/0x831 [cachefiles] [<ffffffff81032238>] ? finish_task_switch+0x0/0xb2 [<ffffffffa00e3429>] cachefiles_lookup_object+0xac/0x12a [cachefiles] [<ffffffffa00741e9>] fscache_lookup_object+0x1c7/0x214 [fscache] [<ffffffffa0074fc5>] fscache_object_state_machine+0xa5/0x52d [fscache] [<ffffffffa00754ac>] fscache_object_slow_work_execute+0x5f/0xa0 [fscache] [<ffffffff81082093>] slow_work_execute+0x18f/0x2d1 [<ffffffff8108239a>] slow_work_thread+0x1c5/0x308 [<ffffffff8104c0f1>] ? autoremove_wake_function+0x0/0x34 [<ffffffff810821d5>] ? slow_work_thread+0x0/0x308 [<ffffffff8104be91>] kthread+0x7a/0x82 [<ffffffff8100beda>] child_rip+0xa/0x20 [<ffffffff8100b87c>] ? restore_args+0x0/0x30 [<ffffffff8104be17>] ? kthread+0x0/0x82 [<ffffffff8100bed0>] ? child_rip+0x0/0x20 Signed-off-by: Daivd Howells <dhowells@redhat.com>
2009-11-19  CacheFiles: Handle truncate unlocking the page we're reading  (David Howells)
Handle truncate unlocking the page we're attempting to read from the backing device before the read has completed. This was causing reports like the following to occur: Pid: 4765, comm: kslowd Not tainted 2.6.30.1 #1 Call Trace: [<ffffffffa0331d7a>] ? cachefiles_read_waiter+0xd9/0x147 [cachefiles] [<ffffffff804b74bd>] ? __wait_on_bit+0x60/0x6f [<ffffffff8022bbbb>] ? __wake_up_common+0x3f/0x71 [<ffffffff8022cc32>] ? __wake_up+0x30/0x44 [<ffffffff8024a41f>] ? __wake_up_bit+0x28/0x2d [<ffffffffa003a793>] ? ext3_truncate+0x4d7/0x8ed [ext3] [<ffffffff80281f90>] ? pagevec_lookup+0x17/0x1f [<ffffffff8028c2ff>] ? unmap_mapping_range+0x59/0x1ff [<ffffffff8022cc32>] ? __wake_up+0x30/0x44 [<ffffffff8028e286>] ? vmtruncate+0xc2/0xe2 [<ffffffff802b82cf>] ? inode_setattr+0x22/0x10a [<ffffffffa003baa5>] ? ext3_setattr+0x17b/0x1e6 [ext3] [<ffffffff802b853d>] ? notify_change+0x186/0x2c9 [<ffffffffa032d9de>] ? cachefiles_attr_changed+0x133/0x1cd [cachefiles] [<ffffffffa032df7f>] ? cachefiles_lookup_object+0xcf/0x12a [cachefiles] [<ffffffffa0318165>] ? fscache_lookup_object+0x110/0x122 [fscache] [<ffffffffa03188c3>] ? fscache_object_slow_work_execute+0x590/0x6bc [fscache] [<ffffffff80278f82>] ? slow_work_thread+0x285/0x43a [<ffffffff8024a446>] ? autoremove_wake_function+0x0/0x2e [<ffffffff80278cfd>] ? slow_work_thread+0x0/0x43a [<ffffffff8024a317>] ? kthread+0x54/0x81 [<ffffffff8020c93a>] ? child_rip+0xa/0x20 [<ffffffff8024a2c3>] ? kthread+0x0/0x81 [<ffffffff8020c930>] ? child_rip+0x0/0x20 CacheFiles: I/O Error: Readpage failed on backing file 200000000000810 FS-Cache: Cache cachefiles stopped due to I/O error Reported-by: Christian Kujau <lists@nerdbynature.de> Reported-by: Takashi Iwai <tiwai@suse.de> Reported-by: Duc Le Minh <duclm.vn@gmail.com> Signed-off-by: David Howells <dhowells@redhat.com>
2009-11-19  CacheFiles: Don't write a full page if there's only a partial page to cache  (David Howells)
cachefiles_write_page() writes a full page to the backing file for the last page of the netfs file, even if the netfs file's last page is only a partial page. This causes the EOF on the backing file to be extended beyond the EOF of the netfs, and thus the backing file will be truncated by cachefiles_attr_changed() called from cachefiles_lookup_object(). So we need to limit the write we make to the backing file on that last page such that it doesn't push the EOF too far. Also, if a backing file that has a partial page at the end is expanded, we discard the partial page and refetch it on the basis that we then have a hole in the file with invalid data, and should the power go out... A better way to deal with this could be to record a note that the partial page contains invalid data until the correct data is written into it. This isn't a problem for netfs's that discard the whole backing file if the file size changes (such as NFS). Signed-off-by: David Howells <dhowells@redhat.com>
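The length limit on the last page is simple arithmetic: write only the part of the page that lies inside the netfs file. The following stand-alone C sketch shows that calculation; PAGE_SIZE and the function name are illustrative assumptions, not the CacheFiles code.

    #include <stdio.h>

    #define PAGE_SIZE 4096UL

    /* Return how many bytes of the page at 'index' fall inside a file of
     * 'eof' bytes; only that much should be written to the backing file. */
    static unsigned long bytes_to_write(unsigned long long eof, unsigned long index)
    {
        unsigned long long pos = (unsigned long long)index * PAGE_SIZE;

        if (pos >= eof)
            return 0;                           /* page lies wholly beyond EOF */
        if (eof - pos < PAGE_SIZE)
            return (unsigned long)(eof - pos);  /* partial last page */
        return PAGE_SIZE;                       /* full interior page */
    }

    int main(void)
    {
        /* a 10000-byte netfs file: pages 0 and 1 are full, page 2 holds
         * only 10000 - 8192 = 1808 bytes */
        printf("%lu %lu %lu\n",
               bytes_to_write(10000, 0),
               bytes_to_write(10000, 1),
               bytes_to_write(10000, 2));
        return 0;
    }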
2009-11-19  FS-Cache: Actually requeue an object when requested  (David Howells)
FS-Cache objects have an FSCACHE_OBJECT_EV_REQUEUE event that can theoretically be raised to ask the state machine to requeue the object for further processing before the work function returns to the slow-work facility. However, fscache_object_work_execute() was clearing that bit before checking the event mask to see whether the object has any pending events that require it to be requeued immediately. Instead, the bit should be cleared after the check and enqueue. Signed-off-by: David Howells <dhowells@redhat.com>
2009-11-19  FS-Cache: Start processing an object's operations on that object's death  (David Howells)
Start processing an object's operations when that object moves into the DYING state as the object cannot be destroyed until all its outstanding operations have completed. Furthermore, make sure that read and allocation operations handle being woken up on a dead object. Such events are recorded in the Allocs.abt and Retrvls.abt statistics as viewable through /proc/fs/fscache/stats. The code for waiting for object activation for the read and allocation operations is also extracted into its own function as it is much the same in all cases, differing only in the stats incremented. Signed-off-by: David Howells <dhowells@redhat.com>
2009-11-19  FS-Cache: Make sure FSCACHE_COOKIE_LOOKING_UP cleared on lookup failure  (David Howells)
We must make sure that FSCACHE_COOKIE_LOOKING_UP is cleared on lookup failure (if an object reaches the LC_DYING state), and we should clear it before clearing FSCACHE_COOKIE_CREATING. If this doesn't happen then fscache_wait_for_deferred_lookup() may hold allocation and retrieval operations indefinitely until they're interrupted by signals - which in turn pins the dying object until they go away. Signed-off-by: David Howells <dhowells@redhat.com>
2009-11-19  FS-Cache: Add a retirement stat counter  (David Howells)
Add a stat counter to count retirement events rather than ordinary release events (the retire argument to fscache_relinquish_cookie()). Signed-off-by: David Howells <dhowells@redhat.com>
2009-11-19  FS-Cache: Handle pages pending storage that get evicted under OOM conditions  (David Howells)
Handle netfs pages that the vmscan algorithm wants to evict from the pagecache under OOM conditions, but that are waiting for write to the cache. Under these conditions, vmscan calls the releasepage() function of the netfs, asking if a page can be discarded. The problem is typified by the following trace of a stuck process: kslowd005 D 0000000000000000 0 4253 2 0x00000080 ffff88001b14f370 0000000000000046 ffff880020d0d000 0000000000000007 0000000000000006 0000000000000001 ffff88001b14ffd8 ffff880020d0d2a8 000000000000ddf0 00000000000118c0 00000000000118c0 ffff880020d0d2a8 Call Trace: [<ffffffffa00782d8>] __fscache_wait_on_page_write+0x8b/0xa7 [fscache] [<ffffffff8104c0f1>] ? autoremove_wake_function+0x0/0x34 [<ffffffffa0078240>] ? __fscache_check_page_write+0x63/0x70 [fscache] [<ffffffffa00b671d>] nfs_fscache_release_page+0x4e/0xc4 [nfs] [<ffffffffa00927f0>] nfs_release_page+0x3c/0x41 [nfs] [<ffffffff810885d3>] try_to_release_page+0x32/0x3b [<ffffffff81093203>] shrink_page_list+0x316/0x4ac [<ffffffff8109372b>] shrink_inactive_list+0x392/0x67c [<ffffffff813532fa>] ? __mutex_unlock_slowpath+0x100/0x10b [<ffffffff81058df0>] ? trace_hardirqs_on_caller+0x10c/0x130 [<ffffffff8135330e>] ? mutex_unlock+0x9/0xb [<ffffffff81093aa2>] shrink_list+0x8d/0x8f [<ffffffff81093d1c>] shrink_zone+0x278/0x33c [<ffffffff81052d6c>] ? ktime_get_ts+0xad/0xba [<ffffffff81094b13>] try_to_free_pages+0x22e/0x392 [<ffffffff81091e24>] ? isolate_pages_global+0x0/0x212 [<ffffffff8108e743>] __alloc_pages_nodemask+0x3dc/0x5cf [<ffffffff81089529>] grab_cache_page_write_begin+0x65/0xaa [<ffffffff8110f8c0>] ext3_write_begin+0x78/0x1eb [<ffffffff81089ec5>] generic_file_buffered_write+0x109/0x28c [<ffffffff8103cb69>] ? current_fs_time+0x22/0x29 [<ffffffff8108a509>] __generic_file_aio_write+0x350/0x385 [<ffffffff8108a588>] ? generic_file_aio_write+0x4a/0xae [<ffffffff8108a59e>] generic_file_aio_write+0x60/0xae [<ffffffff810b2e82>] do_sync_write+0xe3/0x120 [<ffffffff8104c0f1>] ? autoremove_wake_function+0x0/0x34 [<ffffffff810b18e1>] ? __dentry_open+0x1a5/0x2b8 [<ffffffff810b1a76>] ? dentry_open+0x82/0x89 [<ffffffffa00e693c>] cachefiles_write_page+0x298/0x335 [cachefiles] [<ffffffffa0077147>] fscache_write_op+0x178/0x2c2 [fscache] [<ffffffffa0075656>] fscache_op_execute+0x7a/0xd1 [fscache] [<ffffffff81082093>] slow_work_execute+0x18f/0x2d1 [<ffffffff8108239a>] slow_work_thread+0x1c5/0x308 [<ffffffff8104c0f1>] ? autoremove_wake_function+0x0/0x34 [<ffffffff810821d5>] ? slow_work_thread+0x0/0x308 [<ffffffff8104be91>] kthread+0x7a/0x82 [<ffffffff8100beda>] child_rip+0xa/0x20 [<ffffffff8100b87c>] ? restore_args+0x0/0x30 [<ffffffff8102ef83>] ? tg_shares_up+0x171/0x227 [<ffffffff8104be17>] ? kthread+0x0/0x82 [<ffffffff8100bed0>] ? child_rip+0x0/0x20 In the above backtrace, the following is happening: (1) A page storage operation is being executed by a slow-work thread (fscache_write_op()). (2) FS-Cache farms the operation out to the cache to perform (cachefiles_write_page()). (3) CacheFiles is then calling Ext3 to perform the actual write, using Ext3's standard write (do_sync_write()) under KERNEL_DS directly from the netfs page. (4) However, for Ext3 to perform the write, it must allocate some memory, in particular, it must allocate at least one page cache page into which it can copy the data from the netfs page. (5) Under OOM conditions, the memory allocator can't immediately come up with a page, so it uses vmscan to find something to discard (try_to_free_pages()). 
(6) vmscan finds a clean netfs page it might be able to discard (possibly the one it's trying to write out). (7) The netfs is called to throw the page away (nfs_release_page()) - but it's called with __GFP_WAIT, so the netfs decides to wait for the store to complete (__fscache_wait_on_page_write()). (8) This blocks a slow-work processing thread - possibly against itself. The system ends up stuck because it can't write out any netfs pages to the cache without allocating more memory. To avoid this, we make FS-Cache cancel some writes that aren't in the middle of actually being performed. This means that some data won't make it into the cache this time. To support this, a new FS-Cache function is added fscache_maybe_release_page() that replaces what the netfs releasepage() functions used to do with respect to the cache. The decisions fscache_maybe_release_page() makes are counted and displayed through /proc/fs/fscache/stats on a line labelled "VmScan". There are four counters provided: "nos=N" - pages that weren't pending storage; "gon=N" - pages that were pending storage when we first looked, but weren't by the time we got the object lock; "bsy=N" - pages that we ignored as they were actively being written when we looked; and "can=N" - pages that we cancelled the storage of. What I'd really like to do is alter the behaviour of the cancellation heuristics, depending on how necessary it is to expel pages. If there are plenty of other pages that aren't waiting to be written to the cache that could be ejected first, then it would be nice to hold up on immediate cancellation of cache writes - but I don't see a way of doing that. Signed-off-by: David Howells <dhowells@redhat.com>
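The decision made for each page maps onto the four "VmScan" counters listed above. A stand-alone C sketch of that decision, with illustrative names and without any of the FS-Cache locking, might look like this:

    #include <stdbool.h>
    #include <stdio.h>

    enum release_result { REL_NOS, REL_GON, REL_BSY, REL_CAN };

    struct page_model {
        bool pending_store;    /* queued to be written to the cache */
        bool store_in_flight;  /* the write is actively being performed */
    };

    /* Decide what to do with a page vmscan wants to evict; the page may be
     * released unless the result is REL_BSY.  'pending_on_recheck' models
     * looking at the page again once the object lock is held. */
    static enum release_result maybe_release(struct page_model *p, bool pending_on_recheck)
    {
        if (!p->pending_store)
            return REL_NOS;              /* was never waiting for storage */
        if (!pending_on_recheck)
            return REL_GON;              /* store finished while we looked */
        if (p->store_in_flight)
            return REL_BSY;              /* mid-write: leave the page alone */
        p->pending_store = false;        /* cancel the queued store */
        return REL_CAN;
    }

    int main(void)
    {
        struct page_model clean  = { false, false };
        struct page_model queued = { true,  false };
        struct page_model busy   = { true,  true  };

        printf("nos=%d\n", maybe_release(&clean,  true));   /* 0 */
        printf("gon=%d\n", maybe_release(&queued, false));  /* 1 */
        printf("bsy=%d\n", maybe_release(&busy,   true));   /* 2 */
        printf("can=%d\n", maybe_release(&queued, true));   /* 3 */
        return 0;
    }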