path: root/fs
Age         Commit message                                             Author
2015-06-02  writeback: move bandwidth related fields from backing_dev_info into bdi_writeback  (Tejun Heo)
Currently, a bdi (backing_dev_info) embeds single wb (bdi_writeback) and the role of the separation is unclear. For cgroup support for writeback IOs, a bdi will be updated to host multiple wb's where each wb serves writeback IOs of a different cgroup on the bdi. To achieve that, a wb should carry all states necessary for servicing writeback IOs for a cgroup independently.

This patch moves bandwidth related fields from backing_dev_info into bdi_writeback.

* The moved fields are: bw_time_stamp, dirtied_stamp, written_stamp, write_bandwidth, avg_write_bandwidth, dirty_ratelimit, balanced_dirty_ratelimit, completions and dirty_exceeded.

* writeback_chunk_size() and over_bground_thresh() now take @wb instead of @bdi.

* bdi_writeout_fraction(bdi, ...)      -> wb_writeout_fraction(wb, ...)
  bdi_dirty_limit(bdi, ...)            -> wb_dirty_limit(wb, ...)
  bdi_position_ratio(bdi, ...)         -> wb_position_ratio(wb, ...)
  bdi_update_write_bandwidth(bdi, ...) -> wb_update_write_bandwidth(wb, ...)
  [__]bdi_update_bandwidth(bdi, ...)   -> [__]wb_update_bandwidth(wb, ...)
  bdi_{max|min}_pause(bdi, ...)        -> wb_{max|min}_pause(wb, ...)
  bdi_dirty_limits(bdi, ...)           -> wb_dirty_limits(wb, ...)

* Init/exits of the relocated fields are moved to bdi_wb_init/exit() respectively. Note that explicit zeroing is dropped in the process as wb's are cleared in entirety anyway.

* As there's still only one bdi_writeback per backing_dev_info, all uses of the relocated fields are mechanically replaced with their bdi->wb equivalents, introducing no behavior changes.

v2: Typo in description fixed as suggested by Jan.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
2015-06-02  writeback: move backing_dev_info->bdi_stat[] into bdi_writeback  (Tejun Heo)
Currently, a bdi (backing_dev_info) embeds single wb (bdi_writeback) and the role of the separation is unclear. For cgroup support for writeback IOs, a bdi will be updated to host multiple wb's where each wb serves writeback IOs of a different cgroup on the bdi. To achieve that, a wb should carry all states necessary for servicing writeback IOs for a cgroup independently.

This patch moves bdi->bdi_stat[] into wb.

* enum bdi_stat_item is renamed to wb_stat_item and the prefix of all enums is changed from BDI_ to WB_.

* BDI_STAT_BATCH() -> WB_STAT_BATCH()

* [__]{add|inc|dec|sum}_bdi_stat(bdi, ...) -> [__]{add|inc|dec|sum}_wb_stat(wb, ...)

* bdi_stat[_error]() -> wb_stat[_error]()

* bdi_writeout_inc() -> wb_writeout_inc()

* stat init is moved to bdi_wb_init() and bdi_wb_exit() is added and frees stat.

* As there's still only one bdi_writeback per backing_dev_info, all uses of bdi->stat[] are mechanically replaced with bdi->wb.stat[] introducing no behavior changes.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Cc: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
2015-06-02  writeback: move backing_dev_info->state into bdi_writeback  (Tejun Heo)
Currently, a bdi (backing_dev_info) embeds single wb (bdi_writeback) and the role of the separation is unclear. For cgroup support for writeback IOs, a bdi will be updated to host multiple wb's where each wb serves writeback IOs of a different cgroup on the bdi. To achieve that, a wb should carry all states necessary for servicing writeback IOs for a cgroup independently.

This patch moves bdi->state into wb.

* enum bdi_state is renamed to wb_state and the prefix of all enums is changed from BDI_ to WB_.

* Explicit zeroing of bdi->state is removed without adding zeroing of wb->state as the whole data structure is zeroed on init anyway.

* As there's still only one bdi_writeback per backing_dev_info, all uses of bdi->state are mechanically replaced with bdi->wb.state introducing no behavior changes.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: drbd-dev@lists.linbit.com
Cc: Neil Brown <neilb@suse.de>
Cc: Alasdair Kergon <agk@redhat.com>
Cc: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
2015-06-02  memcg: add per cgroup dirty page accounting  (Greg Thelen)
When modifying PG_Dirty on cached file pages, update the new MEM_CGROUP_STAT_DIRTY counter. This is done in the same places where global NR_FILE_DIRTY is managed. The new memcg stat is visible in the per memcg memory.stat cgroupfs file. The most recent past attempt at this was http://thread.gmane.org/gmane.linux.kernel.cgroups/8632

The new accounting supports future efforts to add per cgroup dirty page throttling and writeback. It also helps an administrator break down a container's memory usage and provides evidence to understand memcg oom kills (the new dirty count is included in memcg oom kill messages).

The ability to move page accounting between memcg (memory.move_charge_at_immigrate) makes this accounting more complicated than the global counter. The existing mem_cgroup_{begin,end}_page_stat() lock is used to serialize move accounting with stat updates. Typical update operation:

    memcg = mem_cgroup_begin_page_stat(page)
    if (TestSetPageDirty()) {
        [...]
        mem_cgroup_update_page_stat(memcg)
    }
    mem_cgroup_end_page_stat(memcg)

Summary of mem_cgroup_end_page_stat() overhead:
- Without CONFIG_MEMCG it's a no-op
- With CONFIG_MEMCG and no inter memcg task movement, it's just rcu_read_lock()
- With CONFIG_MEMCG and inter memcg task movement, it's rcu_read_lock() + spin_lock_irqsave()

A memcg parameter is added to several routines because their callers now grab mem_cgroup_begin_page_stat(), which returns the memcg later needed by mem_cgroup_update_page_stat().

Because mem_cgroup_begin_page_stat() may disable interrupts, some adjustments are needed:
- move __mark_inode_dirty() from __set_page_dirty() to its caller. __mark_inode_dirty() locking does not want interrupts disabled.
- use spin_lock_irqsave(tree_lock) rather than spin_lock_irq() in __delete_from_page_cache(), replace_page_cache_page(), invalidate_complete_page2(), and __remove_mapping().

       text    data     bss      dec    hex filename
    8925147 1774832 1785856 12485835 be84cb vmlinux-!CONFIG_MEMCG-before
    8925339 1774832 1785856 12486027 be858b vmlinux-!CONFIG_MEMCG-after
                                            +192 text bytes
    8965977 1784992 1785856 12536825 bf4bf9 vmlinux-CONFIG_MEMCG-before
    8966750 1784992 1785856 12537598 bf4efe vmlinux-CONFIG_MEMCG-after
                                            +773 text bytes

Performance tests run on v4.0-rc1-36-g4f671fe2f952. Lower is better for all metrics; they're all wall clock or cycle counts. The read and write fault benchmarks just measure fault time; they do not include I/O time.

* CONFIG_MEMCG not set:
                        baseline                           patched
    kbuild              1m25.030000 (+-0.088% 3 samples)   1m25.426667 (+-0.120% 3 samples)
    dd write 100 MiB    0.859211561 +-15.10%               0.874162885 +-15.03%
    dd write 200 MiB    1.670653105 +-17.87%               1.669384764 +-11.99%
    dd write 1000 MiB   8.434691190 +-14.15%               8.474733215 +-14.77%
    read fault cycles   254.0 (+-0.000% 10 samples)        253.0 (+-0.000% 10 samples)
    write fault cycles  2021.2 (+-3.070% 10 samples)       1984.5 (+-1.036% 10 samples)

* CONFIG_MEMCG=y root_memcg:
                        baseline                           patched
    kbuild              1m25.716667 (+-0.105% 3 samples)   1m25.686667 (+-0.153% 3 samples)
    dd write 100 MiB    0.855650830 +-14.90%               0.887557919 +-14.90%
    dd write 200 MiB    1.688322953 +-12.72%               1.667682724 +-13.33%
    dd write 1000 MiB   8.418601605 +-14.30%               8.673532299 +-15.00%
    read fault cycles   266.0 (+-0.000% 10 samples)        266.0 (+-0.000% 10 samples)
    write fault cycles  2051.7 (+-1.349% 10 samples)       2049.6 (+-1.686% 10 samples)

* CONFIG_MEMCG=y non-root_memcg:
                        baseline                           patched
    kbuild              1m26.120000 (+-0.273% 3 samples)   1m25.763333 (+-0.127% 3 samples)
    dd write 100 MiB    0.861723964 +-15.25%               0.818129350 +-14.82%
    dd write 200 MiB    1.669887569 +-13.30%               1.698645885 +-13.27%
    dd write 1000 MiB   8.383191730 +-14.65%               8.351742280 +-14.52%
    read fault cycles   265.7 (+-0.172% 10 samples)        267.0 (+-0.000% 10 samples)
    write fault cycles  2070.6 (+-1.512% 10 samples)       2084.4 (+-2.148% 10 samples)

As expected, anon page faults are not affected by this patch.

tj: Updated to apply on top of the recent cancel_dirty_page() changes.

Signed-off-by: Sha Zhengju <handai.szj@gmail.com>
Signed-off-by: Greg Thelen <gthelen@google.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
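To make the serialization above concrete, here is a minimal sketch of the update pattern the commit message describes. The helper names (mem_cgroup_begin_page_stat(), mem_cgroup_update_page_stat(), MEM_CGROUP_STAT_DIRTY) follow that description; the wrapper function and exact arguments are illustrative assumptions, not the verbatim kernel code:

    /* Sketch only: dirty a page cache page while keeping the memcg dirty
     * counter consistent with inter-memcg page moves.  Accounting happens
     * only on the clean -> dirty transition. */
    static void example_set_page_dirty(struct page *page)
    {
        struct mem_cgroup *memcg;

        /* Serializes against memcg move accounting; may disable interrupts. */
        memcg = mem_cgroup_begin_page_stat(page);

        if (!TestSetPageDirty(page)) {
            /* ... global NR_FILE_DIRTY / bdi accounting would go here ... */
            mem_cgroup_update_page_stat(memcg, MEM_CGROUP_STAT_DIRTY, 1);
        }

        mem_cgroup_end_page_stat(memcg);
    }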
2015-06-02  page_writeback: revive cancel_dirty_page() in a restricted form  (Tejun Heo)
cancel_dirty_page() had some issues and b9ea25152e56 ("page_writeback: clean up mess around cancel_dirty_page()") replaced it with account_page_cleaned() which makes the caller responsible for clearing the dirty bit; unfortunately, the planned changes for cgroup writeback support require synchronization between dirty bit manipulation and stat updates. While we can open-code such synchronization in each account_page_cleaned() callsite, that's gonna be unnecessarily awkward and verbose.

This patch revives cancel_dirty_page() but in a more restricted form. All it does is TestClearPageDirty() followed by an account_page_cleaned() invocation if the page was dirty. This helper covers all account_page_cleaned() usages except for __delete_from_page_cache(), which is a special case anyway and left alone. As this leaves no module user for account_page_cleaned(), EXPORT_SYMBOL() is dropped from it.

This patch just revives cancel_dirty_page() as a trivial wrapper to replace equivalent usages and doesn't introduce any functional changes.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: Jens Axboe <axboe@fb.com>
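A sketch of the revived helper as it is described above; the account_page_cleaned() signature (page plus mapping) is assumed from that era's API:

    /* Sketch: clear the dirty bit and undo the dirty accounting only if the
     * page actually was dirty. */
    void cancel_dirty_page(struct page *page)
    {
        if (TestClearPageDirty(page))
            account_page_cleaned(page, page_mapping(page));
    }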
2015-06-02  NFS: drop unneeded goto  (Julia Lawall)
Delete jump to a label on the next line, when that label is not used elsewhere.

A simplified version of the semantic patch that makes this change is as follows: (http://coccinelle.lip6.fr/)

// <smpl>
@r@
identifier l;
@@

-if (...) goto l;
-l:
// </smpl>

Also drop the unnecessary ret variable.

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2015-06-02  NFS: Fix size of NFSACL SETACL operations  (Chuck Lever)
When encoding the NFSACL SETACL operation, reserve just the estimated size of the ACL rather than a fixed maximum. This eliminates needless zero padding on the wire that the server ignores. Fixes: ee5dc7732bd5 ('NFS: Fix "kernel BUG at fs/nfs/nfs3xdr.c:1338!"') Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2015-06-02  NFS: report more appropriate block size for directories.  (NeilBrown)
In glibc 2.21 (and several previous), a call to opendir() will result in a 32K (BUFSIZ*4) buffer being allocated and passed to getdents. However a call to fdopendir() results in an 'fstat' request to determine block size and a matching buffer allocated for subsequent use with getdents. This will typically be 1M.

The first getdents call on an NFS directory will always use READDIR_PLUS (or NFSv4 equivalent) if available. Subsequent getdents calls only use this more expensive version if some 'stat' requests are made between the getdents calls. For this reason it is good to keep at least that first getdents call relatively short. When fdopendir() and readdir() are used on a large directory, it takes approximately 32 times as long to complete as using "opendir". Current versions of 'find' use fdopendir() and demonstrate this slowness.

'stat' on a directory currently returns the 'wsize'. This number has no meaning on directories. Actual READDIR requests are limited to ->dtsize, which itself is capped at 4 pages, coincidentally the same as BUFSIZ*4. So this is a meaningful number to use as the blocksize on directories, and it has the effect of making 'find' on large directories go a lot faster.

Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
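A sketch of the behaviour change described above, assuming it lives in the NFS getattr path; the placement, the helper name, and the else branch are illustrative assumptions:

    /* Sketch: report the readdir transfer size (dtsize), not wsize, as the
     * preferred I/O size for directories so fdopendir()/readdir() picks a
     * reasonably small getdents buffer. */
    static void example_fill_blksize(struct inode *inode, struct kstat *stat)
    {
        if (S_ISDIR(inode->i_mode))
            stat->blksize = NFS_SERVER(inode)->dtsize;
        else
            stat->blksize = NFS_SERVER(inode)->wsize;
    }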
2015-06-02  NFSv4: Always drain the slot table before re-establishing the lease  (Trond Myklebust)
While the NFSv4.1 code has always drained the slot tables in order to stop non-recovery related RPC calls when doing lease recovery, the NFSv4 code did not. The reason for the difference in behaviour is that NFSv4 does not have session state, and so RPC calls can in theory proceed while recovery is happening. In practice, however, anything I/O or state related needs to wait until recovery is over. This patch changes the behaviour of NFSv4 to match that of NFSv4.1 so that we can simplify the state recovery code by assuming that we do not have to deal with races between recovery and ordinary I/O. Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2015-06-02  ubifs: fix to check error code of register_shrinker  (Chao Yu)
register_shrinker() in ubifs_init() can fail because of a failed internal kzalloc(). This patch adds a check of the return value of register_shrinker(); otherwise ubifs could finish initializing successfully while its shrinker was never registered.

Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
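A minimal sketch of the check being added; register_shrinker() returned an int in kernels of this era, and the unwind label here is hypothetical:

    /* Sketch: in ubifs_init(), propagate a register_shrinker() failure
     * instead of ignoring it. */
    err = register_shrinker(&ubifs_shrinker_info);
    if (err)
        goto out_unwind;    /* hypothetical label: undo earlier init, return err */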
2015-06-01  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller)
Conflicts:
    drivers/net/phy/amd-xgbe-phy.c
    drivers/net/wireless/iwlwifi/Kconfig
    include/net/mac80211.h

iwlwifi/Kconfig and mac80211.h were both trivial overlapping changes.

The drivers/net/phy/amd-xgbe-phy.c file got removed in 'net-next' and the bug fix that happened on the 'net' side is already integrated into the rest of the amd-xgbe driver.

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-06-01  f2fs: disable the discard option when device doesn't support  (Chenxi Mao)
Currently f2fs checks whether the block device supports discard. However, the code prevents the discard option from ever being enabled, because clear_opt(sbi, DISCARD) is always invoked. This patch fixes the issue.

Jaegeuk Kim: The original patch was intended to disable the discard option when the device does not support the trim command. Rather than keeping the buggy patch, let's replace it with this patch as an integrated one.

Signed-off-by: Chenxi Mao <chenxi.mao2013@gmail.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs crypto: remove alloc_page for bounce_page  (Jaegeuk Kim)
We don't need to call alloc_page() prior to mempool_alloc(), since mempool_alloc() calls alloc_page() internally. And, if __GFP_WAIT is set, it never fails on page allocation, so let's pass GFP_NOWAIT and handle ENOMEM in writepage().

Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
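A sketch of the allocation pattern being described; the pool and variable names are illustrative, not the exact f2fs symbols:

    /* Sketch: let the mempool allocate the page directly.  With GFP_NOWAIT
     * the call may return NULL under memory pressure; the caller returns
     * -ENOMEM so ->writepage() can retry the page later. */
    bounce_page = mempool_alloc(bounce_page_pool, GFP_NOWAIT);
    if (!bounce_page)
        return ERR_PTR(-ENOMEM);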
2015-06-01  f2fs: fix a deadlock for summary page lock vs. sentry_lock  (Jaegeuk Kim)
The two paths below take the summary page lock and sentry_lock in opposite order and can deadlock on each other:

In f2fs_gc:
 - lock_page(sum_page)
 - check_valid_map()
   - mutex_lock(sentry_lock)

In f2fs_replace_block:
 - mutex_lock(sentry_lock)
 - change_curseg()
   - lock_page(sum_page)

This patch fixes the deadlock condition.

Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs crypto: clean up error handling in f2fs_fname_setup_filename  (Jaegeuk Kim)
Sync with: ext4 crypto: clean up error handling in ext4_fname_setup_filename

Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs crypto: avoid f2fs_inherit_context for symlink  (Jaegeuk Kim)
This patch fixes f2fs_inherit_context being called twice for a newly created symlink. The first call is made by f2fs_add_link(), which invokes f2fs_setxattr. If it is called a second time, f2fs_setxattr is triggered again with the same encryption index.

Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs crypto: do not set encryption policy for non-directory by ioctl  (Chao Yu)
An encryption policy should only be set on an empty directory through the ioctl. This patch adds a check of the target inode's type to avoid incorrectly configuring a non-directory. Additionally, remove the unneeded inline data conversion, since regular or symlink files should not be processed here.

Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs crypto: allow setting encryption policy once  (Chao Yu)
This patch adds the XATTR_CREATE flag in setxattr when setting the encryption context for an inode. Without this flag the context could be set more than once, which should never happen. So, fix it.

Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
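A sketch of the idea; the xattr index and name constants mirror what f2fs's encryption code of this era used, but the exact call is an illustrative assumption rather than the literal patch:

    /* Sketch: XATTR_CREATE makes a second attempt to set the encryption
     * context fail with -EEXIST instead of silently replacing the policy. */
    err = f2fs_setxattr(inode, F2FS_XATTR_INDEX_ENCRYPTION,
                        F2FS_XATTR_NAME_ENCRYPTION_CONTEXT,
                        ctx, ctx_len, NULL, XATTR_CREATE);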
2015-06-01  f2fs crypto: check context consistent for rename2  (Chao Yu)
For an exchange rename, we should check that the encryption contexts are consistent between new_dir and old_inode, and between old_dir and new_inode. Otherwise the inheritance of the parent's encryption context will be broken.

Signed-off-by: Chao Yu <chao2.yu@samsung.com>
[Jaegeuk Kim: sync with ext4 approach]
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs: avoid duplicated code by reusing f2fs_read_end_io  (Chao Yu)
This patch cleans up code: parts of f2fs_read_end_io and mpage_end_io are the same, so it's better to merge and reuse them.

Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs crypto: use per-inode tfm structure  (Jaegeuk Kim)
This patch applies the following ext4 patch:

  ext4 crypto: use per-inode tfm structure

  As suggested by Herbert Xu, we shouldn't allocate a new tfm each time we read or write a page. Instead we can use a single tfm hanging off the inode's crypt_info structure for all of our encryption needs for that inode, since the tfm can be used by multiple crypto requests in parallel.

  Also use cmpxchg() to avoid races that could result in crypt_info structure getting doubly allocated or doubly freed.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs: recovering broken superblock during mount  (hujianyang)
This patch recovers a broken superblock with the other valid one. Signed-off-by: hujianyang <hujianyang@huawei.com> [Jaegeuk Kim: reinitialize local variables in f2fs_fill_super for retrial] Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs crypto: check encryption for tmpfile  (Jaegeuk Kim)
This patch adds a check for encryption of tmpfile at an early stage.

Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs: support RENAME_WHITEOUT  (Chao Yu)
As described in the rename manual page, RENAME_WHITEOUT is a special operation that only makes sense for overlay/union type filesystems. When performing a rename with RENAME_WHITEOUT, dst will be replaced with src, and meanwhile a 'whiteout' will be created with the name of src.

A "whiteout" is designed to be a char device with a 0,0 device number; it has a special meaning for stackable filesystems. In these filesystems multiple layers exist, and only the top one can be modified. So a whiteout in the top layer is used to hide a corresponding file in a lower layer, and removal of the whiteout makes the file appear again.

Now in overlayfs, when we rename a file which exists in a lower layer, it will be copied up to the upper layer if it is not there yet, and then renamed on the upper layer; the source file is whiteouted at the same time to hide the corresponding file in the lower layer. So in the upper layer filesystem, the implementation of RENAME_WHITEOUT provides an atomic operation for stackable filesystems to support the rename operation.

There are multiple ways to implement RENAME_WHITEOUT, discussed in the log of commit 7dcf5c3e4527 ("xfs: add RENAME_WHITEOUT support") and pointed out by Dave Chinner. For now, we just try to follow the way that xfs/ext4 use.

Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs: introduce update_meta_page  (Chao Yu)
Add a helper function update_meta_page() to update a meta page with a specified buffer.

Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs crypto: zero next free dnode block  (Chao Yu)
Now that the page cache of the meta inode is used by garbage collection for encrypted pages, it may contain random data, so we should zero it before issuing discard.

Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs crypto: split f2fs_crypto_init/exit with two parts  (Jaegeuk Kim)
This patch splits f2fs_crypto_init/exit into two parts: base initialization and memory allocation.

Firstly, the f2fs module declares the base encryption memory pointers. Then, allocating the internal memory is done at the first encrypted inode access.

Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs crypto: fix incorrect release for crypto ctx  (Chao Yu)
When the encryption feature is enabled, if we rmmod the f2fs module, we encounter a stack backtrace reported in syslog:

"BUG: Bad page state in process rmmod  pfn:aaf8a
page:f0f4f148 count:0 mapcount:129 mapping:ee2f4104 index:0x80
flags: 0xee2830a4(referenced|lru|slab|private_2|writeback|swapbacked|mlocked)
page dumped because: PAGE_FLAGS_CHECK_AT_FREE flag(s) set
bad because of flags:
flags: 0x2030a0(lru|slab|private_2|writeback|mlocked)
Modules linked in: f2fs(O-) fuse bnep rfcomm bluetooth dm_crypt binfmt_misc snd_intel8x0 snd_ac97_codec ac97_bus snd_pcm snd_seq_midi snd_rawmidi snd_seq_midi_event snd_seq snd_timer snd_seq_device joydev ppdev mac_hid lp hid_generic i2c_piix4 parport_pc psmouse snd serio_raw parport soundcore ext4 jbd2 mbcache usbhid hid e1000 [last unloaded: f2fs]
CPU: 1 PID: 3049 Comm: rmmod Tainted: G B O 4.1.0-rc3+ #10
Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
 00000000 00000000 c0021eb4 c15b7518 f0f4f148 c0021ed8 c112e0b7 c1779174
 c9b75674 000aaf8a 01b13ce1 c17791a4 f0f4f148 ee2830a4 c0021ef8 c112e3c3
 00000000 f0f4f148 c0021f34 f0f4f148 ee2830a4 ef9f0000 c0021f20 c112fdf8
Call Trace:
 [<c15b7518>] dump_stack+0x41/0x52
 [<c112e0b7>] bad_page.part.72+0xa7/0x100
 [<c112e3c3>] free_pages_prepare+0x213/0x220
 [<c112fdf8>] free_hot_cold_page+0x28/0x120
 [<c1073380>] ? try_to_wake_up+0x2b0/0x2b0
 [<c112ff15>] __free_pages+0x25/0x30
 [<c112c4fd>] mempool_free_pages+0xd/0x10
 [<c112c5f1>] mempool_free+0x31/0x90
 [<f0f441cf>] f2fs_exit_crypto+0x6f/0xf0 [f2fs]
 [<f0f456c4>] exit_f2fs_fs+0x23/0x95f [f2fs]
 [<c10c30e0>] SyS_delete_module+0x130/0x180
 [<c11556d6>] ? vm_munmap+0x46/0x60
 [<c15bd888>] sysenter_do_call+0x12/0x12"

The reason is that since commit 0827e645fd35 ("f2fs crypto: shrink size of the f2fs_crypto_ctx structure") was merged, some fields in the f2fs_crypto_ctx structure were merged into a union, as they will never be used simultaneously in the write path, the read path, or on the free list.

In f2fs_exit_crypto, we traverse each crypto ctx from the free list; at that moment the free_list field of the union is valid, but we still try to release memory pointed to by another, invalid field of the union for each ctx. That is where the error occurs; let's fix it with this patch.

Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs crypto: fix to release buffer for fname crypto  (Chao Yu)
This patch fixes a memory leak in the error path of f2fs_fname_setup_filename().

Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs crypto: shrink size of the f2fs_crypto_ctx structure  (Jaegeuk Kim)
This patch integrates the below patch into f2fs.

  "ext4 crypto: shrink size of the ext4_crypto_ctx structure

  Some fields are only used when the crypto_ctx is being used on the read path, some are only used on the write path, and some are only used when the structure is on the free list. Optimize memory use by using a union."

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs crypto: get rid of ci_mode from struct f2fs_crypt_info  (Jaegeuk Kim)
This patch integrates the below patch into f2fs.

  "ext4 crypto: get rid of ci_mode from struct ext4_crypt_info

  The ci_mode field was superfluous, and getting rid of it gets rid of an unused hole in the structure."

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs crypto: use slab caches  (Jaegeuk Kim)
This patch integrates the below patch into f2fs.

  "ext4 crypto: use slab caches

  Use slab caches for the ext4_crypto_ctx and ext4_crypt_info structures for slightly better memory efficiency and debuggability."

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs: truncate data blocks for orphan inode  (Jaegeuk Kim)
As Hu reported, F2FS has a space leak problem when conducting:

1) format a 4GB f2fs partition
2) dd a 3G file
3) unlink it

So, when doing f2fs_drop_inode(), we need to truncate data blocks before skipping it. We can also drop unused caches assigned to each inode.

Reported-by: hujianyang <hujianyang@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs: cleanup a confusing indent  (Dan Carpenter)
The return was not indented far enough so it looked like it was supposed to go with the other if statement. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs: fix building on 32-bit architectures  (Arnd Bergmann)
A bug fix to the debug output extended the type of some local variables to 64-bit, which now causes the kernel build to fail because of missing 64-bit division functions:

  ERROR: "__aeabi_uldivmod" [fs/f2fs/f2fs.ko] undefined!

In the kernel, we have to use div_u64 or do_div to do this, in order to annotate that this is an expensive operation. As the function is only called for debug output, we know this is not performance critical, so it is safe to use div_u64.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: d1f85bd38db19 ("f2fs: avoid value overflow in showing current status")
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
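For reference, a minimal sketch of the substitution the fix describes; the function and variable names are illustrative only:

    #include <linux/math64.h>

    /* Sketch: a plain u64 division is lowered to __aeabi_uldivmod on 32-bit
     * ARM and fails to link inside the kernel; div_u64() makes the 64-by-32
     * division explicit and portable. */
    static u64 example_average(u64 total_bytes, u32 nr_samples)
    {
        return nr_samples ? div_u64(total_bytes, nr_samples) : 0;
    }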
2015-06-01  f2fs: avoid buggy functions  (Jaegeuk Kim)
This patch avoids using a buggy function for now. It needs to be fixed later.

Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs: add compat_ioctl to provide backward compatability  (hujianyang)
compat_ioctl was introduced for regular files, but the same functionality was not added to f2fs_dir_operations. While running a 32-bit busybox, I met an error like this (A is a directory):

  chattr: reading flags on A: Inappropriate ioctl for device

This patch copies compat_ioctl from f2fs_file_operations and fixes this problem.

Signed-off-by: hujianyang <hujianyang@huawei.com>
Reviewed-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  f2fs: do not issue next dnode discard redundantly  (Jaegeuk Kim)
We have a discard map, so we can avoid issuing redundant discards.

Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-06-01  fixing infinite OPEN loop in 4.0 stateid recovery  (Olga Kornievskaia)
Problem: When an operation like WRITE receives a BAD_STATEID, even though the recovery code clears the RECLAIM_NOGRACE recovery flag before recovering the open state, clearing the delegation state for the associated inode causes nfs_inode_find_state_and_recover() to be called, which marks the same state with the RECLAIM_NOGRACE flag again. As a result, when we restart looking over the open states, we end up in an infinite loop instead of breaking out in the next test of state flags.

Solution: unset the RECLAIM_NOGRACE flag that was set by the call to nfs_inode_find_state_and_recover(), after returning from the recover_open() function.

Signed-off-by: Olga Kornievskaia <kolga@netapp.com>
Cc: stable@vger.kernel.org
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
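A sketch of the kind of fix described, assuming it sits in the state-manager loop right after a successful open recovery; the flag and helper names follow the NFS client code, but the exact placement is paraphrased from the description above rather than quoted from the patch:

    /* Sketch: once ->recover_open() has succeeded for this state, drop the
     * RECLAIM_NOGRACE bit that delegation clearing may have re-set, so the
     * restarted scan of open states does not pick this state up forever. */
    status = ops->recover_open(sp, state);
    if (status >= 0) {
        clear_bit(NFS_STATE_RECLAIM_NOGRACE, &state->flags);
        /* ... continue with lock reclaim ... */
    }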
2015-06-01  fs: sysfs: don't pass count == 0 to bin file readers  (Vladimir Zapolskiy)
If count == 0 bytes are requested by a reader, sysfs_kf_bin_read() deliberately returns 0 without passing a potentially harmful value to some externally defined underlying battr->read() function. However, in the case of (pos == size && count), the next clause always sets count to 0 and this value is handed over to battr->read().

The change intends to make obsolete (and remove later) a redundant sanity check in battr->read(), if it is present, or to add more protection for struct bin_attribute users who do not care about input arguments.

Signed-off-by: Vladimir Zapolskiy <vz@mleia.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
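A simplified rendition of the bounds handling being fixed in sysfs_kf_bin_read(); it shows the intent (never hand count == 0 to battr->read()), not the literal patch:

    /* Sketch: clamp the read to the attribute size and bail out early on
     * any zero-length result, so battr->read() never sees count == 0. */
    if (!count)
        return 0;
    if (size) {
        if (pos >= size)        /* covers the pos == size case */
            return 0;
        if (pos + count > size)
            count = size - pos;
    }
    return battr->read(of->file, kobj, battr, buf, pos, count);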
2015-06-01  Merge branch 'xfs-sparse-inode' into for-next  (Dave Chinner)
2015-06-01  Merge branch 'xfs-misc-fixes-for-4.2' into for-next  (Dave Chinner)
2015-06-01  xfs: Clean up xfs_trans_dup_dqinfo  (Nan Jia)
Fixed two missing spaces. Signed-off-by: Nan Jia <jiananmail@gmail.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
2015-05-31  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds)
Pull vfs fix from Al Viro:
 "Off-by-one in d_walk()/__dentry_kill() race fix. It's very hard to hit; possible in the same conditions as the original bug, except that you need the skipped branch to contain all the remaining evictables, so that the d_walk()-calling loop in d_invalidate() decides there's nothing more to do and doesn't go for another pass - otherwise that next pass will sweep the sucker.

  So it's not too urgent, but seeing that the fix is obvious and the original commit has spread into all -stable branches..."

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  d_walk() might skip too much
2015-06-01  xfs: don't cast string literals  (Eric Sandeen)
The commit a9273ca5 ("xfs: convert attr to use unsigned names") added these (unsigned char *) casts, but then the _SIZE macros return "7" - size of a pointer minus one - not the length of the string. This is harmless in the kernel, because the _SIZE macros are not used, but as we sync up with userspace, this will matter.

I don't think the cast is necessary; i.e. assigning the string literal to an unsigned char *, or passing it to a function expecting an unsigned char *, should be ok, right?

Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
2015-06-01  xfs: fix quota block reservation leak when tp allocates and frees blocks  (Brian Foster)
Al Viro reports that generic/231 fails frequently on XFS and bisected the problem to the following commit:

    5d11fb4b xfs: rework zero range to prevent invalid i_size updates

... which is just the first commit that happens to cause fsx to reproduce the problem. fsx reproduces via zero range calls. The aforementioned commit overhauls zero range to use hole punch and fallocate. As it turns out, the problem is reproducible on demand using basic hole punch as follows:

    $ mkfs.xfs -f -m crc=1,finobt=1 <dev>
    $ mount <dev> /mnt -o uquota
    $ xfs_io -f -c "falloc 0 50m" /mnt/file
    $ for i in $(seq 1 20); do xfs_io -c "fpunch ${i}m 32k" /mnt/file; done
    $ rm -f /mnt/file
    $ repquota -us /mnt
    ...
    User            used    soft    hard  grace    used  soft  hard  grace
    ----------------------------------------------------------------------
    root      --     32K      0K      0K              3     0     0

A file is allocated with a single 50m extent. The extent count increases via hole punches until the bmap converts to btree format. The file is removed but quota reports 32k of space usage for the user. This reservation is effectively leaked for the lifetime of the mount.

The reason this occurs is because the quota block reservation tracking is confused when a transaction happens to free and allocate blocks at the same time. Consider the following sequence of events:

- tp is allocated from xfs_free_file_space() and reserves several blocks for btree management. Blocks are reserved against the dquot and marked as such in the transaction (qtrx->qt_blk_res).
- 8 blocks are accounted free when the 32k range is punched out. xfs_trans_mod_dquot() is called with XFS_TRANS_DQ_BCOUNT and sets ->qt_bcount_delta to -8.
- Subsequently, a block is allocated against the same transaction by xfs_bmap_extents_to_btree() for btree conversion. A call to xfs_trans_mod_dquot() increases qt_blk_res_used to 1 and qt_bcount_delta to -7.
- The transaction is dup'd and committed by xfs_bmap_finish(). xfs_trans_dup_dqinfo() sets the first transaction up such that it has a matching qt_blk_res and qt_blk_res_used of 1. The remaining unused reservation is transferred to the duplicate tp.

When the transactions are committed, the dquots are fixed up in xfs_trans_apply_dquot_deltas() according to one of two methods:

1.) If the transaction holds a block reservation (->qt_blk_res != 0), _only_ the unused portion of the reservation is unaccounted from the dquot. Note that the tp duplication behavior of xfs_bmap_finish() makes it such that qt_blk_res is typically 0 for tp's with unused reservation.
2.) Otherwise, the dquot is fixed up based on the block delta (->qt_bcount_delta) created by the transaction.

Therefore, if a transaction has a negative qt_bcount_delta and positive qt_blk_res_used, the former set of blocks that have been removed from the file are never factored out of the in-core dquot reservation. Instead, *_apply_dquot_deltas() sees 1 block used out of a 1 block reservation and believes there is nothing to fix up. The on-disk d_bcount is updated independently from qt_bcount_delta, and thus is correct (and allows the quota usage to correct on remount).

To deal with this situation, we effectively want the "used reservation" part of the transaction to be consistent with any freed blocks with respect to quota tracking. For example, if 8 blocks are freed, the subsequent single block allocation does not need to consume the initial reservation made by the tp. Instead, it simply borrows one from the previously freed.

One possible implementation of such borrowing is to avoid the blks_res_used increment when bcount_delta is negative. This alone is flawed logic in that it only handles the case where blocks are freed before they are allocated, however.

Rather than add more complexity to manage synchronization between bcount_delta and blks_res_used, kill the latter entirely. blk_res_used is only updated in one place and always in sync with delta_bcount. Therefore, the net block reservation consumption of the transaction is always available from bcount_delta. Calculate the reservation consumption on the fly where necessary based on whether the tp has a reservation and results in a positive net block delta on the inode.

Reported-by: Al Viro <viro@ZenIV.linux.org.uk>
Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
2015-06-01  xfs: always log the inode on unwritten extent conversion  (Brian Foster)
The fsync() requirements for crash consistency on XFS are to flush file data and force any in-core inode updates to the log. We currently check whether the inode is pinned to identify whether the log needs to be forced, since a non-zero pin count generally represents an inode that has transactions awaiting a flush to the on-disk log. This is not sufficient in all cases, however.

Reports of xfstests test generic/311 failures on ppc64/s390x hosts have identified failures to fsync outstanding inode modifications due to the inode not being pinned at the time of the fsync. This occurs because certain bmap updates can complete by logging bmapbt buffers but without ever dirtying (and thus pinning) the core inode. The following is a specific incarnation of this problem:

    $ mount $dev /mnt -o noatime,nobarrier
    $ for i in $(seq 0 2 31); do \
        xfs_io -f -c "falloc $((i * 32768)) 32k" -c fsync /mnt/file; \
      done
    $ xfs_io -c "pwrite -S 0 80k 16k" -c fsync -c "pwrite 76k 4k" -c fsync /mnt/file; \
      hexdump /mnt/file; \
      ./xfstests-dev/src/godown /mnt
    ...
    0000000 0000 0000 0000 0000 0000 0000 0000 0000
    *
    0013000 cdcd cdcd cdcd cdcd cdcd cdcd cdcd cdcd
    *
    0014000 0000 0000 0000 0000 0000 0000 0000 0000
    *
    00f8000

    $ umount /mnt; mount ...
    $ hexdump /mnt/file
    0000000 0000 0000 0000 0000 0000 0000 0000 0000
    *
    00f8000

In short, the unwritten extent conversion for the last write is lost despite the fact that an fsync executed before the filesystem was shut down. Note that this is impossible to reproduce on v5 supers due to unconditional time callbacks for di_changecount and highly difficult to reproduce on CONFIG_HZ=1000 kernels due to those same callbacks frequently updating cmtime prior to the bmap update. CONFIG_HZ=100 reduces timer granularity enough to increase the odds that time updates are skipped and allows this to reproduce within a handful of attempts.

To deal with this problem, unconditionally log the core in the unwritten extent conversion path. Fix up logflags after the extent conversion to keep the extent update code consistent with the other extent update helpers. This fixup is not necessary for the other (hole, delay) extent helpers because they execute in the block allocation codepath, which already logs the inode for other reasons (e.g., for di_nblocks).

Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
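A sketch of the core of the change as described above: force XFS_ILOG_CORE into the logflags in the unwritten extent conversion path. The call and field names are assumed from that era's code and are illustrative, not the verbatim patch:

    /* Sketch: after converting an unwritten extent, log the inode core even
     * if only bmapbt buffers were dirtied, so a later fsync() sees a dirty,
     * pinned inode and forces the log. */
    error = xfs_bmap_add_extent_unwritten_real(bma->tp, bma->ip, &bma->idx,
                                               &bma->cur, mval, bma->firstblock,
                                               bma->flist, &tmp_logflags);
    /* Unconditionally log the core in the unwritten conversion path. */
    bma->logflags |= tmp_logflags | XFS_ILOG_CORE;
    if (error)
        return error;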
2015-05-31  ext4 crypto: release crypto resource on module exit  (Chao Yu)
Crypto resources should be released when the ext4 module exits; otherwise they cause a memory leak.

Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2015-05-31  ext4 crypto: handle unexpected lack of encryption keys  (Theodore Ts'o)
Fix up attempts by users to write to a file when they don't have access to the encryption key.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2015-05-31  ext4 crypto: allocate the right amount of memory for the on-disk symlink  (Theodore Ts'o)
Previously we were not taking the required padding into account when allocating space for the on-disk symlink. This caused a buffer overrun which could trigger a kernel crash when running fsstress.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>