From c4843a7593a9df3ff5b1806084cefdfa81dd7c79 Mon Sep 17 00:00:00 2001
From: Greg Thelen
Date: Fri, 22 May 2015 17:13:16 -0400
Subject: memcg: add per cgroup dirty page accounting

When modifying PG_Dirty on cached file pages, update the new
MEM_CGROUP_STAT_DIRTY counter.  This is done in the same places where
the global NR_FILE_DIRTY counter is managed.  The new memcg stat is
visible in the per memcg memory.stat cgroupfs file.

The most recent past attempt at this was
http://thread.gmane.org/gmane.linux.kernel.cgroups/8632

The new accounting supports future efforts to add per cgroup dirty
page throttling and writeback.  It also helps an administrator break
down a container's memory usage and provides evidence to understand
memcg oom kills (the new dirty count is included in memcg oom kill
messages).

The ability to move page accounting between memcgs
(memory.move_charge_at_immigrate) makes this accounting more
complicated than the global counter.  The existing
mem_cgroup_{begin,end}_page_stat() lock is used to serialize move
accounting with stat updates.  Typical update operation:

	memcg = mem_cgroup_begin_page_stat(page)
	if (TestSetPageDirty()) {
		[...]
		mem_cgroup_update_page_stat(memcg)
	}
	mem_cgroup_end_page_stat(memcg)

Summary of mem_cgroup_end_page_stat() overhead:
- Without CONFIG_MEMCG it's a no-op.
- With CONFIG_MEMCG and no inter memcg task movement, it's just
  rcu_read_lock().
- With CONFIG_MEMCG and inter memcg task movement, it's
  rcu_read_lock() + spin_lock_irqsave().

A memcg parameter is added to several routines because their callers
now grab mem_cgroup_begin_page_stat(), which returns the memcg later
needed by mem_cgroup_update_page_stat().

Because mem_cgroup_begin_page_stat() may disable interrupts, some
adjustments are needed:
- Move __mark_inode_dirty() from __set_page_dirty() to its caller.
  __mark_inode_dirty() locking does not want interrupts disabled.
- Use spin_lock_irqsave(tree_lock) rather than spin_lock_irq() in
  __delete_from_page_cache(), replace_page_cache_page(),
  invalidate_complete_page2(), and __remove_mapping().
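
Written out as a sketch, a set-dirty path using this protocol looks
like the following.  The function itself is hypothetical; only the
helpers and the MEM_CGROUP_STAT_DIRTY index come from this patch.

#include <linux/memcontrol.h>
#include <linux/mm.h>

/* Sketch: account a clean->dirty transition under the page-stat
 * lock.  Radix-tree tagging and inode bookkeeping are elided. */
int set_page_dirty_sketch(struct page *page)
{
	struct mem_cgroup *memcg;
	int newly_dirtied = 0;

	/* Pin the page's memcg and serialize against memcg moves. */
	memcg = mem_cgroup_begin_page_stat(page);

	if (!TestSetPageDirty(page)) {
		/* Page went clean -> dirty: account to the memcg... */
		mem_cgroup_inc_page_stat(memcg, MEM_CGROUP_STAT_DIRTY);
		/* ...and to the global counter, as before. */
		inc_zone_page_state(page, NR_FILE_DIRTY);
		newly_dirtied = 1;
	}

	/* No-op, rcu_read_unlock(), or unlock + irq restore, per the
	 * overhead summary above. */
	mem_cgroup_end_page_stat(memcg);
	return newly_dirtied;
}

Note the polarity: the stat update happens only when
TestSetPageDirty() reports the page was previously clean.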
   text    data     bss      dec    hex filename
8925147 1774832 1785856 12485835 be84cb vmlinux-!CONFIG_MEMCG-before
8925339 1774832 1785856 12486027 be858b vmlinux-!CONFIG_MEMCG-after
                                        +192 text bytes
8965977 1784992 1785856 12536825 bf4bf9 vmlinux-CONFIG_MEMCG-before
8966750 1784992 1785856 12537598 bf4efe vmlinux-CONFIG_MEMCG-after
                                        +773 text bytes

Performance tests run on v4.0-rc1-36-g4f671fe2f952.  Lower is better
for all metrics; they're all wall clock or cycle counts.  The read and
write fault benchmarks just measure fault time; they do not include
I/O time.

* CONFIG_MEMCG not set:
                    baseline                         patched
kbuild              1m25.030000(+-0.088% 3 samples)  1m25.426667(+-0.120% 3 samples)
dd write 100 MiB    0.859211561 +-15.10%             0.874162885 +-15.03%
dd write 200 MiB    1.670653105 +-17.87%             1.669384764 +-11.99%
dd write 1000 MiB   8.434691190 +-14.15%             8.474733215 +-14.77%
read fault cycles   254.0(+-0.000% 10 samples)       253.0(+-0.000% 10 samples)
write fault cycles  2021.2(+-3.070% 10 samples)      1984.5(+-1.036% 10 samples)

* CONFIG_MEMCG=y root_memcg:
                    baseline                         patched
kbuild              1m25.716667(+-0.105% 3 samples)  1m25.686667(+-0.153% 3 samples)
dd write 100 MiB    0.855650830 +-14.90%             0.887557919 +-14.90%
dd write 200 MiB    1.688322953 +-12.72%             1.667682724 +-13.33%
dd write 1000 MiB   8.418601605 +-14.30%             8.673532299 +-15.00%
read fault cycles   266.0(+-0.000% 10 samples)       266.0(+-0.000% 10 samples)
write fault cycles  2051.7(+-1.349% 10 samples)      2049.6(+-1.686% 10 samples)

* CONFIG_MEMCG=y non-root_memcg:
                    baseline                         patched
kbuild              1m26.120000(+-0.273% 3 samples)  1m25.763333(+-0.127% 3 samples)
dd write 100 MiB    0.861723964 +-15.25%             0.818129350 +-14.82%
dd write 200 MiB    1.669887569 +-13.30%             1.698645885 +-13.27%
dd write 1000 MiB   8.383191730 +-14.65%             8.351742280 +-14.52%
read fault cycles   265.7(+-0.172% 10 samples)       267.0(+-0.000% 10 samples)
write fault cycles  2070.6(+-1.512% 10 samples)      2084.4(+-2.148% 10 samples)

As expected, anon page faults are not affected by this patch.

tj: Updated to apply on top of the recent cancel_dirty_page() changes.

Signed-off-by: Sha Zhengju
Signed-off-by: Greg Thelen
Signed-off-by: Tejun Heo
Signed-off-by: Jens Axboe
---
 Documentation/cgroups/memory.txt | 1 +
 1 file changed, 1 insertion(+)

(limited to 'Documentation')

diff --git a/Documentation/cgroups/memory.txt b/Documentation/cgroups/memory.txt
index f456b4315e86..ff71e16cc752 100644
--- a/Documentation/cgroups/memory.txt
+++ b/Documentation/cgroups/memory.txt
@@ -493,6 +493,7 @@ pgpgin		- # of charging events to the memory cgroup. The charging
 pgpgout		- # of uncharging events to the memory cgroup. The uncharging
 		event happens each time a page is unaccounted from the cgroup.
 swap		- # of bytes of swap usage
+dirty		- # of bytes that are waiting to get written back to the disk.
 writeback	- # of bytes of file/anon cache that are queued for syncing to
 		disk.
 inactive_anon	- # of bytes of anonymous and swap cache memory on inactive
--
cgit

From 3e1534cf4a2a8278e811e7c84a79da1a02347b8b Mon Sep 17 00:00:00 2001
From: Tejun Heo
Date: Tue, 16 Jun 2015 18:48:32 -0400
Subject: writeback, blkio: add documentation for cgroup writeback support

Update Documentation/cgroups/blkio-controller.txt to reflect the
recently added cgroup writeback support.

Signed-off-by: Tejun Heo
Cc: Li Zefan
Cc: Vivek Goyal
Cc: cgroups@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Jens Axboe
---
 Documentation/cgroups/blkio-controller.txt | 83 ++++++++++++++++++++++++++++--
 1 file changed, 78 insertions(+), 5 deletions(-)

(limited to 'Documentation')

diff --git a/Documentation/cgroups/blkio-controller.txt b/Documentation/cgroups/blkio-controller.txt
index cd556b914786..68b6a6a470b0 100644
--- a/Documentation/cgroups/blkio-controller.txt
+++ b/Documentation/cgroups/blkio-controller.txt
@@ -387,8 +387,81 @@ groups and put applications in that group which are not driving enough
 IO to keep disk busy. In that case set group_idle=0, and CFQ will not idle
 on individual groups and throughput should improve.

-What works
-==========
-- Currently only sync IO queues are support. All the buffered writes are
-  still system wide and not per group. Hence we will not see service
-  differentiation between buffered writes between groups.
+Writeback
+=========
+
+Page cache is dirtied through buffered writes and shared mmaps and
+written asynchronously to the backing filesystem by the writeback
+mechanism.  Writeback sits between the memory and IO domains and
+regulates the proportion of dirty memory by balancing dirtying and
+write IOs.
+
+On traditional cgroup hierarchies, relationships between different
+controllers cannot be established, making it impossible for writeback
+to operate while accounting for cgroup resource restrictions; all
+writeback IOs are attributed to the root cgroup.
+
+If both the blkio and memory controllers are used on the v2 hierarchy
+and the filesystem supports cgroup writeback, writeback operations
+correctly follow the resource restrictions imposed by both memory and
+blkio controllers.
+
+Writeback examines both system-wide and per-cgroup dirty memory
+status and enforces the more restrictive of the two.  Also, writeback
+control parameters which are absolute values - vm.dirty_bytes and
+vm.dirty_background_bytes - are distributed across cgroups according
+to their current writeback bandwidth.
+
+There's a peculiarity stemming from the discrepancy in ownership
+granularity between the memory controller and writeback.  While the
+memory controller tracks ownership per page, writeback operates on a
+per-inode basis.  cgroup writeback bridges the gap by tracking
+ownership by inode but migrating ownership if too many foreign pages,
+pages which don't match the current inode ownership, have been
+encountered while writing back the inode.
+
+This is a conscious design choice as writeback operations are
+inherently tied to inodes, making strictly following page ownership
+complicated and inefficient.  The only use case which suffers from
+this compromise is multiple cgroups concurrently dirtying disjoint
+regions of the same inode, which is an unlikely use case and decided
+to be unsupported.  Note that as the memory controller assigns page
+ownership on first use and doesn't update it until the page is
+released, even if cgroup writeback strictly followed page ownership,
+multiple cgroups dirtying overlapping areas wouldn't work as
+expected.  In general, write-sharing an inode across multiple cgroups
+is not well supported.
+
+Filesystem support for cgroup writeback
+---------------------------------------
+
+A filesystem can make writeback IOs cgroup-aware by updating
+address_space_operations->writepage[s]() to annotate bio's using the
+following two functions.
+
+* wbc_init_bio(@wbc, @bio)
+
+  Should be called for each bio carrying writeback data and
+  associates the bio with the inode's owner cgroup.  Can be called
+  anytime between bio allocation and submission.
+
+* wbc_account_io(@wbc, @page, @bytes)
+
+  Should be called for each data segment being written out.  While
+  this function doesn't care exactly when it's called during the
+  writeback session, it's easiest and most natural to call it as data
+  segments are added to a bio.
+
+With writeback bio's annotated, cgroup support can be enabled per
+super_block by setting MS_CGROUPWB in ->s_flags.  This allows for
+selective disabling of cgroup writeback support, which is helpful
+when certain filesystem features, e.g. journaled data mode, are
+incompatible.
+
+wbc_init_bio() binds the specified bio to its cgroup.  Depending on
+the configuration, the bio may be executed at a lower priority and,
+if the writeback session is holding shared resources, e.g.
+a journal entry, this may lead to priority inversion.  There is no
+one easy solution for the problem.  Filesystems can try to work
+around specific problem cases by skipping wbc_init_bio() or using
+bio_associate_blkcg() directly.
--
cgit
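
To make the two annotation calls concrete, here is a minimal sketch
of a cgroup-aware ->writepage() against the API described above.  It
is an illustration, not code from the tree: the myfs_* names, the
single-page bio, and the 1:1 file-offset-to-disk mapping are
assumptions made for brevity.

#include <linux/bio.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/writeback.h>

/* Hypothetical completion handler: finish writeback on the page. */
static void myfs_end_io(struct bio *bio, int err)
{
	end_page_writeback(bio->bi_io_vec[0].bv_page);
	bio_put(bio);
}

/* Sketch of a cgroup-aware ->writepage().  Error handling and real
 * block mapping are elided; a 1:1 file-offset-to-disk mapping is
 * assumed purely for illustration. */
static int myfs_writepage(struct page *page, struct writeback_control *wbc)
{
	struct inode *inode = page->mapping->host;
	struct bio *bio = bio_alloc(GFP_NOFS, 1);

	bio->bi_bdev = inode->i_sb->s_bdev;
	bio->bi_iter.bi_sector = (sector_t)page->index << (PAGE_SHIFT - 9);
	bio->bi_end_io = myfs_end_io;

	/* Associate the bio with the inode's owner cgroup; legal
	 * anytime between bio allocation and submission. */
	wbc_init_bio(wbc, bio);

	bio_add_page(bio, page, PAGE_SIZE, 0);

	/* Attribute the bytes to the owning cgroup; most natural to
	 * do as each data segment is added. */
	wbc_account_io(wbc, page, PAGE_SIZE);

	set_page_writeback(page);
	unlock_page(page);
	submit_bio(WRITE, bio);
	return 0;
}

A filesystem opting in would additionally set the flag in its
fill_super path, e.g. sb->s_flags |= MS_CGROUPWB;, and could skip
wbc_init_bio() for IOs where cgroup attribution would cause the
priority-inversion problem noted above.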