2019-11-18perf probe: Generate event name with line numberMasami Hiramatsu
Generate the event name from the function name plus the line number, as <function>_L<line_number>. Note that this is only for new events defined by a line number within a function (except for line 0). If there is another event on the same line, you have to use the "-f" option; in that case, the new event gets a "_1" suffix. e.g. # perf probe -a kernel_read:2 Added new event: probe:kernel_read_L2 (on kernel_read:2) You can now use it in all perf tools, such as: perf record -e probe:kernel_read_L2 -aR sleep 1 But if we omit the line number or use line 0, the event will have no suffix. # perf probe -a kernel_read:0 Added new event: probe:kernel_read (on kernel_read) You can now use it in all perf tools, such as: perf record -e probe:kernel_read -aR sleep 1 probe:kernel_read (on kernel_read@linux-5.0.0/fs/read_write.c) probe:kernel_read_L2 (on kernel_read:2@linux-5.0.0/fs/read_write.c) Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Tom Zanussi <tom.zanussi@linux.intel.com> Link: http://lore.kernel.org/lkml/157406474026.24476.2828897745502059569.stgit@devnote2 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-18perf probe: Do not show non representative lines by perf-probe -LMasami Hiramatsu
Since perf probe -L shows non-representative lines, it can mislead users about where they can put probes. This change stops showing such non-representative lines, so that users can understand which lines they can probe. # perf probe -L kernel_read <kernel_read@/build/linux-pvZVvI/linux-5.0.0/fs/read_write.c:0> 0 ssize_t kernel_read(struct file *file, void *buf, size_t count, loff_t *pos) { 2 mm_segment_t old_fs; ssize_t result; old_fs = get_fs(); 6 set_fs(get_ds()); /* The cast to a user pointer is valid due to the set_fs() */ 8 result = vfs_read(file, (void __user *)buf, count, pos); 9 set_fs(old_fs); 10 return result; } EXPORT_SYMBOL(kernel_read); Committer testing: Before: # perf probe -L kernel_read <kernel_read@/usr/src/debug/kernel-5.3.fc30/linux-5.3.8-200.fc30.x86_64/fs/read_write.c:0> 0 ssize_t kernel_read(struct file *file, void *buf, size_t count, loff_t *pos) 1 { 2 mm_segment_t old_fs; 3 ssize_t result; 5 old_fs = get_fs(); 6 set_fs(KERNEL_DS); /* The cast to a user pointer is valid due to the set_fs() */ 8 result = vfs_read(file, (void __user *)buf, count, pos); 9 set_fs(old_fs); 10 return result; } EXPORT_SYMBOL(kernel_read); # See the 1, 3, 5 lines? They shouldn't be there. After this patch: # perf probe -L kernel_read <kernel_read@/usr/src/debug/kernel-5.3.fc30/linux-5.3.8-200.fc30.x86_64/fs/read_write.c:0> 0 ssize_t kernel_read(struct file *file, void *buf, size_t count, loff_t *pos) { 2 mm_segment_t old_fs; ssize_t result; old_fs = get_fs(); 6 set_fs(KERNEL_DS); /* The cast to a user pointer is valid due to the set_fs() */ 8 result = vfs_read(file, (void __user *)buf, count, pos); 9 set_fs(old_fs); 10 return result; } EXPORT_SYMBOL(kernel_read); # Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Tom Zanussi <tom.zanussi@linux.intel.com> Link: http://lore.kernel.org/lkml/157406473064.24476.2913278267727587314.stgit@devnote2 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-18perf probe: Verify given line is a representative lineMasami Hiramatsu
Verify that a user-given probe line is a representative line (one which doesn't share its address with other lines, or which is the least line number among the lines sharing the same address), and if not, show what the representative line is. Without this fix, users can put a probe on a line which is not a representative line. But since it is not a representative line, perf probe -l shows the representative line number instead of the user-given line number. e.g. (put kernel_read:3, but listed as kernel_read:2) # perf probe -a kernel_read:3 Added new event: probe:kernel_read (on kernel_read:3) You can now use it in all perf tools, such as: perf record -e probe:kernel_read -aR sleep 1 # perf probe -l probe:kernel_read (on kernel_read:2@linux-5.0.0/fs/read_write.c) With this fix, perf probe doesn't allow users to put a probe on a non-representative line, and tells what the representative line is. # perf probe -a kernel_read:3 This line is sharing the address with other lines. Please try to probe at kernel_read:2 instead. Error: Failed to add events. Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Tom Zanussi <tom.zanussi@linux.intel.com> Link: http://lore.kernel.org/lkml/157406472071.24476.14915451439785001021.stgit@devnote2 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-18perf probe: Show correct statement line number by perf probe -lMasami Hiramatsu
The dwarf_getsrc_die() can return a line which is neither a statement nor the least line number among the lines which share the same address. This can lead to perf probe --list showing an incorrect line number for the probed address. To fix this, introduce cu_getsrc_die(), which returns only a statement line that is also the least line number (we call it the representative line for an address), and use it in cu_find_lineinfo(). Also, if the given address is the entry address of a real function, cu_find_lineinfo() returns the function's declared line number instead of the start line number of the function body. For example, without this change perf probe -l shows an incorrect line as below. # perf probe -a kernel_read:2 Added new event: probe:kernel_read (on kernel_read:2) You can now use it in all perf tools, such as: perf record -e probe:kernel_read -aR sleep 1 # perf probe -l probe:kernel_read (on kernel_read:1@linux-5.0.0/fs/read_write.c) With this fix, it shows the correct line number as below: # perf probe -l probe:kernel_read (on kernel_read:2@linux-5.0.0/fs/read_write.c) Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Tom Zanussi <tom.zanussi@linux.intel.com> Link: http://lore.kernel.org/lkml/157406471067.24476.17463149618465494448.stgit@devnote2 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-18x86/insn: Add some Intel instructions to the opcode mapAdrian Hunter
Add to the opcode map the following instructions: cldemote tpause umonitor umwait movdiri movdir64b enqcmd enqcmds encls enclu enclv pconfig wbnoinvd For information about the instructions, refer to the Intel SDM May 2019 (325462-070US) and the Intel Architecture Instruction Set Extensions May 2019 (319433-037). The instruction decoding can be tested using the perf tools' "x86 instruction decoder - new instructions" test as follows: $ perf test -v "new " 2>&1 | grep -i cldemote Decoded ok: 0f 1c 00 cldemote (%eax) Decoded ok: 0f 1c 05 78 56 34 12 cldemote 0x12345678 Decoded ok: 0f 1c 84 c8 78 56 34 12 cldemote 0x12345678(%eax,%ecx,8) Decoded ok: 0f 1c 00 cldemote (%rax) Decoded ok: 41 0f 1c 00 cldemote (%r8) Decoded ok: 0f 1c 04 25 78 56 34 12 cldemote 0x12345678 Decoded ok: 0f 1c 84 c8 78 56 34 12 cldemote 0x12345678(%rax,%rcx,8) Decoded ok: 41 0f 1c 84 c8 78 56 34 12 cldemote 0x12345678(%r8,%rcx,8) $ perf test -v "new " 2>&1 | grep -i tpause Decoded ok: 66 0f ae f3 tpause %ebx Decoded ok: 66 0f ae f3 tpause %ebx Decoded ok: 66 41 0f ae f0 tpause %r8d $ perf test -v "new " 2>&1 | grep -i umonitor Decoded ok: 67 f3 0f ae f0 umonitor %ax Decoded ok: f3 0f ae f0 umonitor %eax Decoded ok: 67 f3 0f ae f0 umonitor %eax Decoded ok: f3 0f ae f0 umonitor %rax Decoded ok: 67 f3 41 0f ae f0 umonitor %r8d $ perf test -v "new " 2>&1 | grep -i umwait Decoded ok: f2 0f ae f0 umwait %eax Decoded ok: f2 0f ae f0 umwait %eax Decoded ok: f2 41 0f ae f0 umwait %r8d $ perf test -v "new " 2>&1 | grep -i movdiri Decoded ok: 0f 38 f9 03 movdiri %eax,(%ebx) Decoded ok: 0f 38 f9 88 78 56 34 12 movdiri %ecx,0x12345678(%eax) Decoded ok: 48 0f 38 f9 03 movdiri %rax,(%rbx) Decoded ok: 48 0f 38 f9 88 78 56 34 12 movdiri %rcx,0x12345678(%rax) $ perf test -v "new " 2>&1 | grep -i movdir64b Decoded ok: 66 0f 38 f8 18 movdir64b (%eax),%ebx Decoded ok: 66 0f 38 f8 88 78 56 34 12 movdir64b 0x12345678(%eax),%ecx Decoded ok: 67 66 0f 38 f8 1c movdir64b (%si),%bx Decoded ok: 67 66 0f 38 f8 8c 34 12 movdir64b 0x1234(%si),%cx Decoded ok: 66 0f 38 f8 18 movdir64b (%rax),%rbx Decoded ok: 66 0f 38 f8 88 78 56 34 12 movdir64b 0x12345678(%rax),%rcx Decoded ok: 67 66 0f 38 f8 18 movdir64b (%eax),%ebx Decoded ok: 67 66 0f 38 f8 88 78 56 34 12 movdir64b 0x12345678(%eax),%ecx $ perf test -v "new " 2>&1 | grep -i enqcmd Decoded ok: f2 0f 38 f8 18 enqcmd (%eax),%ebx Decoded ok: f2 0f 38 f8 88 78 56 34 12 enqcmd 0x12345678(%eax),%ecx Decoded ok: 67 f2 0f 38 f8 1c enqcmd (%si),%bx Decoded ok: 67 f2 0f 38 f8 8c 34 12 enqcmd 0x1234(%si),%cx Decoded ok: f3 0f 38 f8 18 enqcmds (%eax),%ebx Decoded ok: f3 0f 38 f8 88 78 56 34 12 enqcmds 0x12345678(%eax),%ecx Decoded ok: 67 f3 0f 38 f8 1c enqcmds (%si),%bx Decoded ok: 67 f3 0f 38 f8 8c 34 12 enqcmds 0x1234(%si),%cx Decoded ok: f2 0f 38 f8 18 enqcmd (%rax),%rbx Decoded ok: f2 0f 38 f8 88 78 56 34 12 enqcmd 0x12345678(%rax),%rcx Decoded ok: 67 f2 0f 38 f8 18 enqcmd (%eax),%ebx Decoded ok: 67 f2 0f 38 f8 88 78 56 34 12 enqcmd 0x12345678(%eax),%ecx Decoded ok: f3 0f 38 f8 18 enqcmds (%rax),%rbx Decoded ok: f3 0f 38 f8 88 78 56 34 12 enqcmds 0x12345678(%rax),%rcx Decoded ok: 67 f3 0f 38 f8 18 enqcmds (%eax),%ebx Decoded ok: 67 f3 0f 38 f8 88 78 56 34 12 enqcmds 0x12345678(%eax),%ecx $ perf test -v "new " 2>&1 | grep -i enqcmds Decoded ok: f3 0f 38 f8 18 enqcmds (%eax),%ebx Decoded ok: f3 0f 38 f8 88 78 56 34 12 enqcmds 0x12345678(%eax),%ecx Decoded ok: 67 f3 0f 38 f8 1c enqcmds (%si),%bx Decoded ok: 67 f3 0f 38 f8 8c 34 12 enqcmds 0x1234(%si),%cx Decoded ok: f3 0f 38 f8 18 enqcmds (%rax),%rbx Decoded ok: f3 0f 38 f8 88 78 56 34 12 enqcmds 0x12345678(%rax),%rcx Decoded ok: 67 f3 0f 38 f8 18 enqcmds (%eax),%ebx Decoded ok: 67 f3 0f 38 f8 88 78 56 34 12 enqcmds 0x12345678(%eax),%ecx $ perf test -v "new " 2>&1 | grep -i encls Decoded ok: 0f 01 cf encls Decoded ok: 0f 01 cf encls $ perf test -v "new " 2>&1 | grep -i enclu Decoded ok: 0f 01 d7 enclu Decoded ok: 0f 01 d7 enclu $ perf test -v "new " 2>&1 | grep -i enclv Decoded ok: 0f 01 c0 enclv Decoded ok: 0f 01 c0 enclv $ perf test -v "new " 2>&1 | grep -i pconfig Decoded ok: 0f 01 c5 pconfig Decoded ok: 0f 01 c5 pconfig $ perf test -v "new " 2>&1 | grep -i wbnoinvd Decoded ok: f3 0f 09 wbnoinvd Decoded ok: f3 0f 09 wbnoinvd Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Reviewed-by: Andi Kleen <ak@linux.intel.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: x86@kernel.org Link: http://lore.kernel.org/lkml/20191115135447.6519-3-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-18x86/insn: perf tools: Add some instructions to the new instructions testAdrian Hunter
Add to the "x86 instruction decoder - new instructions" test the following instructions: cldemote tpause umonitor umwait movdiri movdir64b enqcmd enqcmds encls enclu enclv pconfig wbnoinvd For information about the instructions, refer Intel SDM May 2019 (325462-070US) and Intel Architecture Instruction Set Extensions May 2019 (319433-037). Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Reviewed-by: Andi Kleen <ak@linux.intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: x86@kernel.org Link: http://lore.kernel.org/lkml/20191115135447.6519-2-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-18selftests, bpf: Workaround an alu32 sub-register spilling issueYonghong Song
Currently, with latest llvm trunk, selftest test_progs fails on the obj file test_seg6_loop.o with the following error in the verifier: infinite loop detected at insn 76 The byte code sequence looks like below; note that alu32 has been turned on by default for better generated code in general: 48: w3 = 100 49: *(u32 *)(r10 - 68) = r3 ... ; if (tlv.type == SR6_TLV_PADDING) { 76: if w3 == 5 goto -18 <LBB0_19> ... 85: r1 = *(u32 *)(r10 - 68) ; for (int i = 0; i < 100; i++) { 86: w1 += -1 87: if w1 == 0 goto +5 <LBB0_20> 88: *(u32 *)(r10 - 68) = r1 The main reason for the verification failure is partial spills at r10 - 68 for the induction variable "i". The current verifier only handles spills of 8-byte values. The above 4-byte value spilled to the stack is treated as STACK_MISC and its content is not saved. For the above example: w3 = 100 R3_w=inv100 fp-64_w=inv1086626730498 *(u32 *)(r10 - 68) = r3 R3_w=inv100 fp-64_w=inv1086626730498 ... r1 = *(u32 *)(r10 - 68) R1_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) fp-64=inv1086626730498 To resolve this issue, the verifier needs to be extended to track sub-registers in spilling, or llvm needs to be enhanced to prevent sub-register spilling in the register allocation phase. The former will increase verifier complexity and the latter will need some llvm "hacking". Let us work around this issue by declaring the induction variable as "long" type so spilling will happen at the non-sub-register level. We can revisit this later if sub-register spilling causes similar or other verification issues. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20191117214036.1309510-1-yhs@fb.com
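The workaround, sketched as a minimal C fragment (the loop body is a hypothetical stand-in for the TLV-parsing work in test_seg6_loop.c; only the induction variable's type matters):

    static int walk_tlvs(void)
    {
            /* was: int i; -- "long" forces a full 8-byte spill slot, which
             * the verifier tracks, instead of a 4-byte sub-register spill */
            long i;
            int found = 0;

            for (i = 0; i < 100; i++)
                    found += 1;     /* stand-in for the real per-TLV work */
            return found;
    }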
2019-11-18selftests, bpf: Fix test_tc_tunnel hangingJiri Benc
When run_kselftests.sh is run, it hangs after test_tc_tunnel.sh. The reason is that test_tc_tunnel.sh ensures the server ('nc -l') is running all the time, restarting it every time it is expected to terminate. The exception is the final client_connect: the server is not started anymore, which ensures no process is kept running after the test is finished. For a sit test, though, the script is terminated prematurely without the final client_connect and the 'nc' process keeps running. This in turn causes the run_one function in kselftest/runner.sh to hang forever, waiting for the runaway process to finish. Ensure a remaining server is terminated on cleanup. Fixes: f6ad6accaa99 ("selftests/bpf: expand test_tc_tunnel with SIT encap") Signed-off-by: Jiri Benc <jbenc@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Willem de Bruijn <willemb@google.com> Link: https://lore.kernel.org/bpf/60919291657a9ee89c708d8aababc28ebe1420be.1573821780.git.jbenc@redhat.com
2019-11-18selftests, bpf: xdping is not meant to be run standaloneJiri Benc
The actual test to run is test_xdping.sh, which is already in TEST_PROGS. The xdping program alone is not runnable with 'make run_tests'; it immediately fails due to missing arguments. Move xdping to TEST_GEN_PROGS_EXTENDED in order to be built but not run. Fixes: cd5385029f1d ("selftests/bpf: measure RTT from xdp using xdping") Signed-off-by: Jiri Benc <jbenc@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alan Maguire <alan.maguire@oracle.com> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/4365c81198f62521344c2215909634407184387e.1573821726.git.jbenc@redhat.com
2019-11-18perf map: Move seldom used ->flags field to second cachelineArnaldo Carvalho de Melo
So we start with: $ pahole -C map ~/bin/perf struct map { union { struct rb_node rb_node __attribute__((__aligned__(8))); /* 0 24 */ struct list_head node; /* 0 16 */ } __attribute__((__aligned__(8))); /* 0 24 */ u64 start; /* 24 8 */ u64 end; /* 32 8 */ _Bool erange_warned:1; /* 40: 0 1 */ _Bool priv:1; /* 40: 1 1 */ /* XXX 6 bits hole, try to pack */ /* XXX 3 bytes hole, try to pack */ u32 prot; /* 44 4 */ u32 flags; /* 48 4 */ /* XXX 4 bytes hole, try to pack */ u64 pgoff; /* 56 8 */ /* --- cacheline 1 boundary (64 bytes) --- */ u64 reloc; /* 64 8 */ u32 maj; /* 72 4 */ u32 min; /* 76 4 */ u64 ino; /* 80 8 */ u64 ino_generation; /* 88 8 */ u64 (*map_ip)(struct map *, u64); /* 96 8 */ u64 (*unmap_ip)(struct map *, u64); /* 104 8 */ struct dso * dso; /* 112 8 */ refcount_t refcnt; /* 120 4 */ /* size: 128, cachelines: 2, members: 17 */ /* sum members: 116, holes: 2, sum holes: 7 */ /* sum bitfield members: 2 bits, bit holes: 1, sum bit holes: 6 bits */ /* padding: 4 */ /* forced alignments: 1 */ } __attribute__((__aligned__(8))); $ Since 'flags' is seldom used when printing details about the map or with the "cacheline" sort order, we can move it to the second cacheline, which will allow combining it with 'refcnt', which is only four bytes: $ pahole -C map ~/bin/perf struct map { union { struct rb_node rb_node __attribute__((__aligned__(8))); /* 0 24 */ struct list_head node; /* 0 16 */ } __attribute__((__aligned__(8))); /* 0 24 */ u64 start; /* 24 8 */ u64 end; /* 32 8 */ _Bool erange_warned:1; /* 40: 0 1 */ _Bool priv:1; /* 40: 1 1 */ /* XXX 6 bits hole, try to pack */ /* XXX 3 bytes hole, try to pack */ u32 prot; /* 44 4 */ u64 pgoff; /* 48 8 */ u64 reloc; /* 56 8 */ /* --- cacheline 1 boundary (64 bytes) --- */ u32 maj; /* 64 4 */ u32 min; /* 68 4 */ u64 ino; /* 72 8 */ u64 ino_generation; /* 80 8 */ u64 (*map_ip)(struct map *, u64); /* 88 8 */ u64 (*unmap_ip)(struct map *, u64); /* 96 8 */ struct dso * dso; /* 104 8 */ refcount_t refcnt; /* 112 4 */ u32 flags; /* 116 4 */ /* size: 120, cachelines: 2, members: 17 */ /* sum members: 116, holes: 1, sum holes: 3 */ /* sum bitfield members: 2 bits, bit holes: 1, sum bit holes: 6 bits */ /* forced alignments: 1 */ /* last cacheline: 56 bytes */ } __attribute__((__aligned__(8))); $ Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-2cdw3zlw1mkamaf7nqtdlxfi@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
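An abbreviated sketch of the resulting declaration order (only the tail of the struct shown; see the pahole output above for the full layout):

    struct map {
            /* ... first cacheline: start, end, bitfields, prot, pgoff ... */
            u64 (*map_ip)(struct map *, u64);
            u64 (*unmap_ip)(struct map *, u64);
            struct dso *dso;
            refcount_t refcnt;
            u32 flags;  /* moved here: pairs with the 4-byte refcnt, closing the hole */
    };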
2019-11-18perf map: Use bitmap for booleansArnaldo Carvalho de Melo
The map->priv and map->erange_warned members are seldom used: the first only in tests/vmlinux-kallsyms.c, the latter only when hist_entry__inc_addr_samples() returns -ERANGE in 'perf top'. These are really rare occasions, so make them a bool bitfield. This will open up space for other members on the first cacheline. $ pahole -C map ~/bin/perf struct map { union { struct rb_node rb_node __attribute__((__aligned__(8))); /* 0 24 */ struct list_head node; /* 0 16 */ } __attribute__((__aligned__(8))); /* 0 24 */ u64 start; /* 24 8 */ u64 end; /* 32 8 */ _Bool erange_warned:1; /* 40: 0 1 */ _Bool priv:1; /* 40: 1 1 */ /* XXX 6 bits hole, try to pack */ /* XXX 3 bytes hole, try to pack */ u32 prot; /* 44 4 */ u32 flags; /* 48 4 */ /* XXX 4 bytes hole, try to pack */ u64 pgoff; /* 56 8 */ /* --- cacheline 1 boundary (64 bytes) --- */ u64 reloc; /* 64 8 */ u32 maj; /* 72 4 */ u32 min; /* 76 4 */ u64 ino; /* 80 8 */ u64 ino_generation; /* 88 8 */ u64 (*map_ip)(struct map *, u64); /* 96 8 */ u64 (*unmap_ip)(struct map *, u64); /* 104 8 */ struct dso * dso; /* 112 8 */ refcount_t refcnt; /* 120 4 */ /* size: 128, cachelines: 2, members: 17 */ /* sum members: 116, holes: 2, sum holes: 7 */ /* sum bitfield members: 2 bits, bit holes: 1, sum bit holes: 6 bits */ /* padding: 4 */ /* forced alignments: 1 */ } __attribute__((__aligned__(8))); $ Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-g5545pcq4ff0wr17tfb1piqt@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-18drm/i915: Protect request peeking with RCUChris Wilson
Since execlists_active() is no longer protected by the engine->active.lock, we need to protect the request pointer with RCU to prevent it from being freed as we evaluate whether or not we need to preempt. Fixes: df403069029d ("drm/i915/execlists: Lift process_csb() out of the irq-off spinlock") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191104090158.2959-2-chris@chris-wilson.co.uk (cherry picked from commit 7d148635253328dda7cfe55d57e3c828e9564427) Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> (cherry picked from commit 8eb4704b124cbd44f189709959137d77063ecfa1) (cherry picked from commit 7e27238e149ce4f00d9cd801fe3aa0ea55e986a2) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
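A sketch of the pattern described (rq_prio() is a stand-in for whatever evaluation the preemption check performs):

    struct i915_request *rq;
    int prio = INT_MIN;

    rcu_read_lock();    /* keep the peeked request from being freed under us */
    rq = execlists_active(&engine->execlists);
    if (rq)
            prio = rq_prio(rq);
    rcu_read_unlock();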
2019-11-18btrfs: record all roots for rename exchange on a subvolJosef Bacik
Testing with the new fsstress support for subvolumes uncovered a pretty bad problem with rename exchange on subvolumes. We're modifying two different subvolumes, but we only start the transaction on one of them, so the other one is not added to the dirty root list. This is caught by btrfs_cow_block() with a warning because the root has not been updated; however, if we do not modify this root again we'll end up pointing at an invalid root, because the root item is never updated. Fix this by making sure we add the destination root to the trans list, the same as we do with normal renames. This fixes the corruption. Fixes: cdd1fedf8261 ("btrfs: add support for RENAME_EXCHANGE and RENAME_WHITEOUT") CC: stable@vger.kernel.org # 4.9+ Reviewed-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18spi: mediatek: add SPI_CS_HIGH supportLuhua Xu
Change to use SPI_CS_HIGH to support SPI CS polarity setting for chips that support enhance_timing. Signed-off-by: Luhua Xu <luhua.xu@mediatek.com> Link: https://lore.kernel.org/r/1574053037-26721-2-git-send-email-luhua.xu@mediatek.com Signed-off-by: Mark Brown <broonie@kernel.org>
2019-11-18drm/i915/userptr: Try to acquire the page lock around set_page_dirty()Chris Wilson
set_page_dirty says: For pages with a mapping this should be done under the page lock for the benefit of asynchronous memory errors who prefer a consistent dirty state. This rule can be broken in some special cases, but should be better not to. Under those rules, it is only safe for us to use the plain set_page_dirty calls for shmemfs/anonymous memory. Userptr may be used with real mappings and so needs to use the locked version (set_page_dirty_lock). However, following a try_to_unmap() we may want to remove the userptr and so call put_pages(). However, try_to_unmap() acquires the page lock and so we must avoid recursively locking the pages ourselves -- which means that we cannot safely acquire the lock around set_page_dirty(). Since we can't be sure of the lock, we have to either risk skipping dirtying the page, or risk calling set_page_dirty() without a lock and so risk fs corruption. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=203317 Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=112012 Fixes: 5cc9ed4b9a7a ("drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl") References: cb6d7c7dc7ff ("drm/i915/userptr: Acquire the page lock around set_page_dirty()") References: 505a8ec7e11a ("Revert "drm/i915/userptr: Acquire the page lock around set_page_dirty()"") References: 6dcc693bc57f ("ext4: warn when page is dirtied without buffers") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191111133205.11590-1-chris@chris-wilson.co.uk (cherry picked from commit 0d4bbe3d407f79438dc4f87943db21f7134cfc65) Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> (cherry picked from commit cee7fb437edcdb2f9f8affa959e274997f5dca4d) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
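The resulting compromise looks roughly like this (a sketch of the approach, not the exact i915 hunk):

    if (trylock_page(page)) {
            set_page_dirty(page);
            unlock_page(page);
    } else {
            /* page already locked, e.g. by try_to_unmap(); dirty it
             * without the lock rather than deadlock or skip dirtying */
            set_page_dirty(page);
    }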
2019-11-18drm/i915/pmu: "Frequency" is reported as accumulated cyclesChris Wilson
We report "frequencies" (actual-frequency, requested-frequency) as the number of accumulated cycles so that the average frequency over that period may be determined by the user. This means the units we report to the user are Mcycles (or just M), not MHz. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191109105356.5273-1-chris@chris-wilson.co.uk (cherry picked from commit e88866ef02851c88fe95a4bb97820b94b4d46f36) Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> (cherry picked from commit a7d87b70d6da96c6772e50728c8b4e78e4cbfd55) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2019-11-18drm/i915: Preload LUTs if the hw isn't currently using themVille Syrjälä
The LUTs are single buffered so in order to program them without tearing we'd have to do it during vblank (actually to be 100% effective it has to happen between start of vblank and frame start). We have no proper mechanism for that at the moment so we just defer loading them after the vblank waits have happened. That is not quite sufficient (especially when committing multiple pipes whose vblanks don't line up) so the LUT load will often leak into the following frame causing tearing. However in case the hardware wasn't previously using the LUT we can preload it before setting the enable bit (which is double buffered so won't tear). Let's determine if we can do such preloading and make it happen. Slight variations between the hardware require some platform specifics in the checks. Hans is seeing ugly colored flashes on VLV/CHV machines (GPD win and Asus T100HA) when the gamma LUT gets loaded for the first time as the BIOS has left some junk in the LUT memory. v2: Deal with uapi vs. hw crtc state split s/GCM/CGM/ typo fix Cc: Hans de Goede <hdegoede@redhat.com> Fixes: 051a6d8d3ca0 ("drm/i915: Move LUT programming to happen after vblank waits") Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191030190815.7359-1-ville.syrjala@linux.intel.com Tested-by: Hans de Goede <hdegoede@redhat.com> Reviewed-by: Hans de Goede <hdegoede@redhat.com> (cherry picked from commit 0ccc42a2fd5107a7f58e62c8b35b61de9a70ce82) Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> (cherry picked from commit f77021372e2880237278e0ee57faadc077a8256a) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2019-11-18drm/i915: Don't oops in dumb_create ioctl if we have no crtcsVille Syrjälä
Make sure we have a crtc before probing its primary plane's max stride. Initially I thought we can't get this far without crtcs, but looks like we can via the dumb_create ioctl. Not sure if we shouldn't disable dumb buffer support entirely when we have no crtcs, but that would require some amount of work as the only thing currently being checked is dev->driver->dumb_create which we'd have to convert to some device specific dynamic thing. Cc: stable@vger.kernel.org Reported-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Fixes: aa5ca8b7421c ("drm/i915: Align dumb buffer stride to 4k to allow for gtt remapping") Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191106172349.11987-1-ville.syrjala@linux.intel.com Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> (cherry picked from commit baea9ffe64200033499a4955f431e315bb807899) Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> (cherry picked from commit aeec766133f99d45aad60d650de50fb382104d95) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2019-11-18Btrfs: fix block group remaining RO forever after error during device replaceFilipe Manana
When doing a device replace, while at scrub.c:scrub_enumerate_chunks(), we set the block group to RO mode and then wait for any ongoing writes into extents of the block group to complete. While doing that wait we overwrite the value of the variable 'ret' and can break out of the loop if an error happens without turning the block group back into RW mode. So what happens is the following: 1) btrfs_inc_block_group_ro() returns 0, meaning it set the block group to RO mode (its ->ro field set to 1 or incremented to some value > 1); 2) Then btrfs_wait_ordered_roots() returns a value > 0; 3) Then if either joining or committing the transaction fails, we break out of the loop without calling btrfs_dec_block_group_ro(), leaving the block group in RO mode forever. To fix this, just remove the code that waits for ongoing writes to extents of the block group, since it's not needed because in the initial setup phase of a device replace operation, before starting to find all chunks and their extents, we set the target device for replace while holding fs_info->dev_replace->rwsem, which ensures that after releasing that semaphore, any writes into the source device are made to the target device as well (__btrfs_map_block() guarantees that). So while at scrub_enumerate_chunks() we only need to worry about finding and copying extents (from the source device to the target device) that were written before we started the device replace operation. Fixes: f0e9b7d6401959 ("Btrfs: fix race setting block group readonly during device replace") Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
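The shape of the bug, heavily simplified (a hypothetical sketch; the real loop in scrub_enumerate_chunks() is much longer):

    ret = btrfs_inc_block_group_ro(cache);  /* 0 means the group is now RO */
    /* ... the wait for ordered extents overwrites 'ret' ... */
    ret = btrfs_commit_transaction(trans);
    if (ret)
            break;  /* bails out without btrfs_dec_block_group_ro(cache) */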
2019-11-18btrfs: scrub: Don't check free space before marking a block group ROQu Wenruo
[BUG] When running btrfs/072 with only one online CPU, it has a pretty high chance to fail: btrfs/072 12s ... _check_dmesg: something found in dmesg (see xfstests-dev/results//btrfs/072.dmesg) - output mismatch (see xfstests-dev/results//btrfs/072.out.bad) --- tests/btrfs/072.out 2019-10-22 15:18:14.008965340 +0800 +++ /xfstests-dev/results//btrfs/072.out.bad 2019-11-14 15:56:45.877152240 +0800 @@ -1,2 +1,3 @@ QA output created by 072 Silence is golden +Scrub find errors in "-m dup -d single" test ... And with the following call trace: BTRFS info (device dm-5): scrub: started on devid 1 ------------[ cut here ]------------ BTRFS: Transaction aborted (error -27) WARNING: CPU: 0 PID: 55087 at fs/btrfs/block-group.c:1890 btrfs_create_pending_block_groups+0x3e6/0x470 [btrfs] CPU: 0 PID: 55087 Comm: btrfs Tainted: G W O 5.4.0-rc1-custom+ #13 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 RIP: 0010:btrfs_create_pending_block_groups+0x3e6/0x470 [btrfs] Call Trace: __btrfs_end_transaction+0xdb/0x310 [btrfs] btrfs_end_transaction+0x10/0x20 [btrfs] btrfs_inc_block_group_ro+0x1c9/0x210 [btrfs] scrub_enumerate_chunks+0x264/0x940 [btrfs] btrfs_scrub_dev+0x45c/0x8f0 [btrfs] btrfs_ioctl+0x31a1/0x3fb0 [btrfs] do_vfs_ioctl+0x636/0xaa0 ksys_ioctl+0x67/0x90 __x64_sys_ioctl+0x43/0x50 do_syscall_64+0x79/0xe0 entry_SYSCALL_64_after_hwframe+0x49/0xbe ---[ end trace 166c865cec7688e7 ]--- [CAUSE] The error number -27 is -EFBIG, returned from the following call chain: btrfs_end_transaction() |- __btrfs_end_transaction() |- btrfs_create_pending_block_groups() |- btrfs_finish_chunk_alloc() |- btrfs_add_system_chunk() This happens because we have used up all space of btrfs_super_block::sys_chunk_array. The root cause is that we have the following bad loop of creating tons of system chunks: 1. The only SYSTEM chunk is being scrubbed It's very common to have only one SYSTEM chunk. 2. New SYSTEM bg will be allocated btrfs_inc_block_group_ro() will check if we have enough space after marking the current bg RO; if not, it allocates a new chunk. 3. New SYSTEM bg is still empty, will be reclaimed During the reclaim, we will mark it RO again. 4. That newly allocated empty SYSTEM bg gets scrubbed We go back to step 2, as the bg is already marked RO but not cleaned up yet. If the cleaner kthread doesn't get executed fast enough (e.g. only one CPU), then we will get more and more empty SYSTEM chunks, using up all the space of btrfs_super_block::sys_chunk_array. [FIX] Since scrub/dev-replace doesn't always need to allocate new extents, especially chunk tree extents, we don't really need to do chunk pre-allocation. To break the above spiral, here we introduce a new parameter to btrfs_inc_block_group_ro(), @do_chunk_alloc, which indicates whether we need extra chunk pre-allocation. For relocation, we pass @do_chunk_alloc=true, while for scrub, we pass @do_chunk_alloc=false. This should keep unnecessary empty chunks from popping up for scrub. Also, since there are two parameters for btrfs_inc_block_group_ro(), add more comments for it. Reviewed-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
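A sketch of the resulting interface and its two kinds of callers (signature reconstructed from the description above; the struct name follows the pre-rename convention in this tree):

    int btrfs_inc_block_group_ro(struct btrfs_block_group_cache *cache,
                                 bool do_chunk_alloc);

    /* relocation: the chunk tree may grow, so pre-allocate */
    ret = btrfs_inc_block_group_ro(rc->block_group, true);

    /* scrub: no new chunk tree extents are needed */
    ret = btrfs_inc_block_group_ro(cache, false);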
2019-11-18btrfs: change btrfs_fs_devices::rotating to boolJohannes Thumshirn
struct btrfs_fs_devices::rotating currently is declared as an integer variable but only used as a boolean. Change the variable definition to bool and update the code touching it to set 'true' and 'false'. Reviewed-by: Qu Wenruo <wqu@suse.com> Reviewed-by: Anand Jain <anand.jain@oracle.com> Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: change btrfs_fs_devices::seeding to boolJohannes Thumshirn
struct btrfs_fs_devices::seeding currently is declared as an integer variable but only used as a boolean. Change the variable definition to bool and update the code touching it to set 'true' and 'false'. Reviewed-by: Qu Wenruo <wqu@suse.com> Reviewed-by: Anand Jain <anand.jain@oracle.com> Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
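The pattern for this and the previous commit, sketched:

    struct btrfs_fs_devices {
            /* ... */
            bool seeding;    /* was: int seeding; */
            bool rotating;   /* was: int rotating; */
    };

    fs_devices->seeding = true;     /* was: = 1 */
    fs_devices->rotating = false;   /* was: = 0 */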
2019-11-18btrfs: rename btrfs_block_group_cacheDavid Sterba
The type name is misleading, a single entry is named 'cache' while this normally means a collection of objects. Rename that everywhere. Also the identifier was quite long, making function prototypes harder to format. Suggested-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: block-group: Reuse the item key from caller of read_one_block_group()Qu Wenruo
For read_one_block_group(), its only caller has already got the item key to search the next block group item. So we can use that key directly without doing our own conversion on the stack. Also, since that key used in btrfs_read_block_groups() is vital for block group item search, add the 'const' keyword for that parameter to prevent read_one_block_group() from modifying it. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
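The constified signature then looks roughly like this (reconstructed from the description; the other parameters are assumptions):

    static int read_one_block_group(struct btrfs_fs_info *info,
                                    struct btrfs_path *path,
                                    const struct btrfs_key *key,
                                    int need_clear);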
2019-11-18btrfs: block-group: Refactor btrfs_read_block_groups()Qu Wenruo
Refactor the work inside the loop of btrfs_read_block_groups() into one separate function, read_one_block_group(). This allows read_one_block_group to be reused for the later BG_TREE feature. The refactor does the following extra fix: - Use btrfs_fs_incompat() to replace an open-coded feature check Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Anand Jain <anand.jain@oracle.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: document extent buffer lockingDavid Sterba
Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: access eb::blocking_writers according to ACCESS_ONCE policiesDavid Sterba
A nice writeup of the LKMM (Linux Kernel Memory Model) rules for access once policies can be found here https://lwn.net/Articles/799218/#Access-Marking%20Policies . The locked and unlocked access to eb::blocking_writers should be annotated accordingly, following this: Writes: - locked write must use ONCE, may use plain read - unlocked write must use ONCE Reads: - unlocked read must use ONCE - locked read may use plain read iff not mixed with unlocked read - unlocked read then locked must use ONCE There's one difference on the assembly level, where btrfs_tree_read_lock_atomic and btrfs_try_tree_read_lock used the cached value and did not reevaluate it after taking the lock. This could have missed some opportunities to take the lock in case blocking writers changed between the calls, but the window is just a few instructions long. As this is in try-lock, the callers handle that. Signed-off-by: David Sterba <dsterba@suse.com>
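Applied to eb->blocking_writers, the rules above translate to fragments like these (a sketch):

    /* unlocked write */
    WRITE_ONCE(eb->blocking_writers, 1);

    /* unlocked read */
    if (READ_ONCE(eb->blocking_writers) == 0)
            return;

    /* locked read, not mixed with unlocked reads: plain access is allowed */
    if (eb->blocking_writers == 0)
            return;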
2019-11-18btrfs: set blocking_writers directly, no increment or decrementDavid Sterba
The increment and decrement was inherited from previous version that used atomics, switched in commit 06297d8cefca ("btrfs: switch extent_buffer blocking_writers from atomic to int"). The only possible values are 0 and 1 so we can set them directly. The generated assembly (gcc 9.x) did the direct value assignment in btrfs_set_lock_blocking_write (asm diff after change in 06297d8cefca): 5d: test %eax,%eax 5f: je 62 <btrfs_set_lock_blocking_write+0x22> 61: retq - 62: lock incl 0x44(%rdi) - 66: add $0x50,%rdi - 6a: jmpq 6f <btrfs_set_lock_blocking_write+0x2f> + 62: movl $0x1,0x44(%rdi) + 69: add $0x50,%rdi + 6d: jmpq 72 <btrfs_set_lock_blocking_write+0x32> The part in btrfs_tree_unlock did a decrement because BUG_ON(blockers > 1) is probably not a strong hint for the compiler, but otherwise the output looks safe: - lock decl 0x44(%rdi) + sub $0x1,%eax + mov %eax,0x44(%rdi) Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: merge blocking_writers branches in btrfs_tree_read_lockDavid Sterba
There are two ifs that use eb::blocking_writers. As this is a variable modified inside and outside of locks, we could minimize the number of accesses to avoid problems with getting different results at different times. The access here is locked so this can only race with btrfs_tree_unlock that sets blocking_writers to 0 without lock and unsets the lock owner. The first branch is taken only if the same thread already holds the lock; the second 'if' checks for blocking writers. Here we'd either unlock and wait, or proceed. Both are valid states of the locking protocol. Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: drop incompat bit for raid1c34 after last block group is goneDavid Sterba
When there are no raid1c3 or raid1c4 block groups left after balance (either convert or with other filters applied), remove the incompat bit. This is already done for RAID56, do the same for RAID1C34. Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: add incompat for raid1 with 3, 4 copiesDavid Sterba
The new raid1c3 and raid1c4 profiles are backward incompatible and the name shall be 'raid1c34'; the status can be found in the global supported features in /sys/fs/btrfs/features or in the per-filesystem directory. Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: add support for 4-copy replication (raid1c4)David Sterba
Add new block group profile to store 4 copies in a similar way that current RAID1 does. The profile attributes and constraints are defined in the raid table and used by the same code that already handles the 2- and 3-copy RAID1. The minimum number of devices is 4, the maximum number of devices/chunks that can be lost/damaged is 3. There is no comparable traditional RAID level, the profile is added for future needs to accompany triple-parity and beyond. Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: add support for 3-copy replication (raid1c3)David Sterba
Add new block group profile to store 3 copies in a similar way that current RAID1 does. The profile attributes and constraints are defined in the raid table and used by the same code that already handles the 2-copy RAID1. The minimum number of devices is 3, the maximum number of devices/chunks that can be lost/damaged is 2. Like RAID6 but with 33% space utilization. Signed-off-by: David Sterba <dsterba@suse.com>
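A sketch of the kind of raid table entry this adds (the member names and values follow the constraints stated above, but the exact table layout is an assumption):

    [BTRFS_RAID_RAID1C3] = {
            .sub_stripes        = 1,
            .dev_stripes        = 1,
            .devs_max           = 3,
            .devs_min           = 3,
            .tolerated_failures = 2,
            .devs_increment     = 3,
            .ncopies            = 3,
            .raid_name          = "raid1c3",
    },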
2019-11-18btrfs: sink write flags to cow_file_range_asyncDavid Sterba
In commit "Btrfs: use REQ_CGROUP_PUNT for worker thread submitted bios", cow_file_range_async gained wbc as a parameter and this makes passing write flags redundant. Set it inside the function and remove the parameter. Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: sink write_flags to __extent_writepage_ioDavid Sterba
__extent_writepage reads write flags from wbc and passes both to __extent_writepage_io. This makes write_flags redundant and we can remove it. Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18Btrfs: send, skip backreference walking for extents with many referencesFilipe Manana
Backreference walking, which is used by send to figure if it can issue clone operations instead of write operations, can be very slow and use too much memory when extents have many references. This change simply skips backreference walking when an extent has more than 64 references, in which case we fall back to a write operation instead of a clone operation. This limit is conservative and in practice I observed no significant slowdown with up to 100 references and still low memory usage up to that limit. This is a temporary workaround until there are speedups in the backref walking code, and as such it does not attempt to add extra interfaces or knobs to tweak the threshold. Reported-by: Atemu <atemu.main@gmail.com> Link: https://lore.kernel.org/linux-btrfs/CAE4GHgkvqVADtS4AzcQJxo0Q1jKQgKaW3JGp3SGdoinVo=C9eQ@mail.gmail.com/T/#me55dc0987f9cc2acaa54372ce0492c65782be3fa CC: stable@vger.kernel.org # 4.4+ Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
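A sketch of the threshold check (the constant name is taken from the approach described; treat it as illustrative):

    #define SEND_MAX_EXTENT_REFS 64

    /* in the clone-source lookup: too many refs makes backref walking
     * prohibitively slow, so fall back to a plain write */
    if (refs > SEND_MAX_EXTENT_REFS)
            return -ENOENT;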
2019-11-18Btrfs: send, allow clone operations within the same fileFilipe Manana
For send we currently skip clone operations when the source and destination files are the same. This is so because clone didn't support this case in its early days, but support for it was added back in May 2013 by commit a96fbc72884fcb ("Btrfs: allow file data clone within a file"). This change adds support for it. Example: $ mkfs.btrfs -f /dev/sdd $ mount /dev/sdd /mnt/sdd $ xfs_io -f -c "pwrite -S 0xab -b 64K 0 64K" /mnt/sdd/foobar $ xfs_io -c "reflink /mnt/sdd/foobar 0 64K 64K" /mnt/sdd/foobar $ btrfs subvolume snapshot -r /mnt/sdd /mnt/sdd/snap $ mkfs.btrfs -f /dev/sde $ mount /dev/sde /mnt/sde $ btrfs send /mnt/sdd/snap | btrfs receive /mnt/sde Without this change file foobar at the destination has a single 128Kb extent: $ filefrag -v /mnt/sde/snap/foobar Filesystem type is: 9123683e File size of /mnt/sde/snap/foobar is 131072 (32 blocks of 4096 bytes) ext: logical_offset: physical_offset: length: expected: flags: 0: 0.. 31: 0.. 31: 32: last,unknown_loc,delalloc,eof /mnt/sde/snap/foobar: 1 extent found With this we get a single 64Kb extent that is shared at file offsets 0 and 64K, just like in the source filesystem: $ filefrag -v /mnt/sde/snap/foobar Filesystem type is: 9123683e File size of /mnt/sde/snap/foobar is 131072 (32 blocks of 4096 bytes) ext: logical_offset: physical_offset: length: expected: flags: 0: 0.. 15: 3328.. 3343: 16: shared 1: 16.. 31: 3328.. 3343: 16: 3344: last,shared,eof /mnt/sde/snap/foobar: 2 extents found Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: Ensure we trim ranges across block group boundaryQu Wenruo
[BUG] When deleting large files (which cross block group boundaries) with the discard mount option, we find that some btrfs_discard_extent() calls only trim part of their space, not the whole range: btrfs_discard_extent: type=0x1 start=19626196992 len=2144530432 trimmed=1073741824 ratio=50% type: bbio->map_type, in above case, it's SINGLE DATA. start: Logical address of this trim len: Logical length of this trim trimmed: Physically trimmed bytes ratio: trimmed / len Thus leaving some unused space not discarded. [CAUSE] When the discard mount option is specified, after a transaction is fully committed (super block written to disk), we begin to cleanup pinned extents in the following call chain: btrfs_commit_transaction() |- btrfs_finish_extent_commit() |- find_first_extent_bit(unpin, 0, &start, &end, EXTENT_DIRTY); |- btrfs_discard_extent() However, pinned extents are recorded in an extent_io_tree, which can merge adjacent extent states. When a large file gets deleted and it has adjacent file extents across a block group boundary, we will get a large merged range like this: |<--- BG1 --->|<--- BG2 --->| |//////|<-- Range to discard --->|/////| To discard that range, we have the following calls: btrfs_discard_extent() |- btrfs_map_block() | Returned bbio will end at BG1's end. As btrfs_map_block() | never returns a result across block group boundaries. |- btrfs_issue_discard() Issue discard for each stripe. So we will only discard the range in BG1, not the remaining part in BG2. Furthermore, this bug is not reliably observed; for the above case, if there is no other extent in BG2, BG2 will be empty and btrfs will trim all space of BG2, covering up the bug. [FIX] - Allow __btrfs_map_block_for_discard() to modify the @length parameter btrfs_map_block() uses its @length parameter to notify the caller how many bytes are mapped in the current call. With __btrfs_map_block_for_discard() also modifying the @length, btrfs_discard_extent() now understands when to do an extra trim. - Call btrfs_map_block() in a loop until we hit the range end Since we now know how many bytes are mapped each time, we can iterate through each block group boundary and issue the correct trim for each range. Reviewed-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: Nikolay Borisov <nborisov@suse.com> Tested-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
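The fixed caller then loops roughly like this (a sketch, under the assumption that btrfs_map_block() now trims @cur_len to the part it mapped):

    u64 cur = start;
    const u64 end = start + len;

    while (cur < end) {
            u64 cur_len = end - cur;
            struct btrfs_bio *bbio = NULL;

            /* returns with cur_len trimmed to the mapped part */
            ret = btrfs_map_block(fs_info, BTRFS_MAP_DISCARD, cur,
                                  &cur_len, &bbio, 0);
            if (ret)
                    break;
            /* ... issue discard for each stripe in bbio ... */
            btrfs_put_bbio(bbio);
            cur += cur_len;
    }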
2019-11-18btrfs: volumes: Use more straightforward way to calculate map lengthQu Wenruo
The old code goes: offset = logical - em->start; length = min_t(u64, em->len - offset, length); Since the @length calculation depends on @offset, it can take the reader several more seconds to see that it's just the same code as: offset = logical - em->start; length = min_t(u64, em->start + em->len - logical, length); Use the above code to make the length calculation independent from the other variable, thus slightly increasing the readability. Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: tree-checker: Check item size before reading file extent typeQu Wenruo
In check_extent_data_item(), we read the file extent type without verifying that the item size is valid. Add such a check to ensure the file extent type we read is correct. The check cannot be fully accurate since it needs to cover both inline and regular extents, so it only checks whether the item size is larger than or equal to the inline header. The existing size checks on inline/regular extents are therefore still needed. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
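A sketch of the added guard (the error message text is illustrative):

    if (btrfs_item_size_nr(leaf, slot) < BTRFS_FILE_EXTENT_INLINE_DATA_START) {
            file_extent_err(leaf, slot,
                            "invalid item size, smaller than the inline extent header");
            return -EUCLEAN;
    }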
2019-11-18btrfs: clean up locking name in scrub_enumerate_chunks()Dan Carpenter
The "&fs_info->dev_replace.rwsem" and "&dev_replace->rwsem" refer to the same lock but Smatch is not clever enough to figure that out so it leads to static checker warnings. It's better to use it consistently anyway. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: Streamline btrfs_fs_info::backup_root_index semanticsNikolay Borisov
The backup_root_index member stores the index at which the backup root should be saved upon next transaction commit. However, there is a small deviation from this behavior in the form of a check in backup_super_roots which checks if current root generation equals to the generation of the previous root. This can trigger in the following scenario: slot0: gen-2 slot1: gen-1 slot2: gen slot3: unused Now suppose slot3 (which is also the root specified in the super block) is corrupted hence init_tree_roots chooses to use the backup root at slot2, meaning read_backup_root will read slot2 and assign the superblock generation to gen-1. Despite this backup_root_index will point at slot3 because its init happens in init_backup_root_slot, long before any parsing of the backup roots occur. Then on next transaction start, gen-1 will be incremented by 1 making the root's generation equal gen. Subsequently, on transaction commit the following check triggers: if (btrfs_backup_tree_root_gen(root_backup) == btrfs_header_generation(info->tree_root->node)) This causes the 'next_backup', which is the index at which the backup is going to be written to, to set to last_backup, which will be slot2. All of this is a very confusing way of expressing the following invariant: Always write a backup root at the index following the last used backup root. This commit streamlines this logic by setting backup_root_index to the next index after the one used for mount. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
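Expressed directly, the invariant reduces to roughly one assignment (a sketch; 'backup_index' stands for the slot used during mount):

    fs_info->backup_root_index = (backup_index + 1) % BTRFS_NUM_BACKUP_ROOTS;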
2019-11-18btrfs: Rename find_oldest_super_backup to init_backup_root_slotNikolay Borisov
The old name was an awful misnomer because it didn't really find the oldest super backup per se but rather its slot. For example if we have: slot0: gen - 2 slot1: gen - 1 slot2: gen slot3: empty init_backup_root_slot will return slot3 and not slot0. The new name is more appropriate since the function doesn't care whether there is a valid backup in the returned slot or not. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: Remove unused next_root_backup functionNikolay Borisov
This function has been superseded by previous commits and is no longer used so just remove it. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: Don't use objectid_mutex during mountNikolay Borisov
Since the filesystem is not well formed and no trees are loaded it's pointless holding the objectid_mutex. Just remove its usage. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: Factor out tree roots initialization during mountNikolay Borisov
The code responsible for reading and initializing tree roots is scattered in open_ctree among 2 labels, emulating a loop. This is rather confusing to reason about. Instead, factor the code to a new function, init_tree_roots which implements the same logical flow. There are a couple of notable differences, namely: * Instead of using next_backup_root it's using the newly introduced read_backup_root. * If read_backup_root returns an error init_tree_roots propagates the error and there is no special handling of that case e.g. the code jumps straight to 'fail_tree_roots' label. The old code, however, was (erroneously) jumping to 'fail_block_groups' label if next_backup_root did fail, this was unnecessary since the tree roots init logic doesn't modify the state of block groups. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: Add read_backup_rootNikolay Borisov
This function will replace next_root_backup with a much saner/cleaner interface. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: Remove newest_gen argument from find_oldest_super_backupNikolay Borisov
It's no longer needed following cleanups around find_newest_super_backup. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: Cleanup and simplify find_newest_super_backupNikolay Borisov
Backup roots are always written in a circular manner. By definition we can only ever have 1 backup root whose generation equals that of the superblock. Hence, the 'if' in the for loop will trigger at most once. This is sufficient to return the newest backup root. Furthermore the newest_gen parameter is always set to the generation of the superblock. This value can be obtained from the fs_info. This patch removes the unnecessary code dealing with the wraparound case and makes 'newest_gen' a local variable. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
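The simplified function then reduces to roughly the following (a sketch from the description; accessor names assumed):

    static int find_newest_super_backup(struct btrfs_fs_info *info)
    {
            const u64 newest_gen = btrfs_super_generation(info->super_copy);
            struct btrfs_root_backup *root_backup;
            int i;

            for (i = 0; i < BTRFS_NUM_BACKUP_ROOTS; i++) {
                    root_backup = info->super_copy->super_roots + i;
                    if (btrfs_backup_tree_root_gen(root_backup) == newest_gen)
                            return i;
            }
            return -EINVAL;
    }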
2019-11-18Btrfs: remove unnecessary delalloc mutex for inodesFilipe Manana
The inode delalloc mutex was added a long time ago by commit f248679e86fea ("Btrfs: add a delalloc mutex to inodes for delalloc reservations"), and the reason for its introduction is not very clear from the change log. It claims it solves bogus warnings from lockdep; however, it lacks an example report/warning from lockdep, or any explanation. Since we have enough concurrency protection from the locks of the space info and block reserve objects, and such lockdep warnings don't seem to exist anymore (at least on a 5.3 kernel I couldn't get them with fstests, ltp, fs_mark, etc), remove it, simplifying things a bit and decreasing the size of the btrfs_inode structure. With some quick fio tests doing direct IO and mmap writes I couldn't observe any significant performance increase either (direct IO writes that don't increase the file's size don't hold the inode's lock for their entire duration and mmap writes don't hold the inode's lock at all), which are the only type of writes that could see any performance gain due to less serialization. Review feedback from Josef: The problem was taking the i_mutex in mmap, which is how I was protecting delalloc reservations originally. The delalloc mutex didn't come with all of the other dependencies. That's what the lockdep messages were about, removing the lock isn't going to make them appear again. We _had_ to lock around this because we used to do tricks to keep from over-reserving, and if we didn't serialize delalloc reservations we'd end up with ugly accounting problems when we tried to clean things up. However with my recentish changes this isn't the case anymore. Every operation is responsible for reserving its space, and then adding it to the inode. Then cleaning up is straightforward and can't be mucked up by other users. So we no longer need the delalloc mutex to save us from ourselves. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>