RCU Torture Test Operation


CONFIG_RCU_TORTURE_TEST

The CONFIG_RCU_TORTURE_TEST config option is available for all RCU
implementations.  It creates an rcutorture kernel module that can
be loaded to run a torture test.  The test periodically outputs
status messages via printk(), which can be examined via the dmesg
command (perhaps grepping for "torture").  The test is started
when the module is loaded, and stops when the module is unloaded.

Module parameters are documented in
Documentation/admin-guide/kernel-parameters.txt, where they appear with
the "rcutorture." prefix; omit that prefix when passing the parameters
directly to modprobe.
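
For example, assuming a kernel built with CONFIG_RCU_TORTURE_TEST=m,
the following starts the test with eight reader threads and a 60-second
statistics interval (illustrative values for two of the parameters that
appear in the output below):

	modprobe rcutorture nreaders=8 stat_interval=60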

OUTPUT

The statistics output is as follows:

	rcu-torture:--- Start of test: nreaders=16 nfakewriters=4 stat_interval=30 verbose=0 test_no_idle_hz=1 shuffle_interval=3 stutter=5 irqreader=1 fqs_duration=0 fqs_holdoff=0 fqs_stutter=3 test_boost=1/0 test_boost_interval=7 test_boost_duration=4
	rcu-torture: rtc:           (null) ver: 155441 tfle: 0 rta: 155441 rtaf: 8884 rtf: 155440 rtmbe: 0 rtbe: 0 rtbke: 0 rtbre: 0 rtbf: 0 rtb: 0 nt: 3055767
	rcu-torture: Reader Pipe:  727860534 34213 0 0 0 0 0 0 0 0 0
	rcu-torture: Reader Batch:  727877838 17003 0 0 0 0 0 0 0 0 0
	rcu-torture: Free-Block Circulation:  155440 155440 155440 155440 155440 155440 155440 155440 155440 155440 0
	rcu-torture:--- End of test: SUCCESS: nreaders=16 nfakewriters=4 stat_interval=30 verbose=0 test_no_idle_hz=1 shuffle_interval=3 stutter=5 irqreader=1 fqs_duration=0 fqs_holdoff=0 fqs_stutter=3 test_boost=1/0 test_boost_interval=7 test_boost_duration=4

The command "dmesg | grep torture:" will extract this information on
most systems.  On more esoteric configurations, it may be necessary to
use other commands to access the output of the printk()s used by
the RCU torture test.  The printk()s use KERN_ALERT, so they should
be evident.  ;-)

The first and last lines show the rcutorture module parameters, and the
last line shows either "SUCCESS" or "FAILURE", based on rcutorture's
automatic determination as to whether RCU operated correctly.
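
For example, the following command (a sketch; the "rcu-torture:" prefix
varies with the torture type under test) extracts just the verdict line:

	dmesg | grep "End of test"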

The entries are as follows:

o	"rtc": The hexadecimal address of the structure currently visible
	to readers.

o	"ver": The number of times since boot that the RCU writer task
	has changed the structure visible to readers.

o	"tfle": If non-zero, indicates that the "torture freelist"
	containing structures to be placed into the "rtc" area is empty.
	This condition is important, since it can fool you into thinking
	that RCU is working when it is not.  :-/

o	"rta": Number of structures allocated from the torture freelist.

o	"rtaf": Number of allocations from the torture freelist that have
	failed due to the list being empty.  It is not unusual for this
	to be non-zero, but it is bad for it to be a large fraction of
	the value indicated by "rta".

o	"rtf": Number of frees into the torture freelist.

o	"rtmbe": A non-zero value indicates that rcutorture believes that
	rcu_assign_pointer() and rcu_dereference() are not working
	correctly.  This value should be zero.

o	"rtbe": A non-zero value indicates that one of the rcu_barrier()
	family of functions is not working correctly.

o	"rtbke": rcutorture was unable to create the real-time kthreads
	used to force RCU priority inversion.  This value should be zero.

o	"rtbre": Although rcutorture successfully created the kthreads
	used to force RCU priority inversion, it was unable to set them
	to the real-time priority level of 1.  This value should be zero.

o	"rtbf": The number of times that RCU priority boosting failed
	to resolve RCU priority inversion.

o	"rtb": The number of times that rcutorture attempted to force
	an RCU priority inversion condition.  If you are testing RCU
	priority boosting via the "test_boost" module parameter, this
	value should be non-zero.

o	"nt": The number of times rcutorture ran RCU read-side code from
	within a timer handler.  This value should be non-zero only
	if you specified the "irqreader" module parameter.

o	"Reader Pipe": Histogram of "ages" of structures seen by readers.
	If any entries past the first two are non-zero, RCU is broken.
	And rcutorture prints the error flag string "!!!" to make sure
	you notice.  The age of a newly allocated structure is zero;
	it becomes one when removed from reader visibility, is
	incremented once per subsequent grace period, and the structure
	is freed after passing through (RCU_TORTURE_PIPE_LEN-2) grace
	periods.  A sketch of a script automating this check appears
	after this list.

	The output displayed above was taken from a correctly working
	RCU.  If you want to see what it looks like when broken, break
	it yourself.  ;-)

o	"Reader Batch": Another histogram of "ages" of structures seen
	by readers, but in terms of counter flips (or batches) rather
	than in terms of grace periods.  The legal number of non-zero
	entries is again two.  The reason for this separate view is that
	it is sometimes easier to get the third entry to show up in the
	"Reader Batch" list than in the "Reader Pipe" list.

o	"Free-Block Circulation": Shows the number of torture structures
	that have reached a given point in the pipeline.  The first element
	should closely correspond to the number of structures allocated,
	the second to the number that have been removed from reader view,
	and each remaining entry other than the last to the number of
	structures that have passed through the corresponding number
	of grace periods.  The last entry should be zero,
	as it is only incremented if a torture structure's counter
	somehow gets incremented farther than it should.
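
The following sketch automates the "Reader Pipe" check described above,
assuming that the statistics are still in the kernel log and that the
histogram format matches the example output shown earlier:

	# Complain if any Reader Pipe entry past the first two is non-zero.
	dmesg | grep "Reader Pipe" | tail -1 |
		sed -e 's/^.*Reader Pipe: *//' |
		awk '{ for (i = 3; i <= NF; i++) if ($i != 0) { print "RCU is broken!"; exit 1 } }'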

Different implementations of RCU can provide implementation-specific
additional information.  For example, Tree SRCU provides the following
additional line:

	srcud-torture: Tree SRCU per-CPU(idx=0): 0(35,-21) 1(-4,24) 2(1,1) 3(-26,20) 4(28,-47) 5(-9,4) 6(-10,14) 7(-14,11) T(1,6)

This line shows the per-CPU counter state, in this case for Tree SRCU
using a dynamically allocated srcu_struct (hence "srcud-" rather than
"srcu-").  The numbers in parentheses are the values of the "old" and
"current" counters for the corresponding CPU.  The "idx" value maps the
"old" and "current" values to the underlying array, and is useful for
debugging.  The final "T" entry contains the totals of the counters.


USAGE ON SPECIFIC KERNEL BUILDS

It is sometimes desirable to torture RCU on a specific kernel build,
for example, when preparing to put that kernel build into production.
In that case, the kernel should be built with CONFIG_RCU_TORTURE_TEST=m
so that the test can be started using modprobe and terminated using rmmod.

For example, the following script may be used to torture RCU:

	#!/bin/sh

	# Run rcutorture for one hour, then extract its statistics
	# and its success/failure verdict from the kernel log.
	modprobe rcutorture
	sleep 3600
	rmmod rcutorture
	dmesg | grep torture:

The output can be manually inspected for the error flag of "!!!".
One could of course create a more elaborate script that automatically
checked for such errors.  The "rmmod" command forces a "SUCCESS",
"FAILURE", or "RCU_HOTPLUG" indication to be printk()ed.  The first
two are self-explanatory, while the last indicates that while there
were no RCU failures, CPU-hotplug problems were detected.
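
Such a script might look as follows (a sketch; the "!!!" and "End of
test" patterns are taken from the output formats described above):

	#!/bin/sh

	# Run rcutorture for an hour, then scan the kernel log for
	# the "!!!" error flag and print the end-of-test verdict.
	modprobe rcutorture
	sleep 3600
	rmmod rcutorture
	if dmesg | grep torture: | grep -q '!!!'
	then
		echo rcutorture detected errors
		exit 1
	fi
	dmesg | grep 'End of test'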


USAGE ON MAINLINE KERNELS

When using rcutorture to test changes to RCU itself, it is often
necessary to build a number of kernels in order to test that change
across a broad range of combinations of the relevant Kconfig options
and of the relevant kernel boot parameters.  In this situation, use
of modprobe and rmmod can be quite time-consuming and error-prone.

Therefore, the tools/testing/selftests/rcutorture/bin/kvm.sh
script is available for mainline testing for x86, arm64, and
powerpc.  By default, it will run the series of tests specified by
tools/testing/selftests/rcutorture/configs/rcu/CFLIST, with each test
running for 30 minutes within a guest OS using a minimal userspace
supplied by an automatically generated initrd.  After the tests are
complete, the resulting build products and console output are analyzed
for errors and the results of the runs are summarized.
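
For example, the following invocation runs the full default series of
tests (a sketch assuming that the command is issued from the top of the
kernel source tree):

	tools/testing/selftests/rcutorture/bin/kvm.sh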

On larger systems, rcutorture testing can be accelerated by passing the
--cpus argument to kvm.sh.  For example, on a 64-CPU system, "--cpus 43"
would use up to 43 CPUs to run tests concurrently, which as of v5.4 would
complete all the scenarios in two batches, reducing the time to complete
from about eight hours to about one hour (not counting the time to build
the sixteen kernels).  The "--dryrun sched" argument will not run tests,
but rather tell you how the tests would be scheduled into batches.  This
can be useful when working out how many CPUs to specify in the --cpus
argument.
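
For example, the following sketch shows how the default scenarios would
be scheduled across 43 CPUs without actually running any tests:

	tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 43 --dryrun sched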

Not all changes require that all scenarios be run.  For example, a change
to Tree SRCU might run only the SRCU-N and SRCU-P scenarios using the
--configs argument to kvm.sh as follows:  "--configs 'SRCU-N SRCU-P'".
Large systems can run multiple copies of the full set of scenarios;
for example, a system with 448 hardware threads can run five instances
of the full set concurrently.  To make this happen:

	kvm.sh --cpus 448 --configs '5*CFLIST'

Alternatively, such a system can run 56 concurrent instances of a single
eight-CPU scenario:

	kvm.sh --cpus 448 --configs '56*TREE04'

Or 28 concurrent instances of each of two eight-CPU scenarios:

	kvm.sh --cpus 448 --configs '28*TREE03 28*TREE04'

Of course, each concurrent instance will use memory, which can be
limited using the --memory argument, which defaults to 512M.  Small
values for memory may require disabling the callback-flooding tests
using the --bootargs parameter discussed below.

Sometimes additional debugging is useful, and in such cases the --kconfig
parameter to kvm.sh may be used, for example, "--kconfig 'CONFIG_KASAN=y'".
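
For example, the following sketch combines --kconfig with the --configs
argument described earlier in order to run only the TREE03 scenario with
KASAN enabled:

	tools/testing/selftests/rcutorture/bin/kvm.sh --configs TREE03 \
		--kconfig 'CONFIG_KASAN=y'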

Kernel boot arguments can also be supplied, for example, to control
rcutorture's module parameters.  For example, to test a change to RCU's
CPU stall-warning code, use "--bootargs 'rcutorture.stall_cpu=30'".
This will of course result in the scripting reporting a failure, namely
the resulting RCU CPU stall warning.  As noted above, reducing memory may
require disabling rcutorture's callback-flooding tests:

	kvm.sh --cpus 448 --configs '56*TREE04' --memory 128M \
		--bootargs 'rcutorture.fwd_progress=0'

Sometimes all that is needed is a full set of kernel builds.  This is
what the --buildonly argument does.

Finally, the --trust-make argument allows each kernel build to reuse what
it can from the previous kernel build.
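
For example, the following sketch combines these two arguments to
rebuild the default set of kernels without running them, reusing
earlier build products where make deems it safe:

	tools/testing/selftests/rcutorture/bin/kvm.sh --buildonly --trust-make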

Additional, more arcane arguments are documented in the source code of
the kvm.sh script.

If a run contains failures, the number of buildtime and runtime failures
is listed at the end of the kvm.sh output, which you really should redirect
to a file.  The build products and console output of each run are kept in
tools/testing/selftests/rcutorture/res in timestamped directories.  A
given directory can be supplied to kvm-find-errors.sh in order to have
it cycle you through summaries of errors and full error logs.  For example:

	tools/testing/selftests/rcutorture/bin/kvm-find-errors.sh \
		tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23

However, it is often more convenient to access the files directly.
Files pertaining to all scenarios in a run reside in the top-level
directory (2020.01.20-15.54.23 in the example above), while per-scenario
files reside in a subdirectory named after the scenario (for example,
"TREE04").  If a given scenario ran more than once (as in "--configs
'56*TREE04'" above), the directories corresponding to the second and
subsequent runs of that scenario include a sequence number, for example,
"TREE04.2", "TREE04.3", and so on.

The most frequently used file in the top-level directory is testid.txt.
If the test ran in a git repository, then this file contains the commit
that was tested and any uncommitted changes in diff format.

The most frequently used files in each per-scenario-run directory are:

.config: This file contains the Kconfig options.

Make.out: This contains build output for a specific scenario.

console.log: This contains the console output for a specific scenario.
	This file may be examined once the kernel has booted, but
	it might not exist if the build failed.

vmlinux: This contains the kernel, which can be useful with tools like
	objdump and gdb, as shown below.
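
For example, the following sketch disassembles one function in the
TREE04 kernel from the example run above (rcu_gp_kthread is merely
illustrative; substitute whatever symbol appears in the console log):

	gdb -batch -ex 'disassemble rcu_gp_kthread' \
		tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23/TREE04/vmlinux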

A number of additional files are available, but are less frequently used.
Many are intended for debugging of rcutorture itself or of its scripting.

As of v5.4, a successful run with the default set of scenarios produces
the following summary at the end of the run on a 12-CPU system:

SRCU-N ------- 804233 GPs (148.932/s) [srcu: g10008272 f0x0 ]
SRCU-P ------- 202320 GPs (37.4667/s) [srcud: g1809476 f0x0 ]
SRCU-t ------- 1122086 GPs (207.794/s) [srcu: g0 f0x0 ]
SRCU-u ------- 1111285 GPs (205.794/s) [srcud: g1 f0x0 ]
TASKS01 ------- 19666 GPs (3.64185/s) [tasks: g0 f0x0 ]
TASKS02 ------- 20541 GPs (3.80389/s) [tasks: g0 f0x0 ]
TASKS03 ------- 19416 GPs (3.59556/s) [tasks: g0 f0x0 ]
TINY01 ------- 836134 GPs (154.84/s) [rcu: g0 f0x0 ] n_max_cbs: 34198
TINY02 ------- 850371 GPs (157.476/s) [rcu: g0 f0x0 ] n_max_cbs: 2631
TREE01 ------- 162625 GPs (30.1157/s) [rcu: g1124169 f0x0 ]
TREE02 ------- 333003 GPs (61.6672/s) [rcu: g2647753 f0x0 ] n_max_cbs: 35844
TREE03 ------- 306623 GPs (56.782/s) [rcu: g2975325 f0x0 ] n_max_cbs: 1496497
CPU count limited from 16 to 12
TREE04 ------- 246149 GPs (45.5831/s) [rcu: g1695737 f0x0 ] n_max_cbs: 434961
TREE05 ------- 314603 GPs (58.2598/s) [rcu: g2257741 f0x2 ] n_max_cbs: 193997
TREE07 ------- 167347 GPs (30.9902/s) [rcu: g1079021 f0x0 ] n_max_cbs: 478732
CPU count limited from 16 to 12
TREE09 ------- 752238 GPs (139.303/s) [rcu: g13075057 f0x0 ] n_max_cbs: 99011