2019-11-18btrfs: Streamline btrfs_fs_info::backup_root_index semanticsNikolay Borisov
The backup_root_index member stores the index at which the backup root should be saved upon the next transaction commit. However, there is a small deviation from this behavior in the form of a check in backup_super_roots which checks if the current root generation equals the generation of the previous root. This can trigger in the following scenario:

  slot0: gen-2
  slot1: gen-1
  slot2: gen
  slot3: unused

Now suppose slot3 (which is also the root specified in the super block) is corrupted, hence init_tree_roots chooses to use the backup root at slot2, meaning read_backup_root will read slot2 and assign the superblock generation to gen-1. Despite this, backup_root_index will point at slot3 because its init happens in init_backup_root_slot, long before any parsing of the backup roots occurs. Then on the next transaction start, gen-1 will be incremented by 1, making the root's generation equal gen. Subsequently, on transaction commit the following check triggers:

  if (btrfs_backup_tree_root_gen(root_backup) ==
      btrfs_header_generation(info->tree_root->node))

This causes 'next_backup', which is the index at which the backup is going to be written, to be set to last_backup, which will be slot2. All of this is a very confusing way of expressing the following invariant: always write a backup root at the index following the last used backup root. This commit streamlines this logic by setting backup_root_index to the next index after the one used for mount. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
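A minimal sketch of that invariant, assuming the usual four backup slots; the constant name matches btrfs's BTRFS_NUM_BACKUP_ROOTS, but the helper itself is illustrative arithmetic, not the patched code:

  #define BTRFS_NUM_BACKUP_ROOTS 4

  /* After mounting from backup slot N, the next transaction commit
   * writes its backup to the following slot, wrapping around. */
  static int next_backup_slot(int last_used_slot)
  {
          return (last_used_slot + 1) % BTRFS_NUM_BACKUP_ROOTS;
  }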
2019-11-18btrfs: Rename find_oldest_super_backup to init_backup_root_slotNikolay Borisov
The old name was an awful misnomer because it didn't really find the oldest super backup per se but rather its slot. For example, if we have:

  slot0: gen - 2
  slot1: gen - 1
  slot2: gen
  slot3: empty

init_backup_root_slot will return slot3 and not slot0. The new name is more appropriate since the function doesn't care whether there is a valid backup in the returned slot or not. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: Remove unused next_root_backup functionNikolay Borisov
This function has been superseded by previous commits and is no longer used so just remove it. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: Don't use objectid_mutex during mountNikolay Borisov
Since the filesystem is not well formed and no trees are loaded, it's pointless to hold the objectid_mutex. Just remove its usage. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: Factor out tree roots initialization during mountNikolay Borisov
The code responsible for reading and initializing tree roots is scattered in open_ctree among 2 labels, emulating a loop. This is rather confusing to reason about. Instead, factor the code into a new function, init_tree_roots, which implements the same logical flow. There are a couple of notable differences, namely:

  * Instead of using next_backup_root it's using the newly introduced read_backup_root.
  * If read_backup_root returns an error, init_tree_roots propagates the error and there is no special handling of that case, e.g. the code jumps straight to the 'fail_tree_roots' label. The old code, however, was (erroneously) jumping to the 'fail_block_groups' label if next_backup_root failed; this was unnecessary since the tree roots init logic doesn't modify the state of block groups.

Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: Add read_backup_rootNikolay Borisov
This function will replace next_root_backup with a much saner/cleaner interface. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: Remove newest_gen argument from find_oldest_super_backupNikolay Borisov
It's no longer needed following the cleanups around find_newest_super_backup. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: Cleanup and simplify find_newest_super_backupNikolay Borisov
Backup roots are always written in a circular manner. By definition we can only ever have 1 backup root whose generation equals that of the superblock. Hence, the 'if' in the for loop will trigger at most once. This is sufficient to return the newest backup root. Furthermore, the newest_gen parameter is always set to the generation of the superblock. This value can be obtained from the fs_info. This patch removes the unnecessary code dealing with the wraparound case and makes 'newest_gen' a local variable. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
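A self-contained sketch of the simplified lookup; the types and field names below are illustrative stand-ins, not btrfs's own. Because backup roots are written circularly, at most one slot's generation can equal the superblock generation, so a single pass with no wraparound handling suffices:

  #include <stdint.h>

  #define NUM_BACKUP_ROOTS 4

  struct backup_root {
          uint64_t tree_root_gen;
  };

  /* Return the slot whose generation matches the superblock's, or -1. */
  static int find_newest_backup_slot(const struct backup_root roots[NUM_BACKUP_ROOTS],
                                     uint64_t super_gen)
  {
          for (int i = 0; i < NUM_BACKUP_ROOTS; i++)
                  if (roots[i].tree_root_gen == super_gen)
                          return i;
          return -1;
  }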
2019-11-18Btrfs: remove unnecessary delalloc mutex for inodesFilipe Manana
The inode delalloc mutex was added a long time ago by commit f248679e86fea ("Btrfs: add a delalloc mutex to inodes for delalloc reservations"), and the reason for its introduction is not very clear from the change log. It claims it solves bogus warnings from lockdep, however it lacks an example report/warning from lockdep, or any explanation. Since we have enough concurrency protection from the locks of the space info and block reserve objects, and such lockdep warnings don't seem to exist anymore (at least on a 5.3 kernel I couldn't get them with fstests, ltp, fs_mark, etc), remove it, simplifying things a bit and decreasing the size of the btrfs_inode structure. With some quick fio tests doing direct IO and mmap writes I couldn't observe any significant performance increase either (direct IO writes that don't increase the file's size don't hold the inode's lock for their entire duration and mmap writes don't hold the inode's lock at all), which are the only type of writes that could see any performance gain due to less serialization.

Review feedback from Josef: The problem was taking the i_mutex in mmap, which is how I was protecting delalloc reservations originally. The delalloc mutex didn't come with all of the other dependencies. That's what the lockdep messages were about, removing the lock isn't going to make them appear again. We _had_ to lock around this because we used to do tricks to keep from over-reserving, and if we didn't serialize delalloc reservations we'd end up with ugly accounting problems when we tried to clean things up. However with my recentish changes this isn't the case anymore. Every operation is responsible for reserving its space, and then adding it to the inode. Then cleaning up is straightforward and can't be mucked up by other users. So we no longer need the delalloc mutex to save us from ourselves.

Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18Btrfs: remove wait queue from space_info structureFilipe Manana
It is not used anymore since commit 957780eb2788d8 ("Btrfs: introduce ticketed enospc infrastructure"), so just remove it. Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: remove cached space_info in btrfs_statfs()Johannes Thumshirn
In btrfs_statfs() we cache fs_info::space_info in a local variable only to use it once in a list_for_each_rcu() statement. Not only is the local variable unnecessary, it even makes the code harder to follow as it's not clear which list it is iterating. Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: add dedicated members for start and length of a block groupDavid Sterba
The on-disk format of block group item makes use of the key that stores the offset and length. This is further used in the code, although this makes things harder to understand. The key is also packed so the offset/length is not properly aligned as u64. Add start (key.objectid) and length (key.offset) members to block group and remove the embedded key. When the item is searched or written, a local variable for key is used. Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: rename extent buffer block group item accessorsDavid Sterba
Accessors defined by BTRFS_SETGET_FUNCS take a raw extent buffer and manipulate the items there; there's no special prefix required. The block group accessors had _disk_ because previously the names were occupied by the on-stack accessors. As this has been addressed in the previous patch, we can now unify the naming. Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: rename block_group_item on-stack accessors to follow namingDavid Sterba
All accessors defined by BTRFS_SETGET_STACK_FUNCS contain _stack_ in the name, the block group ones were not following that scheme, so let's switch them. Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: remove embedded block_group_cache::itemDavid Sterba
The members ::used and ::flags are now in the block group cache structure; the last remaining item member is chunk_objectid, but that's set to a fixed value and otherwise unused. The item is constructed from a local variable before write, so we can remove the embedded one from block group. Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: move block_group_item::flags to block groupDavid Sterba
The flags are read from the item that's embedded to block group struct, but the item will be removed. Use the ::flags after read and before write. Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: move block_group_item::used to block groupDavid Sterba
For unknown reasons, the member 'used' in the block group struct is stored in the b-tree item and accessed everywhere using the special accessor helper. Let's unify it and make it a regular member and only update the item before writing it to the tree. The item is still being used for flags and chunk_objectid, there's some duplication until the item is removed in following patches. Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: Remove btrfs_bio::flags memberQu Wenruo
The last user of btrfs_bio::flags was removed in commit 326e1dbb5736 ("block: remove management of bi_remaining when restoring original bi_end_io"), remove it. (Tagged for stable as the structure is heavily used and space savings are desirable.) CC: stable@vger.kernel.org # 4.4+ Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: add blake2b to checksumming algorithmsDavid Sterba
Add blake2b (with 256 bit digest) to the list of possible checksumming algorithms used by BTRFS. Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: add member for a specific checksum driverDavid Sterba
Currently all the checksum algorithms generate a fixed-size digest and we use it. The on-disk format can hold up to BTRFS_CSUM_SIZE bytes, while BLAKE2b produces a 512-bit digest by default. We can't use that, so we use blake2b-256, and this name needs to be passed to the crypto API. Separate that from the base algorithm name and add a member to request a specific driver, in this case with the digest size. The only place that uses the driver name is the crypto API setup. Signed-off-by: David Sterba <dsterba@suse.com>
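A hedged sketch of what passing the driver name to the crypto API amounts to: crypto_alloc_shash() accepts a driver-specific string such as "blake2b-256" just like a base algorithm name, so setup can prefer the driver string when one is set. The fallback helper below is illustrative, not the exact btrfs code:

  #include <crypto/hash.h>

  static struct crypto_shash *csum_alloc_shash(const char *csum_name,
                                               const char *csum_driver)
  {
          /* Prefer the driver name, which can encode the digest size
           * (e.g. "blake2b-256"); otherwise use the base algorithm name. */
          const char *name = csum_driver ? csum_driver : csum_name;

          return crypto_alloc_shash(name, 0, 0);
  }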
2019-11-18btrfs: sysfs: show used checksum driver per filesystemJohannes Thumshirn
Show the used driver for the checksum algorithm for the filesystem in the sysfs file /sys/fs/btrfs/UUID/features/checksum, e.g. crc32c (crc32c-generic) Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
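A sketch of how such output can be produced from the shash transform in use; crypto_shash_tfm(), crypto_tfm_alg_name() and crypto_tfm_alg_driver_name() are standard crypto API accessors, but the show helper itself is illustrative rather than the actual sysfs handler:

  #include <crypto/hash.h>
  #include <linux/kernel.h>
  #include <linux/mm.h>

  static ssize_t checksum_show_sketch(struct crypto_shash *shash, char *buf)
  {
          struct crypto_tfm *tfm = crypto_shash_tfm(shash);

          /* Produces e.g. "crc32c (crc32c-generic)". */
          return scnprintf(buf, PAGE_SIZE, "%s (%s)\n",
                           crypto_tfm_alg_name(tfm),
                           crypto_tfm_alg_driver_name(tfm));
  }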
2019-11-18btrfs: sysfs: export supported checksumsDavid Sterba
Export supported checksum algorithms via sysfs in the list of static features: /sys/fs/btrfs/features/supported_checksums, a space-separated list of checksum algorithm names. Co-developed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: add sha256 to checksumming algorithmJohannes Thumshirn
Add sha256 to the list of possible checksumming algorithms used by BTRFS. Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18btrfs: add xxhash64 to checksumming algorithmsJohannes Thumshirn
Add xxhash64 to the list of possible checksumming algorithms used by BTRFS. Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2019-11-18block: sed-opal: Introduce SUM_SET_LIST parameter and append it using 'add_token_u64'Revanth Rajashekar
In function 'activate_lsp', rather than hard-coding the short atom header (0x83), we need to let the function 'add_short_atom_header' append the header based on the parameter being appended. The parameter has been defined in Section 3.1.2.1 of https://trustedcomputinggroup.org/wp-content/uploads/TCG_Storage-Opal_Feature_Set_Single_User_Mode_v1-00_r1-00-Final.pdf Reviewed-by: Jon Derrick <jonathan.derrick@intel.com> Signed-off-by: Revanth Rajashekar <revanth.rajashekar@intel.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
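As a standalone illustration of the idea only (not the sed-opal implementation), a short-atom header for an unsigned integer can be sized from the value itself instead of hard-coding 0x83, which denotes a 3-byte unsigned integer atom:

  #include <stddef.h>
  #include <stdint.h>

  static size_t encode_short_uint_atom(uint8_t *buf, uint64_t v)
  {
          size_t len = 1;

          /* How many bytes does the value actually need? */
          for (uint64_t t = v; t >= 0x100; t >>= 8)
                  len++;

          buf[0] = 0x80 | (uint8_t)len;   /* short atom, unsigned integer, length */
          for (size_t i = 0; i < len; i++)
                  buf[1 + i] = (uint8_t)(v >> (8 * (len - 1 - i)));

          return 1 + len;
  }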
2019-11-18ftrace: Rename ftrace_graph_stub to ftrace_stub_graphSteven Rostedt (VMware)
The ftrace_graph_stub was created and points to ftrace_stub as a way to assign the function graph tracer function pointer to a stub function with a different prototype than what ftrace_stub has and not trigger the C verifier. The ftrace_graph_stub was created via the linker script vmlinux.lds.h. Unfortunately, powerpc already uses the name ftrace_graph_stub for its internal implementation of the function graph tracer, and even though powerpc would still build, the change via the linker script prevented the function tracer on powerpc from working. By using the name ftrace_stub_graph, which does not exist anywhere else in the kernel, this should not be a problem. Link: https://lore.kernel.org/r/1573849732.5937.136.camel@lca.pw Fixes: b83b43ffc6e4 ("fgraph: Fix function type mismatches of ftrace_graph_return using ftrace_stub") Reported-by: Qian Cai <cai@lca.pw> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2019-11-18ftrace: Add a helper function to modify_ftrace_direct() to allow arch optimizationSteven Rostedt (VMware)
If a direct ftrace callback is at a location that does not have any other ftrace helpers attached to it, it is possible to simply just change the text to call the new caller (if the architecture supports it). But this requires special architecture code. Currently, modify_ftrace_direct() uses a trick to add a stub ftrace callback to the location, forcing it to call the ftrace iterator. Then it can change the direct helper to call the new function in C, and then remove the stub. Removing the stub will have the location now call the new location that the direct helper is using. The new helper function does the register-the-stub trick but is a weak function, allowing an architecture to override it to do something a bit more direct. Link: https://lore.kernel.org/r/20191115215125.mbqv7taqnx376yed@ast-mbp.dhcp.thefacebook.com Suggested-by: Alexei Starovoitov <alexei.starovoitov@gmail.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
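The weak-function arrangement described above can be sketched as below; the function name and signature are illustrative stand-ins, not the actual ftrace interface:

  #include <linux/compiler.h>
  #include <linux/errno.h>

  /*
   * Generic fallback: report "not handled" so the caller falls back to
   * the register-the-stub trick. An architecture that can patch the call
   * site directly supplies its own (strong) definition of this symbol.
   */
  int __weak arch_modify_direct_call_sketch(unsigned long ip, unsigned long new_addr)
  {
          return -ENOSYS;
  }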
2019-11-18ASoC: wm_adsp: Expose mixer control APILi Xu
Expose mixer control API for reading and writing controls from the kernel. This API can be used by ALSA kernel drivers with ADSP support to read and write firmware-defined memory regions. Signed-off-by: Li Xu <li.xu@cirrus.com> Signed-off-by: David Rhodes <david.rhodes@cirrus.com> Acked-by: Charles Keepax <ckeepax@opensource.cirrus.com> Link: https://lore.kernel.org/r/1573847653-17094-2-git-send-email-david.rhodes@cirrus.com Signed-off-by: Mark Brown <broonie@kernel.org>
2019-11-18ASoC: tlv320aic31xx: configure output common-mode voltageLucas Stach
The tlv320aic31xx devices allow adjusting the output common-mode voltage for best analog performance. The datasheet states that the common-mode voltage should be set to be <= AVDD/2. This change allows configuring the output common-mode voltage via a DT property. If the property is absent, the voltage is automatically chosen as the highest voltage below/equal to AVDD/2. Signed-off-by: Lucas Stach <l.stach@pengutronix.de> Link: https://lore.kernel.org/r/20191118151207.28576-1-l.stach@pengutronix.de Signed-off-by: Mark Brown <broonie@kernel.org>
2019-11-18ASoC: SOF: Intel: Fix CFL and CML FW nocodec binary names.Liam Girdwood
The manifest information is different between CNL, CML and CFL platforms, hence we need to load different files. Signed-off-by: Liam Girdwood <liam.r.girdwood@linux.intel.com> Signed-off-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> Link: https://lore.kernel.org/r/20191111222901.19892-3-pierre-louis.bossart@linux.intel.com Signed-off-by: Mark Brown <broonie@kernel.org>
2019-11-18libtraceevent: Fix parsing of event %o and %X argument typesKonstantin Khlebnikov
Add missing "%o" and "%X". Ext4 events use "%o" for printing i_mode. Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Link: http://lore.kernel.org/lkml/157338066113.6548.11461421296091086041.stgit@buzz Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-18perf callchain: Fix segfault in thread__resolve_callchain_sample()Adrian Hunter
Do not dereference 'chain' when it is NULL.

  $ perf record -e intel_pt//u -e branch-misses:u uname
  $ perf report --itrace=l --branch-history
  perf: Segmentation fault

Fixes: e9024d519d89 ("perf callchain: Honour the ordering of PERF_CONTEXT_{USER,KERNEL,etc}") Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Link: http://lore.kernel.org/lkml/20191114142538.4097-1-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-18perf map_groups: Auto sort maps by name, if neededArnaldo Carvalho de Melo
There are still lots of lookups by name, even if just when loading vmlinux, so until that code is studied to figure out if it's possible to do away with those lookups by name, provide a way to sort the maps using libc's qsort/bsearch. Doing it at the first lookup defers the sorting a bit, and as the code stands now, it is never done for user maps, just for the kernel ones.

  # perf probe -l
  # perf probe -x ~/bin/perf -L __map_groups__find_by_name
  <__map_groups__find_by_name@/home/acme/git/perf/tools/perf/util/symbol.c:0>
        0  static struct map *__map_groups__find_by_name(struct map_groups *mg, const char *name)
        1  {
               struct map **mapp;

        4      if (mg->maps_by_name == NULL &&
        5          map__groups__sort_by_name_from_rbtree(mg))
        6          return NULL;

        8      mapp = bsearch(name, mg->maps_by_name, mg->nr_maps, sizeof(*mapp), map__strcmp_name);
        9      if (mapp)
       10          return *mapp;

       11      return NULL;
       12  }

           struct map *map_groups__find_by_name(struct map_groups *mg, const char *name)
           {
  # perf probe -x ~/bin/perf 'found=__map_groups__find_by_name:10 name:string'
  Added new event:
    probe_perf:found      (on __map_groups__find_by_name:10 in /home/acme/bin/perf with name:string)

  You can now use it in all perf tools, such as:

          perf record -e probe_perf:found -aR sleep 1

  #
  # perf probe -x ~/bin/perf -L map_groups__find_by_name
  <map_groups__find_by_name@/home/acme/git/perf/tools/perf/util/symbol.c:0>
        0  struct map *map_groups__find_by_name(struct map_groups *mg, const char *name)
        1  {
        2      struct maps *maps = &mg->maps;
               struct map *map;

        5      down_read(&maps->lock);

        7      if (mg->last_search_by_name && strcmp(mg->last_search_by_name->dso->short_name, name) == 0) {
        8          map = mg->last_search_by_name;
        9          goto out_unlock;
               }
               /*
                * If we have mg->maps_by_name, then the name isn't in the rbtree,
                * as mg->maps_by_name mirrors the rbtree when lookups by name are
                * made.
                */
       16      map = __map_groups__find_by_name(mg, name);
       17      if (map || mg->maps_by_name != NULL)
       18          goto out_unlock;

               /* Fallback to traversing the rbtree... */
       21      maps__for_each_entry(maps, map)
       22          if (strcmp(map->dso->short_name, name) == 0) {
       23              mg->last_search_by_name = map;
       24              goto out_unlock;
                   }

       27      map = NULL;
           out_unlock:
       30      up_read(&maps->lock);
       31      return map;
       32  }

           int dso__load_vmlinux(struct dso *dso, struct map *map, const char *vmlinux, bool vmlinux_allocated)
  # perf probe -x ~/bin/perf 'fallback=map_groups__find_by_name:21 name:string'
  Added new events:
    probe_perf:fallback   (on map_groups__find_by_name:21 in /home/acme/bin/perf with name:string)
    probe_perf:fallback_1 (on map_groups__find_by_name:21 in /home/acme/bin/perf with name:string)

  You can now use it in all perf tools, such as:

          perf record -e probe_perf:fallback_1 -aR sleep 1

  #
  # perf probe -l
    probe_perf:fallback   (on map_groups__find_by_name:21@util/symbol.c in /home/acme/bin/perf with name_string)
    probe_perf:fallback_1 (on map_groups__find_by_name:21@util/symbol.c in /home/acme/bin/perf with name_string)
    probe_perf:found      (on __map_groups__find_by_name:10@util/symbol.c in /home/acme/bin/perf with name_string)
  #
  # perf stat -e probe_perf:*

Now run 'perf top' in another term and then, after a while, stop 'perf stat'. Furthermore, if we ask for interval printing, we can see that that is done just at the start of the workload:

  # perf stat -I1000 -e probe_perf:*
  #           time             counts unit events
       1.000319513                  0      probe_perf:found
       1.000319513                  0      probe_perf:fallback_1
       1.000319513                  0      probe_perf:fallback
       2.001868092             23,251      probe_perf:found
       2.001868092                  0      probe_perf:fallback_1
       2.001868092                  0      probe_perf:fallback
       3.002901597                  0      probe_perf:found
       3.002901597                  0      probe_perf:fallback_1
       3.002901597                  0      probe_perf:fallback
       4.003358591                  0      probe_perf:found
       4.003358591                  0      probe_perf:fallback_1
       4.003358591                  0      probe_perf:fallback
  ^C
  #

Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-c5lmbyr14x448rcfii7y6t3k@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-18perf machine: No need to check if kernel module maps pre-existArnaldo Carvalho de Melo
We're only populating maps for kernel modules either from perf.data file PERF_RECORD_MMAP records or when parsing /proc/modules, so there is no need to first look if we already have those module maps in the list; that would mean the kernel has duplicate entries. So ditch one use of looking up maps by name. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Link: https://lkml.kernel.org/n/tip-gnzjg2hhuz6jnrw91m35059y@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2019-11-18docs: cgroup: mm: Fix spelling of "list"Chris Down
Signed-off-by: Chris Down <chris@chrisdown.name> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: linux-kernel@vger.kernel.org Cc: cgroups@vger.kernel.org Cc: linux-mm@kvack.org Cc: kernel-team@fb.com Signed-off-by: Tejun Heo <tj@kernel.org>
2019-11-18usb-storage: Disable UAS on JMicron SATA enclosureLaura Abbott
Steve Ellis reported incorrect block sizes and alignment offsets with a SATA enclosure. Adding a quirk to disable UAS fixes the problems. Reported-by: Steven Ellis <sellis@redhat.com> Cc: Pacho Ramos <pachoramos@gmail.com> Signed-off-by: Laura Abbott <labbott@fedoraproject.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-11-18ASoC: SOF: Intel: Fix build breakMark Brown
Commit 130d3e9077 (Fix CFL and CML FW nocodec binary names.) broke the build in some configurations as it depends on changes in the development branch; revert it. Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Mark Brown <broonie@kernel.org>
2019-11-18blk-cgroup: cgroup_rstat_updated() shouldn't be called on cgroup1Tejun Heo
Currently, cgroup rstat is supported only on cgroup2 hierarchy and rstat functions shouldn't be called on cgroup1 cgroups. While converting blk-cgroup core statistics to rstat, f73316482977 ("blk-cgroup: reimplement basic IO stats using cgroup rstat") accidentally ended up calling cgroup_rstat_updated() on cgroup1 cgroups causing crashes. Longer term, we probably should add cgroup1 support to rstat but for now let's mask the call directly. Fixes: f73316482977 ("blk-cgroup: reimplement basic IO stats using cgroup rstat") Tested-by: Faiz Abbas <faiz_abbas@ti.com> Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
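A hedged sketch of masking the call: only invoke cgroup_rstat_updated() when the cgroup sits on the unified (cgroup2) hierarchy, which cgroup_on_dfl() tests for; whether the actual fix is wrapped exactly like this is not claimed here:

  #include <linux/cgroup.h>

  static inline void blkcg_rstat_updated_sketch(struct cgroup *cgrp, int cpu)
  {
          /* rstat is only supported on the default (v2) hierarchy. */
          if (cgroup_on_dfl(cgrp))
                  cgroup_rstat_updated(cgrp, cpu);
  }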
2019-11-18Revert "bcache: fix fifo index swapping condition in journal_pin_cmp()"Jens Axboe
Coly says: "Guoju Fang talked to me today, he told me this change was unnecessary and I was over-thought. Then I realize fifo_idx() uses a mask to handle the array index overflow condition, so the index swap in journal_pin_cmp() won't happen. And yes, Guoju and Kent are correct. Since you already applied this patch, can you please to remove this patch from your for-next branch? This single patch does not break thing, but it is unecessary at this moment." This reverts commit c0e0954e909c17b43d176ab219fc598964616ae6. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-11-18scsi: sd_zbc: Remove set but not used variable 'buflen'YueHaibing
Fixes gcc '-Wunused-but-set-variable' warning:

  drivers/scsi/sd_zbc.c: In function 'sd_zbc_check_zones':
  drivers/scsi/sd_zbc.c:341:9: warning: variable 'buflen' set but not used [-Wunused-but-set-variable]

It is not used since commit d9dd73087a8b ("block: Enhance blk_revalidate_disk_zones()") Reported-by: Hulk Robot <hulkci@huawei.com> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-11-18perf/core: Fix the mlock accounting, againAlexander Shishkin
Commit: 5e6c3c7b1ec2 ("perf/aux: Fix tracking of auxiliary trace buffer allocation") tried to guess the correct combination of arithmetic operations that would undo the AUX buffer's mlock accounting, and failed, leaking the bottom part when an allocation needs to be charged partially to both user->locked_vm and mm->pinned_vm, eventually leaving the user with no locked bonus:

  $ perf record -e intel_pt//u -m1,128 uname
  [ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 0.061 MB perf.data ]

  $ perf record -e intel_pt//u -m1,128 uname
  Permission error mapping pages.
  Consider increasing /proc/sys/kernel/perf_event_mlock_kb,
  or try again with a smaller value of -m/--mmap_pages.
  (current value: 1,128)

Fix this by subtracting both locked and pinned counts when AUX buffer is unmapped. Reported-by: Thomas Richter <tmricht@linux.ibm.com> Tested-by: Thomas Richter <tmricht@linux.ibm.com> Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Acked-by: Peter Zijlstra <peterz@infradead.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org>
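Illustrative arithmetic for the fix: when the buffer's pages were charged partly against user->locked_vm and partly to mm->pinned_vm, unmapping must give back both parts, not just one. The names below are illustrative, not the perf core code:

  static void aux_uncharge_sketch(unsigned long *user_locked_vm,
                                  unsigned long *mm_pinned_vm,
                                  unsigned long user_extra,
                                  unsigned long pinned_extra)
  {
          *user_locked_vm -= user_extra;    /* portion charged against the user limit */
          *mm_pinned_vm   -= pinned_extra;  /* remainder charged to the mm */
  }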
2019-11-18dm thin: wakeup worker only when deferred bios existJeffle Xu
Single thread fio test (read, bs=4k, ioengine=libaio, iodepth=128, numjobs=1) over dm-thin device has poor performance versus bare nvme device. Further investigation with perf indicates that queue_work_on() consumes over 20% CPU time when doing IO over dm-thin device. The call stack is as follows.

  - 40.57% thin_map
      + 22.07% queue_work_on
      + 9.95% dm_thin_find_block
      + 2.80% cell_defer_no_holder
        1.91% inc_all_io_entry.isra.33.part.34
      + 1.78% bio_detain.isra.35

In cell_defer_no_holder(), wakeup_worker() is always called, no matter whether the tc->deferred_bio_list list is empty or not. In single thread IO model, this list is most likely empty. So skip waking up worker thread if tc->deferred_bio_list list is empty. Single thread IO performance improves from 448 MiB/s to 646 MiB/s (+44%) once the needless wake_worker() calls are properly skipped. Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
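A self-contained sketch of the optimization; the names are illustrative of the idea rather than dm-thin's internals. The wakeup becomes conditional on deferred bios actually being queued, which in the single-threaded case they usually are not:

  #include <stdbool.h>

  struct thin_ctx {
          bool deferred_bios_pending;   /* stands in for tc->deferred_bio_list */
  };

  static void wake_worker(void)
  {
          /* kick the background worker thread */
  }

  static void cell_defer_sketch(const struct thin_ctx *tc)
  {
          /* Previously wake_worker() was called unconditionally here. */
          if (tc->deferred_bios_pending)
                  wake_worker();
  }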
2019-11-18drm/i915: fix accidental static variable useJani Nikula
It's supposed to be just a const pointer. Fixes: 074c77e3ec63 ("drm/i915/tgl: Gen-12 display loses Yf tiling and legacy CCS support") Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191115120440.17883-1-jani.nikula@intel.com (cherry picked from commit 48ea97fabe75c83adf4e6ff9262bbda229e6ee73) Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
2019-11-18drm/i915/guc: Skip suspend/resume GuC action on platforms w/o GuC submissionDon Hiatt
On some platforms (e.g. KBL) that do not support GuC submission but where the user enabled GuC communication (e.g. for HuC authentication), calling the GuC EXIT_S_STATE action results in loss of the ability to enter RC6. We can remove the GuC suspend/resume entirely as we do not need to save the GuC submission status. Add intel_guc_submission_is_enabled() function to determine if GuC submission is active.

  v2: Do not suspend/resume the GuC on platforms that do not support GuC submission.
  v3: Fix typo, move suspend logic to remove goto.
  v4: Use intel_guc_submission_is_enabled() to check GuC submission status.
  v5: No need to look at engine to determine if submission is enabled. Squash fix + intel_guc_submission_is_enabled() patch into one.
  v6: Move resume check into intel_guc_resume() for symmetry. Fix commit Fixes tag.

Reported-by: KiteStramuort <kitestramuort@autistici.org> Reported-by: S. Zharkoff <s.zharkoff@gmail.com> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111594 Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111623 Fixes: ffd5ce22faa4 ("drm/i915/guc: Updates for GuC 32.0.3 firmware") Cc: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: Daniele Ceralo Spurio <daniele.ceraolospurio@intel.com> Cc: Stuart Summers <stuart.summers@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: Tomas Janousek <tomi@nomi.cz> Signed-off-by: Don Hiatt <don.hiatt@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20191115231538.1249-1-don.hiatt@intel.com (cherry picked from commit 82e0c5bbd6eb1d274b5a3e519ff0ab91f1f8e537) Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
2019-11-18drm/i915/gt: Wait for new requests in intel_gt_retire_requests()Chris Wilson
Our callers fall into two categories, those passing timeout=0 who just want to flush request retirements and those passing a timeout that need to wait for submission completion (e.g. intel_gt_wait_for_idle()). Currently, we only wait for a snapshot of timelines at the start of the wait (but there was an expectation that new requests would cause timelines to appear at the end). However, our callers, such as intel_gt_wait_for_idle() before suspend, do require us to wait for the power management requests emitted by retirement as well. If we don't, then it takes an extra second or two for the background worker to flush the queue and mark the GT as idle. Fixes: 7e8057626640 ("drm/i915: Drop struct_mutex from around i915_retire_requests()") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191114225736.616885-1-chris@chris-wilson.co.uk (cherry picked from commit 7936a22dd4660d24b4ca0668c02b0372127cab44) Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
2019-11-18drm/i915: Restore GT coarse power gating workaroundImre Deak
The workaround to disable coarse power gating is still needed on SKL GT3/GT4 machines and since the RC6 context corruption was discovered by the hardware team also on all GEN9 machines. Restore applying the workaround. Fixes: c113236718e8 ("drm/i915: Extract GT render sleep (rc6) management") Testcase: igt/intel_gt_pm_late_selftests/live_rc6_ctx Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Andi Shyti <andi.shyti@intel.com> Signed-off-by: Imre Deak <imre.deak@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20191114152621.7235-1-imre.deak@intel.com (cherry picked from commit 980f87a2edb3e7825949ebd0a7e63ab574c20816) Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
2019-11-18drm/i915/fbdev: Restore physical addresses for fb_mmap()Chris Wilson
fbdev uses the physical address of our framebuffer for its fb_mmap() routine. While we need to adapt this address for the new io BAR, we have to fix v5.4 first! The simplest fix is to restore the smem back to v5.3 and we will then probably have to implement our fbops->fb_mmap() callback to handle local memory. Reported-by: Neil MacLeod <freedesktop@nmacleod.com> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=112256 Fixes: 5f889b9a61dd ("drm/i915: Disregard drm_mode_config.fb_base") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Tested-by: Neil MacLeod <freedesktop@nmacleod.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191113180633.3947-1-chris@chris-wilson.co.uk (cherry picked from commit abc5520704ab438099fe352636b30b05c1253bea) Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
2019-11-18drm/i915/perf: don't forget noa wait after oa configLionel Landwerlin
I'm observing incoherent metric values, changing from run to run. It appears the patches introducing noa wait & reconfiguration from command stream switched places in the series multiple times during the review. This led to the dependency of one on the other going missing... Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Fixes: 15d0ace1f876 ("drm/i915/perf: execute OA configuration from command stream") Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20191113154639.27144-1-lionel.g.landwerlin@intel.com (cherry picked from commit 93937659dc644f708def8fa58cb63c5c9f499f26) Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
2019-11-18drm/i915: Avoid atomic context for error captureBruce Chang
io_mapping_map_atomic/kmap_atomic are occasionally taken in error capture (if there is no aperture preallocated for the use of error capture), but the error capture and compression routines are now run in normal context:

  <3> [113.316247] BUG: sleeping function called from invalid context at mm/page_alloc.c:4653
  <3> [113.318190] in_atomic(): 1, irqs_disabled(): 0, pid: 678, name: debugfs_test
  <4> [113.319900] no locks held by debugfs_test/678.
  <3> [113.321002] Preemption disabled at:
  <4> [113.321130] [<ffffffffa02506d4>] i915_error_object_create+0x494/0x610 [i915]
  <4> [113.327259] Call Trace:
  <4> [113.327871]  dump_stack+0x67/0x9b
  <4> [113.328683]  ___might_sleep+0x167/0x250
  <4> [113.329618]  __alloc_pages_nodemask+0x26b/0x1110
  <4> [113.334614]  pool_alloc.constprop.19+0x14/0x60 [i915]
  <4> [113.335951]  compress_page+0x7c/0x100 [i915]
  <4> [113.337110]  i915_error_object_create+0x4bd/0x610 [i915]
  <4> [113.338515]  i915_capture_gpu_state+0x384/0x1680 [i915]

However, it is not a good idea to run the slow compression inside atomic context, so we choose not to. Fixes: 895d8ebeaa924 ("drm/i915: error capture with no ggtt slot") Signed-off-by: Bruce Chang <yu.bruce.chang@intel.com> Reviewed-by: Brian Welty <brian.welty@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20191113231104.24208-1-yu.bruce.chang@intel.com (cherry picked from commit 48715f7001742e0d1cb20cffab1a0d75f5f7ad72) Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
2019-11-18drm/i915/display: Fix TRANS_DDI_MST_TRANSPORT_SELECT definitionJosé Roberto de Souza
TRANS_DDI_MST_TRANSPORT_SELECT is 2 bits wide, not 3; it was taking one bit from EDP/DSI Input Select. Fixes: b3545e086877 ("drm/i915/tgl: add support to one DP-MST stream") Cc: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: José Roberto de Souza <jose.souza@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191107214559.77087-1-jose.souza@intel.com (cherry picked from commit bb747fa5a9cbf561e5a30649f360feb9e6855645) Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>