path: root/include/linux
2018-07-10  Merge branch 'drm-next-4.19' of git://people.freedesktop.org/~agd5f/linux into drm-next  (Dave Airlie)

More features for 4.19:
- Use core pcie functionality rather than duplicating our own for pcie gens and lanes
- Scheduler function naming cleanups
- More documentation
- Reworked DC/Powerplay interfaces to improve power savings
- Initial stutter mode support for RV (power feature)
- Vega12 powerplay updates
- GFXOFF fixes
- Misc fixes

Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180705221447.2807-1-alexander.deucher@amd.com
2018-07-09  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid  (Linus Torvalds)

Pull HID fixes from Jiri Kosina:
- spectrev1 pattern fix in hiddev from Gustavo A. R. Silva
- bounds check fix for hid-debug from Daniel Rosenberg
- regression fix for HID autobinding from Benjamin Tissoires
- removal of excessive logging from i2c-hid driver from Jason Andryuk
- fix specific to 2nd generation of Wacom Intuos devices from Jason Gerecke

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid:
  HID: hiddev: fix potential Spectre v1
  HID: i2c-hid: Fix "incomplete report" noise
  HID: wacom: Correct touch maximum XY of 2nd-gen Intuos
  HID: debug: check length before copy_to_user()
  HID: core: allow concurrent registration of drivers
2018-07-09  netfilter: fix use-after-free in NF_HOOK_LIST  (Edward Cree)

nf_hook() can free the skb, so we need to remove it from the list before calling, and add passed skbs to a sublist afterwards.

Fixes: 17266ee93984 ("net: ipv4: listified version of ip_rcv")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-09  net: allow fallback function to pass netdev  (Alexander Duyck)

For most of these calls we can just pass NULL through to the fallback function as the sb_dev. The only cases where we cannot are the cases where we might be dealing with either an upper device or a driver that would have configured things to support an sb_dev itself.

The only driver that has any significant change in this patch set should be ixgbe, as we can drop the redundant functionality that existed in both the ndo_select_queue function and the fallback function that was passed through to us.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2018-07-09  net: allow ndo_select_queue to pass netdev  (Alexander Duyck)

This patch makes it so that instead of passing a void pointer as the accel_priv we instead pass a net_device pointer as sb_dev. Making this change allows us to pass the subordinate device through to the fallback function eventually, so that we can keep the actual code in the ndo_select_queue call as focused as possible on the exception cases.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2018-07-09  net: Add generic ndo_select_queue functions  (Alexander Duyck)

This patch adds a generic version of the ndo_select_queue functions for either returning 0 or selecting a queue based on the processor ID. This is generally meant to just reduce the number of functions we have to change in the future when we have to deal with ndo_select_queue changes.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
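As a rough illustration of what such generic helpers look like (a sketch, not the patch itself: the function bodies and the exact select_queue_fallback_t signature are assumed from the description):

  #include <linux/netdevice.h>
  #include <linux/smp.h>

  /* Sketch: generic queue pickers for drivers that always use queue 0
   * or that spread transmits by the submitting CPU. Bodies assumed. */
  u16 dev_pick_tx_zero(struct net_device *dev, struct sk_buff *skb,
                       struct net_device *sb_dev,
                       select_queue_fallback_t fallback)
  {
          return 0;
  }

  u16 dev_pick_tx_cpu_id(struct net_device *dev, struct sk_buff *skb,
                         struct net_device *sb_dev,
                         select_queue_fallback_t fallback)
  {
          return (u16)raw_smp_processor_id() % dev->real_num_tx_queues;
  }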
2018-07-09  net: Add support for subordinate traffic classes to netdev_pick_tx  (Alexander Duyck)

This change makes it so that we can support the concept of subordinate device traffic classes in the core networking code. In doing this we can start pulling out the driver-specific bits needed to support selecting a queue based on an upper device.

The solution as it currently stands is only partially implemented. I have the start of some XPS bits in here, but I would still need to allow for configuration of the XPS maps on the queues reserved for the subordinate devices. For now I am using the reference to the sb_dev XPS map as just a way to skip the lookup of the lower device XPS map, as that would result in the wrong queue being picked.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2018-07-09  net: Add support for subordinate device traffic classes  (Alexander Duyck)

This patch is meant to provide the basic tools needed to allow us to create subordinate device traffic classes. The general idea here is to allow subdividing the queues of a device into queue groups accessible through an upper device such as a macvlan.

The idea here is to enforce the idea that an upper device has to be a single queue device, ideally with IFF_NO_QUEUE set. With that being the case we can pretty much guarantee that the tc_to_txq mappings and XPS maps for the upper device are unused. As such we could reuse those in order to support subdividing the lower device and distributing those queues between the subordinate devices.

In order to distinguish between a regular set of traffic classes and a device carrying subordinate traffic classes, I changed num_tc from a u8 to an s16 value and use the negative values to represent the subordinate pool values. So starting at -1 and running to -32768 we can encode those as pool values, and the existing values of 0 to 15 can be maintained.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
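A hedged sketch of the sign-encoding idea described above (the helper name is hypothetical, not from the patch):

  #include <linux/netdevice.h>

  /* Hypothetical helper: with num_tc widened to s16, positive values
   * (0..15) remain ordinary traffic classes, while -1..-32768 mark
   * subordinate-device queue pools. */
  static inline bool netdev_carries_sb_pools(const struct net_device *dev)
  {
          return dev->num_tc < 0;
  }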
2018-07-09  dmaengine: add support for reporting pause and resume separately  (Marek Szyprowski)

The 'cmd_pause' DMA channel capability means that the respective DMA engine supports both pausing and resuming a given DMA channel. However, in some cases it is important to know if a DMA channel can be paused without the need to resume it. This is a typical requirement for proper residue reading on transfer timeout in UART drivers. There are also some DMA engines with limited hardware which don't really support resuming.

Reporting pause and resume capabilities separately allows UART drivers to check for exactly the capabilities they require and to operate in DMA mode also on systems with limited DMA hardware. On the other hand, drivers which rely on full channel suspend/resume support should now check for both 'pause' and 'resume' features.

Existing clients of dma_get_slave_caps() have been checked, and the only driver which relies on proper channel resuming is the soc-generic-dmaengine-pcm driver, which has been updated to check the newly added capability. The existing 'cmd_pause' now only indicates that the DMA engine supports pausing a given DMA channel.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
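A minimal sketch of the resulting capability check (assuming the new bit is a 'cmd_resume' field in struct dma_slave_caps alongside 'cmd_pause'):

  #include <linux/dmaengine.h>

  /* Sketch: pause-only users (e.g. UART residue reading on timeout)
   * check just cmd_pause; full suspend/resume users must also check
   * the cmd_resume field assumed from the commit description. */
  static int check_dma_pause_caps(struct dma_chan *chan, bool need_resume)
  {
          struct dma_slave_caps caps;
          int ret = dma_get_slave_caps(chan, &caps);

          if (ret)
                  return ret;
          if (!caps.cmd_pause || (need_resume && !caps.cmd_resume))
                  return -EOPNOTSUPP;
          return 0;
  }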
2018-07-09  block: introduce blk-iolatency io controller  (Josef Bacik)

Current IO controllers for the block layer are less than ideal for our use case. The io.max controller is great at hard limiting, but it is not work conserving. This patch introduces io.latency. You provide a latency target for your group, and we monitor the io in short windows to make sure we are not exceeding those latency targets. This makes use of the rq-qos infrastructure and works much like the wbt stuff. There are a few differences from wbt:

- It's bio based, so the latency covers the whole block layer in addition to the actual io.
- We will throttle all IO types that come in here if we need to.
- We use the mean latency over the 100ms window. This is because writes can be particularly fast, which could give us a false sense of the impact of other workloads on our protected workload.
- By default there's no throttling; we set the queue_depth to INT_MAX so that we can have as many outstanding bio's as we're allowed to. Only at throttle time do we pay attention to the actual queue depth.
- We backcharge cgroups for root cg issued IO and induce artificial delays in order to deal with cases like metadata-only or swap-heavy workloads.

In testing this has worked out relatively well. Protected workloads will throttle noisy workloads down to 1 io at a time if they are doing normal IO on their own, or induce up to a 1 second delay per syscall if they are doing a lot of root issued IO (metadata/swap IO).

Our testing has revolved mostly around our production web servers, where we have hhvm (the web server application) in a protected group and everything else in another group. We see slightly higher requests per second (RPS) on the test tier vs the control tier, and much more stable RPS across all machines in the test tier vs the control tier.

Another test we run is a slow memory allocator in the unprotected group. Before, this would eventually push us into swap and cause the whole box to die and not recover at all. With these patches we see slight RPS drops (usually 10-15%) before the memory consumer is properly killed, and things recover within seconds.

Signed-off-by: Josef Bacik <jbacik@fb.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-07-09  blk-rq-qos: refactor out common elements of blk-wbt  (Josef Bacik)

blkcg-qos is going to do essentially what wbt does, only on a cgroup basis. Break out the common code that will be shared between blkcg-qos and wbt into blk-rq-qos.* so they can both utilize the same infrastructure.

Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-07-09  memcontrol: schedule throttling if we are congested  (Tejun Heo)

Memory allocations can induce swapping via kswapd or direct reclaim. If we are having IO done for us by kswapd and don't actually go into direct reclaim we may never get scheduled for throttling. So instead check to see if our cgroup is congested, and if so schedule the throttling. Before we return to user space the throttling stuff will only throttle if we actually required it.

Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Josef Bacik <jbacik@fb.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-07-09  blkcg: add generic throttling mechanism  (Josef Bacik)

Since IO can be issued from literally anywhere, it's almost impossible to do throttling without having some sort of adverse effect somewhere else in the system because of locking or other dependencies. The best way to solve this is to do the throttling when we know we aren't holding any other kernel resources. Do this by tracking throttling on a per-blkg basis, and if we require throttling, flag the task so that it checks before returning to user space and possibly sleeps there.

This is to address the case where a process is doing work that is generating IO that can't be throttled, whether that is directly with a lot of REQ_META IO, or indirectly by allocating so much memory that it is swamping the disk with REQ_SWAP. We can't use task_work_add as we don't want to induce a memory allocation in the IO path, so simply saving the request queue in the task and flagging it to do the notify_resume thing achieves the same result without the overhead of a memory allocation.

Signed-off-by: Josef Bacik <jbacik@fb.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-07-09  swap,blkcg: issue swap io with the appropriate context  (Tejun Heo)

For backcharging we need to know who the page belongs to when swapping it out. We don't worry about things that do ->rw_page (zram etc) at the moment; we're only worried about pages that actually go to a block device.

Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Josef Bacik <jbacik@fb.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-07-09  blk: introduce REQ_SWAP  (Josef Bacik)

Just like REQ_META, it's important to know the IO coming down is swap in order to guard against potential IO priority inversion issues with cgroups. Add REQ_SWAP and use it for all swap IO, and add it to our bio_issue_as_root_blkg helper.

Signed-off-by: Josef Bacik <jbacik@fb.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-07-09  blk-cgroup: allow controllers to output their own stats  (Josef Bacik)

blk-iolatency has a few stats that it would like to print out, and instead of adding a bunch of crap to the generic code just provide a helper so that controllers can add stuff to the stat line if they want to.

Hide it behind a boot option since it changes the output of io.stat from normal, and these stats are only interesting to developers.

Signed-off-by: Josef Bacik <jbacik@fb.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-07-09  block: introduce bio_issue_as_root_blkg  (Josef Bacik)

Instead of forcing all file systems to get the right context on their bio's, simply check for REQ_META to see if we need to issue as the root blkg. We don't want to force all bio's to have the root blkg associated with them if REQ_META is set, as some controllers (blk-iolatency) need to know who the originating cgroup is so it can backcharge them for the work they are doing. This helper will make sure that the controllers do the proper thing wrt the IO priority and backcharging.

Signed-off-by: Josef Bacik <jbacik@fb.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
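A sketch of what the helper likely boils down to (body assumed from the description; the REQ_SWAP patch above extends the same check with REQ_SWAP):

  #include <linux/blk_types.h>

  /* Sketch: metadata IO is issued as if from the root blkg so it is
   * never throttled; controllers still backcharge the real cgroup.
   * Named _sketch to avoid claiming this is the exact helper body. */
  static inline bool bio_issue_as_root_blkg_sketch(struct bio *bio)
  {
          return (bio->bi_opf & REQ_META) != 0;
  }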
2018-07-09  block: add bi_blkg to the bio for cgroups  (Josef Bacik)

Currently io.low uses a bi_cg_private to stash its private data for the blkg, however other blkcg policies may want to use this as well. Since we can get the private data out of the blkg, move this to bi_blkg in the bio and make it generic, then we can use bio_associate_blkg() to attach the blkg to the bio.

Theoretically we could simply replace the bi_css with this since we can get to all the same information from the blkg, however you have to lookup the blkg, so for example wbc_init_bio() would have to lookup and possibly allocate the blkg for the css it was trying to attach to the bio. This could be problematic and result in us either not attaching the css at all to the bio, or falling back to the root blkcg if we are unable to allocate the corresponding blkg.

So for now do this, and in the future if possible we could just replace the bi_css with bi_blkg and update the helpers to do the correct translation.

Signed-off-by: Josef Bacik <jbacik@fb.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-07-09  blk-mq: dequeue request one by one from sw queue if hctx is busy  (Ming Lei)

It isn't efficient to dequeue requests one by one from the sw queue, but we have to do that when the queue is busy for better merge performance. This patch uses an Exponential Weighted Moving Average (EWMA) to figure out if the queue is busy, and only then dequeues requests one by one from the sw queue.

Fixes: b347689ffbca ("blk-mq-sched: improve dispatching from sw queue")
Cc: Kashyap Desai <kashyap.desai@broadcom.com>
Cc: Laurence Oberman <loberman@redhat.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Hannes Reinecke <hare@suse.de>
Reported-by: Kashyap Desai <kashyap.desai@broadcom.com>
Tested-by: Kashyap Desai <kashyap.desai@broadcom.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
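A hedged sketch of EWMA-style busy tracking (names, weights and threshold are illustrative, not the patch's actual constants):

  /* Illustrative EWMA busy tracking; constants and names invented.
   * Each dispatch outcome nudges a per-hctx average; above the busy
   * level, requests are dequeued one by one for better merging. */
  #define EWMA_WEIGHT     8
  #define EWMA_BUSY_LEVEL 4

  static void update_dispatch_busy(unsigned int *ewma, bool busy)
  {
          unsigned int v = *ewma;

          v *= EWMA_WEIGHT - 1;
          if (busy)
                  v += EWMA_WEIGHT;  /* busy samples push the average up */
          v /= EWMA_WEIGHT;
          *ewma = v;
  }

  static bool hctx_dispatch_busy(unsigned int ewma)
  {
          return ewma >= EWMA_BUSY_LEVEL;
  }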
2018-07-09  blk-mq: remove synchronize_rcu() from blk_mq_del_queue_tag_set()  (Ming Lei)

We have to remove synchronize_rcu() from blk_queue_cleanup(), otherwise a long delay can be caused during lun probe. To remove it, we have to avoid iterating the set->tag_list in the IO path, e.g. in blk_mq_sched_restart().

This patch reverts 5b79413946d (Revert "blk-mq: don't handle TAG_SHARED in restart"). Given we have fixed enough IO hang issues, there isn't any reason to restart all queues in one tag set any more, for the following reasons:

1) The blk-mq core can deal with the shared-tags case well via blk_mq_get_driver_tag(), which can wake up queues waiting for a driver tag.

2) SCSI is a bit special because it may return BLK_STS_RESOURCE if the queue, target or host isn't ready, but SCSI's built-in restart can cover all these well; see scsi_end_request(), where the queue will be rerun after any request initiated from this host/target is completed.

In my test on scsi_debug (8 luns), this patch may improve IOPS by 20% ~ 30% when running I/O on these 8 luns concurrently.

Fixes: 705cda97ee3a ("blk-mq: Make it safe to use RCU to iterate over blk_mq_tag_set.tag_list")
Cc: Omar Sandoval <osandov@fb.com>
Cc: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: linux-scsi@vger.kernel.org
Reported-by: Andrew Jones <drjones@redhat.com>
Tested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-07-09  blk-mq: introduce new lock for protecting hctx->dispatch_wait  (Ming Lei)

Now hctx->lock is only acquired when adding hctx->dispatch_wait to one wait queue, but not held when removing it from the wait queue. IO hang can be observed easily if SCHED RESTART is disabled; that means RESTART now exists just for fixing the issue in blk_mq_mark_tag_wait().

This patch fixes the issue by introducing hctx->dispatch_wait_lock and holding it for removing hctx->dispatch_wait in blk_mq_dispatch_wake(), since we need to avoid acquiring hctx->lock in irq context.

Fixes: eb619fdb2d4cb8b3d3419 ("blk-mq: fix issue with shared tag queue re-running")
Cc: Christoph Hellwig <hch@lst.de>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Bart Van Assche <bart.vanassche@wdc.com>
Tested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-07-09  block: Make struct request_queue smaller for CONFIG_BLK_DEV_ZONED=n  (Bart Van Assche)

Exclude zoned block device members from struct request_queue for CONFIG_BLK_DEV_ZONED == n. Avoid breaking the build by only building the code that uses these struct request_queue members if CONFIG_BLK_DEV_ZONED != n.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Cc: Matias Bjorling <mb@lightnvm.io>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
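The pattern, roughly (member names assumed from the zoned block device code of that era, not copied from the patch):

  /* Sketch of the #ifdef pattern; member names are assumed. */
  struct request_queue {
          /* ... */
  #ifdef CONFIG_BLK_DEV_ZONED
          unsigned long   *seq_zones_bitmap;
          unsigned long   *seq_zones_wlock;
  #endif
          /* ... */
  };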
2018-07-09  block: Inline blk_queue_nr_zones()  (Bart Van Assche)

Since the implementation of blk_queue_nr_zones() is trivial and since it only has a single caller, inline this function.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Cc: Matias Bjorling <mb@lightnvm.io>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-07-09  block: Remove bdev_nr_zones()  (Bart Van Assche)

Remove this function since it has no callers. This function was introduced in commit 6cc77e9cb080 ("block: introduce zoned block devices zone write locking").

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Matias Bjorling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-07-09  printk/nmi: Prevent deadlock when accessing the main log buffer in NMI  (Petr Mladek)

The commit 719f6a7040f1bdaf96 ("printk: Use the main logbuf in NMI when logbuf_lock is available") brought back the possible deadlocks in printk() and NMI.

The check of logbuf_lock is done only in printk_nmi_enter() to prevent mixed output. But another CPU might take the lock later, enter NMI, and:

+ Both NMIs might be serialized by yet another lock, for example, the one in nmi_cpu_backtrace().
+ The other CPU might get stopped in NMI, see smp_send_stop() in panic().

The only safe solution is to use trylock when storing the message into the main log buffer. It might cause reordering when some lines go to the main log buffer directly and others are delayed via the per-CPU buffer. It means that it is not useful in general.

This patch replaces the problematic NMI deferred context with an NMI direct context. It can be used to mark code that might produce many messages in NMI, where the risk of losing them is more critical than problems with eventual reordering.

The context is then used when dumping trace buffers on oops. That was the primary motivation for the original fix. Also the reordering is an even smaller issue there because some traces have their own time stamps.

Finally, nmi_cpu_backtrace() no longer needs to be serialized because it will always use the per-CPU buffers again.

Fixes: 719f6a7040f1bdaf96 ("printk: Use the main logbuf in NMI when logbuf_lock is available")
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/20180627142028.11259-1-pmladek@suse.com
To: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: linux-kernel@vger.kernel.org
Acked-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Petr Mladek <pmladek@suse.com>
2018-07-09  pinctrl: Document pin_config_group_get() return codes like pin_config_get()  (Douglas Anderson)

The pinconf_generic_dump_one() function makes the assumption that pin_config_group_get() should return -EINVAL and -ENOTSUPP just like pin_config_get() does. Document that so it's more obvious.

Signed-off-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2018-07-09  driver core: Add flag to autoremove device link on supplier unbind  (Vivek Gautam)

Add a flag to autoremove the device links on supplier driver unbind. This obviates the need to explicitly delete the link in the remove path. We remove these links only when the supplier's link to its consumers has gone to DL_STATE_SUPPLIER_UNBIND state.

Signed-off-by: Vivek Gautam <vivek.gautam@codeaurora.org>
Suggested-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
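A hedged usage sketch (assuming the new flag is named DL_FLAG_AUTOREMOVE_SUPPLIER, in line with the consumer-side rename below; error handling reduced):

  #include <linux/device.h>

  /* Sketch: a consumer links to its supplier; the link is dropped
   * automatically when the supplier driver unbinds. Flag name assumed. */
  static int link_to_supplier(struct device *consumer,
                              struct device *supplier)
  {
          struct device_link *link;

          link = device_link_add(consumer, supplier,
                                 DL_FLAG_AUTOREMOVE_SUPPLIER);
          return link ? 0 : -ENODEV;
  }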
2018-07-09  driver core: Rename flag AUTOREMOVE to AUTOREMOVE_CONSUMER  (Vivek Gautam)

Now that we want to add another flag to autoremove the device link on supplier unbind, it's fair to rename the existing flag from DL_FLAG_AUTOREMOVE to DL_FLAG_AUTOREMOVE_CONSUMER, so that we can add a similar flag for the supplier later. And, while we are touching device.h, fix a doc build warning.

Signed-off-by: Vivek Gautam <vivek.gautam@codeaurora.org>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2018-07-09  PM / Domains: Introduce dev_pm_domain_attach_by_name()  (Ulf Hansson)

For the multiple PM domain case, let's introduce a new API called dev_pm_domain_attach_by_name(). This allows a consumer driver to associate its device with one of its PM domains, by using a name based lookup.

Do note that, currently, it's only genpd that supports multiple PM domains per device, but dev_pm_domain_attach_by_name() can easily be extended to cover other PM domain types, if/when needed.

Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Rajendra Nayak <rnayak@codeaurora.org>
Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
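A minimal usage sketch (the "perf" domain name and the NULL/ERR_PTR handling are illustrative assumptions):

  #include <linux/pm_domain.h>

  /* Sketch: attach one of a device's PM domains by name; "perf" is a
   * made-up name matching a hypothetical DT domain-name entry. */
  static int attach_named_domain(struct device *dev)
  {
          struct device *pd_dev = dev_pm_domain_attach_by_name(dev, "perf");

          if (IS_ERR(pd_dev))
                  return PTR_ERR(pd_dev);
          return pd_dev ? 0 : -ENODEV;  /* NULL assumed to mean no multi-domain support */
  }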
2018-07-09  PM / Domains: Introduce option to attach a device by name to genpd  (Ulf Hansson)

For the multiple PM domain case, let's introduce a new function called genpd_dev_pm_attach_by_name(). This allows a device to be associated with its PM domain through genpd, by using a name based lookup.

Note that genpd_dev_pm_attach_by_name() shall only be called by the driver core / PM core, similar to how the existing dev_pm_domain_attach_by_id() makes use of genpd_dev_pm_attach_by_id(). However, this is implemented by the following changes on top.

Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Rajendra Nayak <rnayak@codeaurora.org>
Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2018-07-09  bpf: include errno.h from bpf-cgroup.h  (Roman Gushchin)

Commit fdb5c4531c1e ("bpf: fix attach type BPF_LIRC_MODE2 dependency wrt CONFIG_CGROUP_BPF") caused some build issues, detected by the 0-DAY kernel test infrastructure. The problem is that the cgroup_bpf_prog_attach/detach/query() functions can return the -EINVAL error code, which is not defined. Fix this by adding errno.h to the includes.

Fixes: fdb5c4531c1e ("bpf: fix attach type BPF_LIRC_MODE2 dependency wrt CONFIG_CGROUP_BPF")
Signed-off-by: Roman Gushchin <guro@fb.com>
Cc: Sean Young <sean@mess.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-08  fs: shave 8 bytes off of struct inode  (Amir Goldstein)

Here is a link to Linus' reply to Jan's concern about making i_blkbits byte addressable: https://marc.info/?l=linux-fsdevel&m=152882624707975&w=2

Here is a link to an lkp.org report about a potential performance improvement in some workload, which could(?) be related to packing i_blkbits closer to i_bytes/i_lock: https://marc.info/?l=linux-fsdevel&m=153077048108198&w=2

Changes since v1:
- Add links to relevant discussions

Signed-off-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2018-07-08  Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)

Pull x86 fixes from Thomas Gleixner:

- Prevent an out-of-bounds access in mtrr_write()

- Break a circular dependency in the new hyperv IPI acceleration code

- Address the build breakage related to inline functions by enforcing gnu_inline and explicitly bringing native_save_fl() out of line, which also adds a set of _ASM_ARG macros which provide 32/64bit safety.

- Initialize the shadow CR4 per cpu variable before using it.

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/mtrr: Don't copy out-of-bounds data in mtrr_write
  x86/hyper-v: Fix the circular dependency in IPI enlightenment
  x86/paravirt: Make native_save_fl() extern inline
  x86/asm: Add _ASM_ARG* constants for argument registers to <asm/asm.h>
  compiler-gcc.h: Add __attribute__((gnu_inline)) to all inline declarations
  x86/mm/32: Initialize the CR4 shadow before __flush_tlb_all()
2018-07-08  Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)

Pull scheduler fixes from Thomas Gleixner:

- The hopefully final fix for the reported race problems in kthread_parkme(). The previous attempt still left a hole and was partially wrong.

- Plug a race in the remote tick mechanism which triggers a warning about updates not being done correctly. That's a false positive if the race condition is hit, as the remote CPU is idle. Plug it by checking the condition again when holding the run queue lock.

- Fix a bug in the utilization estimation of a run queue which causes the estimation to be 0 when a run queue is throttled.

- Advance the global expiration of the period timer when the timer is restarted after an idle period. Otherwise the expiry time is stale and the timer fires prematurely.

- Cure the drift between the bandwidth timer and the runqueue accounting, which leads to bogus throttling of runqueues.

- Place the call to cpufreq_update_util() correctly so the function will observe the correct number of running RT tasks and not a stale one.

* 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  kthread, sched/core: Fix kthread_parkme() (again...)
  sched/util_est: Fix util_est_dequeue() for throttled cfs_rq
  sched/fair: Advance global expiration when period timer is restarted
  sched/fair: Fix bandwidth timer clock drift condition
  sched/rt: Fix call to cpufreq_update_util()
  sched/nohz: Skip remote tick on idle task entirely
2018-07-08  Merge tag 'soc_drivers_for_4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/ssantosh/linux-keystone into next/drivers  (Olof Johansson)

Keystone SOC driver update for 4.19:

- Add suspend/resume functionality to TI EMIF SRAM driver
- Add wakeup M3 RTC self refresh support
- Fix for the PM runtime ifdefs

* tag 'soc_drivers_for_4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/ssantosh/linux-keystone:
  soc: ti: wkup_m3_ipc: mark PM functions as __maybe_unused
  soc: ti: wkup_m3_ipc: Add wkup_m3_request_wake_src
  soc: ti: wkup_m3_ipc: Add rtc_only with ddr in self refresh mode support
  memory: ti-emif-sram: Add resume function to recopy sram code

Signed-off-by: Olof Johansson <olof@lixom.net>
2018-07-08  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf  (David S. Miller)

Alexei Starovoitov says:

====================
pull-request: bpf 2018-07-07

The following pull-request contains BPF updates for your *net* tree.

Plenty of fixes for different components:

1) A set of critical fixes for sockmap and sockhash, from John Fastabend.
2) fixes for several race conditions in af_xdp, from Magnus Karlsson.
3) hash map refcnt fix, from Mauricio Vasquez.
4) samples/bpf fixes, from Taeung Song.
5) ifup+mtu check for xdp_redirect, from Toshiaki Makita.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-08  openvswitch: kernel datapath clone action  (Yifeng Sun)

Add 'clone' action to kernel datapath by using existing functions. When actions within clone don't modify the current flow, the flow key is not cloned before executing clone actions.

This is a follow up patch for this incomplete work: https://patchwork.ozlabs.org/patch/722096/

v1 -> v2:
Refactor as advised by reviewer.

Signed-off-by: Yifeng Sun <pkusunyifeng@gmail.com>
Signed-off-by: Andy Zhou <azhou@ovn.org>
Acked-by: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-07  xdp: XDP_REDIRECT should check IFF_UP and MTU  (Toshiaki Makita)

Otherwise we end up with attempting to send packets from down devices or to send oversized packets, which may cause unexpected driver/device behaviour. Generic XDP has already done this check, so reuse the logic in native XDP.

Fixes: 814abfabef3c ("xdp: add bpf_redirect helper function")
Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-07-07  headers: separate linux/mod_devicetable.h from linux/platform_device.h  (Randy Dunlap)

At over 4000 #includes, <linux/platform_device.h> is the 9th most #included header file in the Linux kernel. It does not need <linux/mod_devicetable.h>, so drop that header and explicitly add <linux/mod_devicetable.h> to source files that need it.

  4146 #include <linux/platform_device.h>

After this patch, there are 225 files that use <linux/mod_devicetable.h>, for a reduction of around 3900 times that <linux/mod_devicetable.h> does not have to be read & parsed.

  225 #include <linux/mod_devicetable.h>

This patch was build-tested on 20 different arch-es. It also makes these drivers SubmitChecklist#1 compliant.

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reported-by: kbuild test robot <lkp@intel.com> # drivers/media/platform/vimc/
Reported-by: kbuild test robot <lkp@intel.com> # drivers/pinctrl/pinctrl-u300.c
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
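As a rough illustration (the driver name and compatible string are invented), a DT-matching platform driver now pulls in mod_devicetable.h itself:

  /* Sketch: explicitly include mod_devicetable.h for struct
   * of_device_id instead of inheriting it via platform_device.h. */
  #include <linux/mod_devicetable.h>
  #include <linux/module.h>
  #include <linux/platform_device.h>

  static const struct of_device_id foo_of_match[] = {
          { .compatible = "vendor,foo" },  /* hypothetical compatible */
          { /* sentinel */ }
  };
  MODULE_DEVICE_TABLE(of, foo_of_match);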
2018-07-07  linux/device.h: fix kernel-doc notation warning  (Randy Dunlap)

Fix kernel-doc build warning (missing " *" at beginning of line):

  ../include/linux/device.h:93: warning: bad line: this bus.

Fixes: 07397df29e57c ("dma-mapping: move dma configuration to bus infrastructure")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Nipun Gupta <nipun.gupta@nxp.com>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-07-07  slimbus: stream: add stream support  (Srinivas Kandagatla)

This patch adds support for SLIMbus stream apis for slimbus devices. SLIMbus streaming involves adding support for Data Channel Management and Channel Reconfiguration Messages to the slim core, plus a few stream apis. From the slim device side the apis are very simple, mostly in line with other stream apis.

Currently it only supports the Isochronous and Push/Pull transport protocols, which are sufficient for audio use cases.

Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-07-07  slimbus: core: rearrange slim_eaddr structure  (Srinivas Kandagatla)

Rearrange struct slim_eaddr so that the structure is packed correctly, to be able to send it in SLIMbus messages.

Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-07-07  slimbus: core: add of_slim_device_get() helper  (Srinivas Kandagatla)

On SLIMBus controllers like the Qcom NGD (non ported device), the controller can request a logical address only once the remote side is powered, so having a helper function like this to explicitly enumerate the bus is helpful. Codec drivers which are talking to the interface device would need such a helper too.

Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-07-07  Merge tag 'fsi-updates-2018-06-12' of https://git.kernel.org/pub/scm/linux/kernel/git/benh/linux-fsi into char-misc-next  (Greg Kroah-Hartman)

Ben writes:

FSI updates and sbefifo driver
2018-07-07  uio: change to use the mutex lock instead of the spin lock  (Xiubo Li)

We are hitting a regression with the following commit:

  commit a93e7b331568227500186a465fee3c2cb5dffd1f
  Author: Hamish Martin <hamish.martin@alliedtelesis.co.nz>
  Date:   Mon May 14 13:32:23 2018 +1200

      uio: Prevent device destruction while fds are open

The problem is the addition of spin_lock_irqsave in uio_write. This leads to hitting uio_write -> copy_from_user -> _copy_from_user -> might_fault and the logs filling up with sleeping warnings.

I also noticed some uio drivers allocate memory, sleep, grab mutexes from callouts like open() and release and uio is now doing spin_lock_irqsave while calling them.

Reported-by: Mike Christie <mchristi@redhat.com>
CC: Hamish Martin <hamish.martin@alliedtelesis.co.nz>
Reviewed-by: Hamish Martin <hamish.martin@alliedtelesis.co.nz>
Signed-off-by: Xiubo Li <xiubli@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
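A simplified sketch of the problem and the fix (names invented; not the driver's actual code): copy_from_user() may fault and sleep, which is illegal under a spinlock with IRQs off, so the write path takes a mutex instead:

  #include <linux/mutex.h>
  #include <linux/uaccess.h>

  static DEFINE_MUTEX(info_lock);

  /* Sketch of a uio_write-like path; was spin_lock_irqsave() before. */
  static ssize_t uio_write_sketch(s32 __user *buf, s32 *irq_on)
  {
          s32 val;

          mutex_lock(&info_lock);  /* mutexes may sleep across a fault */
          if (copy_from_user(&val, buf, sizeof(val))) {
                  mutex_unlock(&info_lock);
                  return -EFAULT;
          }
          *irq_on = val;
          mutex_unlock(&info_lock);
          return sizeof(val);
  }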
2018-07-07  net: bridge: fix br_vlan_get_{pvid,info} return values  (Arnd Bergmann)

These two functions return the regular -EINVAL failure in the normal code path, but return a nonstandard '-1' error otherwise, which gets interpreted as -EPERM. Let's change it to -EINVAL for the dummy functions as well.

Fixes: 4d4fd36126d6 ("net: bridge: Publish bridge accessor functions")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
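A sketch of the corrected stub (signature assumed from include/linux/if_bridge.h of that era):

  #include <linux/errno.h>
  #include <linux/netdevice.h>

  /* Sketch of the dummy used when bridge VLAN support is compiled out:
   * return -EINVAL, not -1 (which callers would read as -EPERM). */
  static inline int br_vlan_get_pvid(const struct net_device *dev,
                                     u16 *p_pvid)
  {
          return -EINVAL;
  }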
2018-07-07  lib: reciprocal_div: implement the improved algorithm on the paper mentioned  (Jiong Wang)

The newly added "reciprocal_value_adv" implements the advanced version of the algorithm described in Figure 4.2 of the paper, except when "divisor > (1U << 31)", whose ceil(log2(d)) result will be 32, which then requires a u128 divide on the host. The exception case could be easily handled before calling "reciprocal_value_adv".

The advanced version requires more complex calculation to get the reciprocal multiplier and other control variables, but then could reduce the required emulation operations. It makes no sense to use this advanced version for host divide emulation; those extra complexities for calculating the multiplier etc. could completely waive our saving on emulation operations. However, it makes sense to use it for JIT divide code generation (for example eBPF JIT backends) for which we are willing to trade performance of JITed code with that of the host. As shown by the following pseudo code, the required emulation operations could go down from 6 (the basic version) to 3 or 4.

To use the result of "reciprocal_value_adv", suppose we want to calculate n/d; the C-style pseudo code will be the following. It could be easily changed to real code generation for other JIT targets.

  struct reciprocal_value_adv rvalue;
  u8 pre_shift, exp;

  // handle exception case.
  if (d >= (1U << 31)) {
          result = n >= d;
          return;
  }

  rvalue = reciprocal_value_adv(d, 32);
  exp = rvalue.exp;
  if (rvalue.is_wide_m && !(d & 1)) {
          // floor(log2(d & (2^32 - d)))
          pre_shift = fls(d & -d) - 1;
          rvalue = reciprocal_value_adv(d >> pre_shift, 32 - pre_shift);
  } else {
          pre_shift = 0;
  }

  // code generation starts.
  if (imm == 1U << exp) {
          result = n >> exp;
  } else if (rvalue.is_wide_m) {
          // pre_shift must be zero when reached here.
          t = (n * rvalue.m) >> 32;
          result = n - t;
          result >>= 1;
          result += t;
          result >>= rvalue.sh - 1;
  } else {
          if (pre_shift)
                  result = n >> pre_shift;
          result = ((u64)result * rvalue.m) >> 32;
          result >>= rvalue.sh;
  }

Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-06  vfs: dedupe: rationalize args  (Miklos Szeredi)

Clean up the f_op->dedupe_file_range() interface:

1) Use loff_t for offsets and length instead of u64
2) Order the arguments the same way as {copy|clone}_file_range()

Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
2018-07-06  vfs: dedupe: return int  (Miklos Szeredi)

Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
2018-07-06  device: Add #define dev_fmt similar to #define pr_fmt  (Joe Perches)

Add a prefixing macro to dev_<level> uses similar to the pr_fmt prefixing macro used in pr_<level> calls. This can help avoid some string duplication in dev_<level> uses.

The default, like pr_fmt, is an empty:

  #define dev_fmt(fmt) fmt

Rename the existing dev_<level> functions to _dev_<level> and introduce #define dev_<level> _dev_<level> macros that use the new #define dev_fmt.

Miscellanea:
o Consistently use #defines with fmt, ... and ##__VA_ARGS__
o Remove unnecessary externs

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
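A hedged usage sketch, mirroring how pr_fmt is conventionally overridden before any includes (the "foo: " prefix and function name are invented):

  /* Define dev_fmt before including headers, as with pr_fmt. */
  #define dev_fmt(fmt) "foo: " fmt

  #include <linux/device.h>

  static void foo_report(struct device *dev)
  {
          dev_info(dev, "ready\n");  /* message body becomes "foo: ready" */
  }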