path: root/include/linux
Age  Commit message  Author
2018-12-08  net: phy: mdio-gpio: Add platform_data support for phy_mask  (Andrew Lunn)
It is sometimes necessary to instantiate a bit-banging MDIO bus as a platform device, without the aid of device tree. When device tree is being used, the bus is not scanned for devices; only those devices which are in device tree are probed. Without device tree, by default, all addresses on the bus are scanned. This may then find a device which is not a PHY, e.g. a switch, and the switch may have registers containing values which look like a PHY. So during the scan, a PHY device is wrongly created. After the bus has been registered, a search is made for mdio_board_info structures which indicate devices on the bus and the driver which should be used for them. This is typically used to instantiate Ethernet switches from platform drivers. However, if the scanning of the bus has created a PHY device at the same location as indicated in the board info for a switch, the switch device is not created, since the address is already busy. This can be avoided by setting the phy_mask of the mdio bus. This mask prevents addresses on the bus being scanned. v2 Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
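A minimal board-file sketch of the mechanism described above: phy_mask is set in the mdio-gpio platform data so that no addresses are scanned, and the switch can later be instantiated via the mdio_board_info mechanism. The field name phy_mask follows the commit message; the exact platform_data layout is an assumption here.

  #include <linux/platform_device.h>
  #include <linux/platform_data/mdio-gpio.h>

  /* Assumed layout: the commit adds a phy_mask field; other fields omitted. */
  static struct mdio_gpio_platform_data board_mdio_pdata = {
          .phy_mask = 0xffffffff,         /* bit N set = do not scan address N */
  };

  static struct platform_device board_mdio_bus = {
          .name = "mdio-gpio",
          .id   = 0,
          .dev  = {
                  .platform_data = &board_mdio_pdata,
          },
  };
  /* platform_device_register(&board_mdio_bus); the switch is then created
   * from the registered mdio_board_info entries as described above. */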
2018-12-08  tracing: Lock event_mutex before synth_event_mutex  (Masami Hiramatsu)
The synthetic event code uses synth_event_mutex to protect synth_event_list, and the event_trigger_write() path acquires locks in the order event_trigger_write(event_mutex) -> trigger_process_regex(trigger_cmd_mutex) -> event_hist_trigger_func(synth_event_mutex). On the other hand, the synthetic event creation and deletion paths call trace_add_event_call() and trace_remove_event_call(), which acquire event_mutex. In that case, if we keep synth_event_mutex locked while registering/unregistering synthetic events, the lock dependency is inverted. To avoid this issue, the current synthetic event code uses a two-phase process to create/delete events. For example, it searches existing events under synth_event_mutex to check for event-name conflicts, unlocks synth_event_mutex, then registers the new event with event_mutex held. Finally, it locks synth_event_mutex and tries to add the new event to the list. But this introduces complexity and a window for name conflicts. To solve this more simply, introduce trace_add_event_call_nolock() and trace_remove_event_call_nolock(), which don't acquire event_mutex internally, so the synthetic event code can lock event_mutex before synth_event_mutex and resolve the lock dependency issue. Link: http://lkml.kernel.org/r/154140844377.17322.13781091165954002713.stgit@devbox Reviewed-by: Tom Zanussi <tom.zanussi@linux.intel.com> Tested-by: Tom Zanussi <tom.zanussi@linux.intel.com> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2018-12-08  ring-buffer: Add percentage of ring buffer full to wake up reader  (Steven Rostedt (VMware))
Instead of just waiting for a page to be full before waking up a pending reader, allow the reader to pass in a "percentage" of pages that must have content before it is woken. This should keep the process of reading the events from triggering wake-ups that constantly cause reading of the buffer. Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
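The wake-up condition being added boils down to simple integer arithmetic; the sketch below is illustrative only, with hypothetical names (the real bookkeeping lives in kernel/trace/ring_buffer.c), and treats 0 as the old wake-on-any-data behaviour.

  #include <linux/types.h>

  static bool reader_should_wake(unsigned long pages_filled,
                                 unsigned long nr_pages,
                                 int full_percent)
  {
          if (full_percent <= 0)                  /* 0: any data wakes the reader */
                  return pages_filled != 0;
          return pages_filled * 100 >= nr_pages * full_percent;
  }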
2018-12-08  function_graph: Have profiler use new helper ftrace_graph_get_ret_stack()  (Steven Rostedt (VMware))
The ret_stack processing is going to change, and that is going to break anything that is accessing the ret_stack directly. One user is the function graph profiler. By using the ftrace_graph_get_ret_stack() helper function, the profiler can access the ret_stack entry without relying on the implementation details of the stack itself. Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2018-12-08  fgraph: Add new fgraph_ops structure to enable function graph hooks  (Steven Rostedt (VMware))
Currently, registering the function graph tracer means passing in an entry and a return function. We need a way to associate those functions so that the entry hook can determine whether to run the return hook. Having a structure that contains both functions will facilitate converting the code to be able to do so. This is similar to the way function hooks are enabled (it passes in ftrace_ops). Instead of passing in the functions to use, a single structure is passed in to the registering function. The unregister function is now passed the fgraph_ops handle. When we allow more than one callback to the function graph hooks, this will let the system know which one to remove. Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
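A sketch of what registration looks like with the new structure, assuming the callback members are named entryfunc and retfunc; the field names and prototypes are reproduced from memory, so treat them as assumptions.

  #include <linux/ftrace.h>

  static int my_graph_entry(struct ftrace_graph_ent *trace)
  {
          return 1;       /* non-zero: also hook this function's return */
  }

  static void my_graph_return(struct ftrace_graph_ret *trace)
  {
          /* e.g. look at trace->func and trace->rettime - trace->calltime */
  }

  static struct fgraph_ops my_fgraph_ops = {
          .entryfunc = my_graph_entry,
          .retfunc   = my_graph_return,
  };

  /* register_ftrace_graph(&my_fgraph_ops);
   * ...
   * unregister_ftrace_graph(&my_fgraph_ops); */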
2018-12-08  function_graph: Remove the use of FTRACE_NOTRACE_DEPTH  (Steven Rostedt (VMware))
The curr_ret_stack is no longer set to a negative value when a function is not to be traced by the function graph tracer. Remove the usage of FTRACE_NOTRACE_DEPTH, as it is no longer needed. Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2018-12-08Revert "mm, thp: consolidate THP gfp handling into ↵David Rientjes
alloc_hugepage_direct_gfpmask" This reverts commit 89c83fb539f95491be80cdd5158e6f0ce329e317. This should have been done as part of 2f0799a0ffc0 ("mm, thp: restore node-local hugepage allocations"). The movement of the thp allocation policy from alloc_pages_vma() to alloc_hugepage_direct_gfpmask() was intended to only set __GFP_THISNODE for mempolicies that are not MPOL_BIND whereas the revert could set this regardless of mempolicy. While the check for MPOL_BIND between alloc_hugepage_direct_gfpmask() and alloc_pages_vma() was racy, that has since been removed since the revert. What is left is the possibility to use __GFP_THISNODE in policy_node() when it is unexpected because the special handling for hugepages in alloc_pages_vma() was removed as part of the consolidation. Secondly, prior to 89c83fb539f9, alloc_pages_vma() implemented a somewhat different policy for hugepage allocations, which were allocated through alloc_hugepage_vma(). For hugepage allocations, if the allocating process's node is in the set of allowed nodes, allocate with __GFP_THISNODE for that node (for MPOL_PREFERRED, use that node with __GFP_THISNODE instead). This was changed for shmem_alloc_hugepage() to allow fallback to other nodes in 89c83fb539f9 as it did for new_page() in mm/mempolicy.c which is functionally different behavior and removes the requirement to only allocate hugepages locally. So this commit does a full revert of 89c83fb539f9 instead of the partial revert that was done in 2f0799a0ffc0. The result is the same thp allocation policy for 4.20 that was in 4.19. Fixes: 89c83fb539f9 ("mm, thp: consolidate THP gfp handling into alloc_hugepage_direct_gfpmask") Fixes: 2f0799a0ffc0 ("mm, thp: restore node-local hugepage allocations") Signed-off-by: David Rientjes <rientjes@google.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: Michal Hocko <mhocko@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-12-08  x86/kernel: Fix more -Wmissing-prototypes warnings  (Borislav Petkov)
... with the goal of eventually enabling -Wmissing-prototypes by default. At least on x86. Make functions static where possible, otherwise add prototypes or make them visible through includes. asm/trace/ changes courtesy of Steven Rostedt <rostedt@goodmis.org>. Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Reviewed-by: Ingo Molnar <mingo@kernel.org> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> # ACPI + cpufreq bits Cc: Andrew Banman <andrew.banman@hpe.com> Cc: Dimitri Sivanich <dimitri.sivanich@hpe.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mike Travis <mike.travis@hpe.com> Cc: "Steven Rostedt (VMware)" <rostedt@goodmis.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Yi Wang <wang.yi59@zte.com.cn> Cc: linux-acpi@vger.kernel.org
2018-12-07  nvme: implement Enhanced Command Retry  (Keith Busch)
A controller may have an internal state that is not able to successfully process commands for a short duration. In such states, an immediate command requeue is expected to fail. The driver may exceed its max retry count, which permanently ends the command in failure when the same command would succeed after waiting for the controller to be ready. NVMe ratified TP 4033 provides a delay hint in the completion status code for failed commands. Implement the retry delay based on the command completion status and the controller's requested delay. Note that requeued commands are handled per request_queue, not per individual request. If multiple commands fail, the controller should consistently report the desired delay time for retryable commands in all CQEs, otherwise the requeue list may be kicked too soon. Signed-off-by: Keith Busch <keith.busch@intel.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
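The delay computation described above reduces to the sketch below; extraction of the CRD index from the status word is omitted and the helper name is hypothetical, but the 100 millisecond CRDT unit comes from the specification.

  #include <linux/types.h>

  /* crd: Command Retry Delay index from the completion status;
   * crdt[]: the three delay times advertised by Identify Controller. */
  static unsigned long retry_delay_ms(u8 crd, const u16 crdt[3])
  {
          if (!crd)                       /* CRD == 0: no delay requested */
                  return 0;
          return crdt[crd - 1] * 100UL;   /* CRDT is in units of 100 ms */
  }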
2018-12-07  nvmet: expose support for fabrics SQ flow control disable in treq  (Sagi Grimberg)
Technical Proposal introduces an indication for SQ flow control disable support. Expose it since we are able to operate in this mode. Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07  nvmet: don't override treq upon modification.  (Sagi Grimberg)
Only override the allowed parts of it. Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Sagi Grimberg <sagi@grimberg.me> [hch: slight tweak to the NVME_TREQ_SECURE_CHANNEL_MASK definition] Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07  nvmet: support fabrics sq flow control  (Sagi Grimberg)
Technical proposal 8005 "fabrics SQ flow control" introduces a mode where a host and controller agree to omit sq_head pointer updates when sending nvme completions. In case the host indicated desire to operate in this mode (connect attribute) the controller will return back a connect completion with sq_head value of 0xffff as indication that it will omit sq_head pointer updates. This mode saves us an atomic update in the I/O path. Reviewed-by: Hannes Reinecke <hare@suse.com> [hch: suggested better implementation] Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07  nvmet-fc: remove the IN_ISR deferred scheduling options  (James Smart)
All target lldd's call the cmd receive and op completions in non-isr thread contexts. As such the IN_ISR options are not necessary. Remove the functionality and flags, which also removes cpu assignments to queues. Signed-off-by: James Smart <jsmart2021@gmail.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07  nvmet: add defines for discovery change async events  (Jay Sternberg)
Add AEN/AER values as defined by the specification Signed-off-by: Jay Sternberg <jay.e.sternberg@intel.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07  nvmet: change aen mask functions to use bit numbers  (Jay Sternberg)
Functions nvmet_aen_disabled and nvmet_clear_aen were using values rather than bit numbers, i.e. 1 << 9 instead of 9, for the bit functions clear_bit and test_and_set_bit. Signed-off-by: Jay Sternberg <jay.e.sternberg@intel.com> Reviewed-by: Phil Cayton <phil.cayton@intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
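For reference, the bitops helpers take a bit number, not a mask; a small example with a made-up DEMO_AEN_BIT constant:

  #include <linux/bitops.h>
  #include <linux/types.h>

  #define DEMO_AEN_BIT    9       /* bit number, i.e. 9, not 1 << 9 */

  static bool demo_aen_first_use(unsigned long *aen_masked)
  {
          /* true only the first time the bit transitions from 0 to 1 */
          return !test_and_set_bit(DEMO_AEN_BIT, aen_masked);
  }

  static void demo_aen_clear(unsigned long *aen_masked)
  {
          clear_bit(DEMO_AEN_BIT, aen_masked);
  }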
2018-12-07  nvme: support traffic based keep-alive  (Sagi Grimberg)
If the controller supports traffic based keep alive, we restart the keep alive timer if any admin or I/O command was completed during the kato period. This prevents a possible starvation of keep alive commands in the presence of heavy traffic, since in that case we already have a health indication from the host perspective. Only set a comp_seen indicator in case the controller supports keep alive, to minimize the overhead for pci controllers. Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07  nvme: introduce ctrl attributes enumeration  (Sagi Grimberg)
We are growing more controller attributes, so use a proper enumeration for it. For now just add the 128-bit hostid which we support. Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07  blkcg: put back rcu lock in blkcg_bio_issue_check()  (Dennis Zhou)
I was a little overzealous in removing the rcu_read_lock() call from blkcg_bio_issue_check() and it broke blk-throttle. Put it back. Fixes: e35403a034bf ("blkcg: associate blkg when associating a device") Signed-off-by: Dennis Zhou <dennis@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07  blkcg: rename blkg_try_get() to blkg_tryget()  (Dennis Zhou)
blkg reference counting now uses percpu_ref rather than atomic_t. Let's make this consistent with css_tryget. This renames blkg_try_get to blkg_tryget and now returns a bool rather than the blkg or %NULL. Signed-off-by: Dennis Zhou <dennis@kernel.org> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
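A minimal sketch of the renamed helper as described, i.e. a bool-returning wrapper around percpu_ref_tryget(); the struct here is abbreviated with a demo_ prefix, and the real definition lives in the blk-cgroup headers.

  #include <linux/percpu-refcount.h>

  struct demo_blkg {
          struct percpu_ref refcnt;       /* formerly an atomic_t */
          /* ... */
  };

  static inline bool demo_blkg_tryget(struct demo_blkg *blkg)
  {
          return percpu_ref_tryget(&blkg->refcnt);
  }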
2018-12-07  blkcg: change blkg reference counting to use percpu_ref  (Dennis Zhou)
Every bio is now associated with a blkg putting blkg_get, blkg_try_get, and blkg_put on the hot path. Switch over the refcnt in blkg to use percpu_ref. Signed-off-by: Dennis Zhou <dennis@kernel.org> Acked-by: Tejun Heo <tj@kernel.org> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07  blkcg: remove bio_disassociate_task()  (Dennis Zhou)
Now that a bio only holds a blkg reference, cleanup is simply putting back that reference. Remove bio_disassociate_task() as it just calls bio_disassociate_blkg(), and call the latter directly. Signed-off-by: Dennis Zhou <dennis@kernel.org> Acked-by: Tejun Heo <tj@kernel.org> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07  blkcg: remove additional reference to the css  (Dennis Zhou)
The previous patch in this series removed carrying around a pointer to the css in blkg. However, the blkg association logic still relied on taking a reference on the css to ensure we wouldn't fail in getting a reference for the blkg. Here the implicit dependency on the css is removed. The association continues to rely on the tryget logic walking up the blkg tree. This streamlines the three ways that association can happen: normal, swap, and writeback. Signed-off-by: Dennis Zhou <dennis@kernel.org> Acked-by: Tejun Heo <tj@kernel.org> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07  blkcg: remove bio->bi_css and instead use bio->bi_blkg  (Dennis Zhou)
Prior patches ensured that any bio that interacts with a request_queue is properly associated with a blkg. This makes bio->bi_css unnecessary as blkg maintains a reference to blkcg already. This removes the bio field bi_css and transfers corresponding uses to access via bi_blkg. Signed-off-by: Dennis Zhou <dennis@kernel.org> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07  blkcg: associate writeback bios with a blkg  (Dennis Zhou)
One of the goals of this series is to remove a separate reference to the css of the bio. This can and should be accessed via bio_blkcg(). In this patch, wbc_init_bio() now requires a bio to have a device associated with it. Signed-off-by: Dennis Zhou <dennis@kernel.org> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07  blkcg: associate a blkg for pages being evicted by swap  (Dennis Zhou)
A prior patch in this series added blkg association to bios issued by cgroups. There are two other paths that we want to attribute work back to the appropriate cgroup: swap and writeback. Here we modify the way swap tags bios to include the blkg. Writeback will be tackled in the next patch. Signed-off-by: Dennis Zhou <dennis@kernel.org> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07  blkcg: consolidate bio_issue_init() to be a part of core  (Dennis Zhou)
bio_issue_init among other things initializes the timestamp for an IO. Rather than have this logic handled by policies, this consolidates it to be on the init paths (normal, clone, bounce clone). Signed-off-by: Dennis Zhou <dennis@kernel.org> Acked-by: Tejun Heo <tj@kernel.org> Reviewed-by: Liu Bo <bo.liu@linux.alibaba.com> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07  blkcg: associate blkg when associating a device  (Dennis Zhou)
Previously, blkg association was handled by controller specific code in blk-throttle and blk-iolatency. However, because a blkg represents a relationship between a blkcg and a request_queue, it makes sense to keep the blkg->q and bio->bi_disk->queue consistent. This patch moves association into the bio_set_dev() macro. This should cover the majority of cases where the device is set/changed, keeping the two pointers consistent. Fallback code is added to blkcg_bio_issue_check() to catch any missing paths. Signed-off-by: Dennis Zhou <dennis@kernel.org> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07  blkcg: introduce common blkg association logic  (Dennis Zhou)
There are 3 ways blkg association can happen: association with the current css, with the page css (swap), or from the wbc css (writeback). This patch handles how association is done for the first case, where we are associating based on the current css. If there is already a blkg associated, the css will be reused and association will be redone as the request_queue may have changed. Signed-off-by: Dennis Zhou <dennis@kernel.org> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07  blkcg: convert blkg_lookup_create() to find closest blkg  (Dennis Zhou)
There are several scenarios where blkg_lookup_create() can fail such as the blkcg dying, request_queue is dying, or simply being OOM. Most handle this by simply falling back to the q->root_blkg and calling it a day. This patch implements the notion of closest blkg. During blkg_lookup_create(), if it fails to create, return the closest blkg found or the q->root_blkg. blkg_try_get_closest() is introduced and used during association so a bio is always attached to a blkg. Signed-off-by: Dennis Zhou <dennis@kernel.org> Acked-by: Tejun Heo <tj@kernel.org> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07  blkcg: update blkg_lookup_create() to do locking  (Dennis Zhou)
To know when to create a blkg, the general pattern is to do a blkg_lookup() and if that fails, lock and do the lookup again, and if that fails finally create. It doesn't make much sense for everyone who wants to do creation to write this themselves. This changes blkg_lookup_create() to do locking and implement this pattern. The old blkg_lookup_create() is renamed to __blkg_lookup_create(). If a call site wants to do its own error handling or already owns the queue lock, they can use __blkg_lookup_create(). This will be used in upcoming patches. Signed-off-by: Dennis Zhou <dennis@kernel.org> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Acked-by: Tejun Heo <tj@kernel.org> Reviewed-by: Liu Bo <bo.liu@linux.alibaba.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
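The pattern being folded into blkg_lookup_create() looks roughly like the sketch below; lookup(), create() and the queue-lock usage are stand-ins for the real __blkg_lookup()/blkg_create() and locking details.

  #include <linux/spinlock.h>

  /* lookup() and create() are placeholders for the real helpers. */
  static struct blkcg_gq *lookup_create_pattern(struct blkcg *blkcg,
                                                struct request_queue *q)
  {
          struct blkcg_gq *blkg;
          unsigned long flags;

          blkg = lookup(blkcg, q);                /* fast path, no lock */
          if (blkg)
                  return blkg;

          spin_lock_irqsave(&q->queue_lock, flags);
          blkg = lookup(blkcg, q);                /* re-check under the lock */
          if (!blkg)
                  blkg = create(blkcg, q);        /* slow path */
          spin_unlock_irqrestore(&q->queue_lock, flags);

          return blkg;
  }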
2018-12-07  blkcg: fix ref count issue with bio_blkcg() using task_css  (Dennis Zhou)
The bio_blkcg() function turns out to be inconsistent and consequently dangerous to use. The first part returns a blkcg where a reference is owned by the bio meaning it does not need to be rcu protected. However, the third case, the last line, is problematic: return css_to_blkcg(task_css(current, io_cgrp_id)); This can race against task migration and the cgroup dying. It is also semantically different as it must be called rcu protected and is susceptible to failure when trying to get a reference to it. This patch adds association ahead of calling bio_blkcg() rather than after. This makes association a required and explicit step along the code paths for calling bio_blkcg(). In blk-iolatency, association is moved above the bio_blkcg() call to ensure it will not return %NULL. BFQ uses the old bio_blkcg() function, but I do not want to address it in this series due to the complexity. I have created a private version documenting the inconsistency and noting not to use it. Signed-off-by: Dennis Zhou <dennis@kernel.org> Acked-by: Tejun Heo <tj@kernel.org> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07  blk-mq: remove QUEUE_FLAG_POLL from default MQ flags  (Jens Axboe)
We only support polling if we have poll queues now, but the flag is being set by default. Remove the default QUEUE_FLAG_POLL setting, we'll set it in blk_mq_init_allocated_queue() if we have poll queues available for this device. Fixes: 6544d229bf43 ("block: enable polling by default if a poll map is initalized") Reported-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-07  scsi: t10-pi: Return correct ref tag when queue has no integrity profile  (Martin K. Petersen)
Commit ddd0bc756983 ("block: move ref_tag calculation func to the block layer") moved ref tag calculation from SCSI to a library function. However, this change broke returning the correct ref tag for devices operating in DIF mode since these do not have an associated block integrity profile. This in turn caused read/write failures on PI-formatted disks attached to an mpt3sas controller. Fixes: ddd0bc756983 ("block: move ref_tag calculation func to the block layer") Cc: stable@vger.kernel.org # 4.19+ Reported-by: John Garry <john.garry@huawei.com> Tested-by: Xiang Chen <chenxiang66@hisilicon.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2018-12-07  y2038: futex: Add support for __kernel_timespec  (Arnd Bergmann)
This prepares sys_futex for y2038 safe calling: the native syscall is changed to receive a __kernel_timespec argument, which will be switched to 64-bit time_t in the future. All the internal time handling gets changed to timespec64, and the compat_sys_futex entry point is moved under the CONFIG_COMPAT_32BIT_TIME check to provide compatibility for existing 32-bit architectures. Signed-off-by: Arnd Bergmann <arnd@arndb.de>
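For reference, the 64-bit-safe layout the native syscall now takes is the UAPI __kernel_timespec, reproduced below; the exact header it lives in has moved between releases.

  struct __kernel_timespec {
          __kernel_time64_t       tv_sec;         /* seconds, 64-bit even on 32-bit ABIs */
          long long               tv_nsec;        /* nanoseconds */
  };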
2018-12-07  y2038: futex: Move compat implementation into futex.c  (Arnd Bergmann)
We are going to share the compat_sys_futex() handler between 64-bit architectures and 32-bit architectures that need to deal with both 32-bit and 64-bit time_t, and this is easier if both entry points are in the same file. In fact, most other system call handlers do the same thing these days, so let's follow the trend here and merge all of futex_compat.c into futex.c. In the process, a few minor changes have to be done to make sure everything still makes sense: handle_futex_death() and futex_cmpxchg_enabled() become local symbols, and the compat version of the fetch_robust_entry() function gets renamed to compat_fetch_robust_entry() to avoid a symbol clash. This is intended as a purely cosmetic patch; no behavior should change. Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2018-12-07  bridge: Add br_fdb_clear_offload()  (Petr Machata)
When a driver unoffloads all FDB entries en bloc, it's inefficient to send the switchdev notification one by one. Add a helper that unsets the offload flag on FDB entries on a given bridge port and VLAN. Signed-off-by: Petr Machata <petrm@mellanox.com> Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
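A sketch of how a switchdev driver might call the new helper when it stops offloading a bridge port/VLAN; the (dev, vid) signature follows the commit description and should be treated as an assumption.

  #include <linux/if_bridge.h>

  static void demo_unoffload_port(struct net_device *brport_dev, u16 vid)
  {
          /* clears the offload flag on every FDB entry for this port + VLAN
           * instead of sending one switchdev notification per entry */
          br_fdb_clear_offload(brport_dev, vid);
  }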
2018-12-07  Merge branch 'mlx5-packet-credit-fc' into rdma.git  (Jason Gunthorpe)
Danit Goldberg says: Packet based credit mode Packet based credit mode is an alternative end-to-end credit mode for QPs set during their creation. Credits are transported from the responder to the requester to optimize the use of its receive resources. In packet-based credit mode, credits are issued on a per packet basis. The advantage of this feature comes while sending large RDMA messages through switches that are short in memory. The first commit exposes QP creation flag and the HCA capability. The second commit adds support for a new DV QP creation flag. The last commit report packet based credit mode capability via the MLX5DV device capabilities. * branch 'mlx5-packet-credit-fc': IB/mlx5: Report packet based credit mode device capability IB/mlx5: Add packet based credit mode support net/mlx5: Expose packet based credit mode Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2018-12-07  HID: input: use the Resolution Multiplier for high-resolution scrolling  (Peter Hutterer)
Windows uses a magic number of 120 for a wheel click. High-resolution scroll wheels are supposed to use a fraction of 120 to signal smaller scroll steps. This is implemented by the Resolution Multiplier in the device itself. If the multiplier is present in the report descriptor, set it to the logical max and then use the resolution multiplier to calculate the high-resolution events. This is the recommendation by Microsoft, see http://msdn.microsoft.com/en-us/windows/hardware/gg487477.aspx Note that all mice encountered so far have a logical min/max of 0/1, so it's a binary "yes or no" to high-res scrolling anyway. To make userspace simpler, always enable the REL_WHEEL_HI_RES bit. Where the device doesn't support high-resolution scrolling, the value for the high-res data will simply be a multiple of 120 every time. For userspace, if REL_WHEEL_HI_RES is available that is the one to be used. Potential side-effect: a device with a Resolution Multiplier applying to other Input items will have those items set to the logical max as well. This cannot easily be worked around but it is doubtful such devices exist. Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net> Verified-by: Harry Cutts <hcutts@chromium.org> Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
2018-12-07  HID: core: process the Resolution Multiplier  (Peter Hutterer)
The Resolution Multiplier is a feature report that modifies the value of Usages within the same Logical Collection. If the multiplier is set to anything but 1, the hardware reports (value * multiplier) for the same amount of physical movement, i.e. the value we receive in the kernel is pre-multiplied. The hardware may either send a single (value * multiplier), send multiplier-many reports with the same value, or use a combination of these two options. For example, when the Microsoft Sculpt Ergonomic mouse Resolution Multiplier is set to 12, the Wheel sends out 12 for every detent but AC Pan sends out a value of 3 at 4 times the frequency. The effective multiplier is based on the physical min/max of the multiplier field: a logical min/max of [0,1] with a physical min/max of [1,8] means the multiplier is either 1 or 8. The Resolution Multiplier was introduced for high-resolution scrolling in Windows Vista and is commonly used on Microsoft mice. The recommendation for the Resolution Multiplier is to default to 1 for backwards compatibility. This patch adds an arbitrary upper limit of 255. The only known use case for the Resolution Multiplier is for scroll wheels, where the multiplier has to be a fraction of 120 to work with Windows. Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net> Verified-by: Harry Cutts <hcutts@chromium.org> Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
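As a worked example of the arithmetic above (plain integer math, not a kernel API): with the Windows convention of 120 units per detent and the Sculpt's multiplier of 12, each Wheel count of 12 maps to 120 high-resolution units, and each AC Pan count of 3 maps to 30, so four of them add up to one detent.

  /* raw_value: what the device reported; effective_multiplier: the value
   * derived from the Resolution Multiplier field. */
  static int hi_res_units(int raw_value, int effective_multiplier)
  {
          return raw_value * 120 / effective_multiplier;
  }
  /* hi_res_units(12, 12) == 120 (one detent)
   * hi_res_units(3, 12)  == 30  (a quarter detent, as for AC Pan above) */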
2018-12-07  HID: core: store the collections as a basic tree  (Peter Hutterer)
For each collection parsed, store a pointer to the parent collection (if any). This makes it a lot easier to look up which collection(s) any given item is part of Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net> Verified-by: Harry Cutts <hcutts@chromium.org> Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
2018-12-07  preempt: Move PREEMPT_NEED_RESCHED definition into arch code  (Will Deacon)
PREEMPT_NEED_RESCHED is never used directly, so move it into the arch code where it can potentially be implemented using either a different bit in the preempt count or as an entirely separate entity. Cc: Robert Love <rml@tech9.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-12-07  fs/locks: merge posix_unblock_lock() and locks_delete_block()  (NeilBrown)
posix_unblock_lock() is not specific to posix locks, and behaves nearly identically to locks_delete_block() - the former returning a status while the latter doesn't. So discard posix_unblock_lock() and use locks_delete_block() instead, after giving that function an appropriate return value. Signed-off-by: NeilBrown <neilb@suse.com> Reviewed-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Jeff Layton <jlayton@kernel.org>
2018-12-07  mtd: spinand: add support for GigaDevice GD5FxGQ4xA  (Chuanhong Guo)
Add support for GigaDevice GD5F1G/2G/4GQ4xA SPI NAND. Signed-off-by: Chuanhong Guo <gch981213@gmail.com> Reviewed-by: Frieder Schrempf <frieder.schrempf@kontron.de> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
2018-12-07  mtd: rawnand: Deprecate the dummy_controller field  (Boris Brezillon)
We try to force NAND controller drivers to properly separate the NAND controller object from the NAND chip one, so let's deprecate the dummy controller object embedded in nand_chip to encourage them to create their own instance. Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
2018-12-07  mtd: rawnand: Move ->setup_data_interface() to nand_controller_ops  (Boris Brezillon)
->setup_data_interface() is a controller specific method and should thus be placed in nand_controller_ops. In order to make that work with controllers that support keeping pre-configured timings we need to add a new NAND_KEEP_TIMINGS flag to inform the core it should skip the timings selection step. Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com> Tested-by: Janusz Krzysztofik <jmkrzyszt@gmail.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
2018-12-07  mtd: rawnand: Move the ->exec_op() method to nand_controller_ops  (Boris Brezillon)
->exec_op() is a controller method and has nothing to do in the nand_chip struct. Let's move it to the nand_controller_ops struct and adjust the core and drivers accordingly. Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com> Tested-by: Janusz Krzysztofik <jmkrzyszt@gmail.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
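A sketch of the controller-side wiring after the move; the ->exec_op() prototype is reproduced from memory and only that one op is filled in, so treat the details as assumptions.

  #include <linux/mtd/rawnand.h>

  static int demo_exec_op(struct nand_chip *chip,
                          const struct nand_operation *op, bool check_only)
  {
          /* walk op->instrs[0..op->ninstrs - 1] and drive the controller;
           * with check_only, just report whether the operation is supported */
          return 0;
  }

  static const struct nand_controller_ops demo_controller_ops = {
          .exec_op = demo_exec_op,
  };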
2018-12-07  mtd: rawnand: Deprecate the ->select_chip() hook  (Boris Brezillon)
Now that the CS line to be selected is passed to ->exec_op() and stored in chip->cur_cs and after patching all drivers implementing ->exec_op() to stop implementing this method, we can deprecate it by moving it to the nand_legacy structure. Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com> Tested-by: Janusz Krzysztofik <jmkrzyszt@gmail.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
2018-12-07  mtd: rawnand: Pass the CS line to be selected in struct nand_operation  (Boris Brezillon)
In order to deprecate the ->select_chip hook we need to pass the CS line a NAND operation is targeting. This is done through the addition of a cs field to the nand_operation struct. We also need to keep track of the currently selected target to properly initialize op->cs, hence the ->cur_cs field addition to the nand_chip struct. Note that op->cs is not assigned in nand_exec_op() because we might rework the way we execute NAND operations in the future (adopt a queuing mechanism instead of the serialization we have right now). Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com> Tested-by: Janusz Krzysztofik <jmkrzyszt@gmail.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
2018-12-07  mtd: rawnand: Add nand_[de]select_target() helpers  (Boris Brezillon)
Add a wrapper to prevent drivers and core code from directly calling the ->select_chip hook which we are about to deprecate. Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com> Tested-by: Janusz Krzysztofik <jmkrzyszt@gmail.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
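A usage sketch for the new wrappers, assuming the signatures introduced by this series (nand_select_target(chip, cs) and nand_deselect_target(chip)); nand_readid_op() is only an example operation.

  #include <linux/mtd/rawnand.h>

  static int demo_read_id(struct nand_chip *chip, unsigned int cs)
  {
          u8 id[2];
          int ret;

          nand_select_target(chip, cs);   /* instead of chip->select_chip() */
          ret = nand_readid_op(chip, 0, id, sizeof(id));
          nand_deselect_target(chip);

          return ret;
  }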
2018-12-07  mtd: rawnand: Remove unused NAND_CONTROLLER_ALLOC flag  (Boris Brezillon)
Looks like NAND_CONTROLLER_ALLOC has been introduced a long time ago back when the dummy nand_hw_ctrl object was dynamically allocated instead of being embedded in nand_chip. We can safely get rid of this unused flag. Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com> Tested-by: Janusz Krzysztofik <jmkrzyszt@gmail.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>