2019-04-10  s390/zcrypt: fix possible deadlock situation on ap queue remove (Harald Freudenberger)
With commit 01396a374c3d ("s390/zcrypt: revisit ap device remove procedure") the ap queue remove is now a two-stage process. However, a del_timer_sync() call may trigger the timer function, which may try to take the very same spinlock already held by the function that initiated the del_timer_sync() call. This can end up in a deadlock. Very unlikely but possible, as an ap queue has to be removed at the exact same time a request times out. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reported-by: Pierre Morel <pmorel@linux.ibm.com> Fixes: 01396a374c3d ("s390/zcrypt: revisit ap device remove procedure") Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
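A minimal sketch of the hazard and the usual cure (aq, aq->lock and aq->timeout are illustrative names, not the driver's actual symbols): del_timer_sync() must never be called under a lock that the timer callback itself takes.

    /* Deadlock: the timer callback takes aq->lock, so waiting for it
     * to finish while holding aq->lock can never complete.
     * Drop the lock first. */
    spin_lock_bh(&aq->lock);
    /* ... state tear-down that genuinely needs the lock ... */
    spin_unlock_bh(&aq->lock);

    /* Safe: a pending callback can now acquire aq->lock and finish. */
    del_timer_sync(&aq->timeout);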
2019-04-10  s390/3270: fix lockdep false positive on view->lock (Martin Schwidefsky)
The spinlock in the raw3270_view structure is used by con3270, tty3270 and fs3270 in different ways. For con3270 the lock can be acquired in irq context, for tty3270 and fs3270 the highest context is bh. Lockdep sees the view->lock as a single class and if the 3270 driver is used for the console the following message is generated:

  WARNING: inconsistent lock state
  5.1.0-rc3-05157-g5c168033979d #12 Not tainted
  --------------------------------
  inconsistent {IN-HARDIRQ-W} -> {HARDIRQ-ON-W} usage.
  swapper/0/1 [HC0[0]:SC1[1]:HE1:SE0] takes:
  (____ptrval____) (&(&view->lock)->rlock){?.-.}, at: tty3270_update+0x7c/0x330

Introduce a lockdep subclass for the view lock to distinguish bh from irq locks. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
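A sketch of the lockdep-subclass technique described above; the enum names are illustrative assumptions, not necessarily the driver's actual constants:

    /* one subclass per usage pattern of view->lock (names assumed) */
    enum raw3270_view_lock_subclass {
        RAW3270_VIEW_LOCK_IRQ,  /* con3270: may be taken in irq context */
        RAW3270_VIEW_LOCK_BH,   /* tty3270/fs3270: bh context at most */
    };

    spin_lock_init(&view->lock);
    /* tell lockdep the bh-only users form a separate lock class */
    lockdep_set_subclass(&view->lock, RAW3270_VIEW_LOCK_BH);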
2019-04-10  scsi: virtio_scsi: limit number of hw queues by nr_cpu_ids (Dongli Zhang)
When tag_set->nr_maps is 1, the block layer limits the number of hw queues by nr_cpu_ids. As virtio-scsi has (tag_set->nr_maps == 1), it can use at most nr_cpu_ids hw queues no matter how many are configured. In addition, specifically for the PCI scenario, when the 'num_queues' specified by qemu is more than maxcpus, virtio-scsi would not be able to allocate more than maxcpus vectors in order to have a vector for each queue. As a result, it falls back to MSI-X with one vector for config and one shared for queues. Considering the above reasons, this patch limits the number of hw queues used by virtio-scsi by nr_cpu_ids. Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-04-10  virtio-blk: limit number of hw queues by nr_cpu_ids (Dongli Zhang)
When tag_set->nr_maps is 1, the block layer limits the number of hw queues by nr_cpu_ids. As virtio-blk has (tag_set->nr_maps == 1), it can use at most nr_cpu_ids hw queues no matter how many are configured. In addition, specifically for the PCI scenario, when the 'num-queues' specified by qemu is more than maxcpus, virtio-blk would not be able to allocate more than maxcpus vectors in order to have a vector for each queue. As a result, it falls back to MSI-X with one vector for config and one shared for queues. Considering the above reasons, this patch limits the number of hw queues used by virtio-blk by nr_cpu_ids. Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
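Both this and the virtio-scsi change above boil down to the same one-line clamp; a sketch (num_vqs is an illustrative variable name):

    /* never create more hw queues than possible CPUs; this mirrors
     * the cap the block layer applies when tag_set->nr_maps == 1 */
    num_vqs = min_t(unsigned int, nr_cpu_ids, num_vqs);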
2019-04-10  block, bfq: fix use after free in bfq_bfqq_expire (Paolo Valente)
The function bfq_bfqq_expire() invokes the function __bfq_bfqq_expire(), and the latter may free the in-service bfq-queue. If this happens, then no other instruction of bfq_bfqq_expire() must be executed, or a use-after-free will occur. Based on the assumption that __bfq_bfqq_expire() invokes bfq_put_queue() on the in-service bfq-queue exactly once, the queue is assumed to be freed if its refcounter is equal to one right before invoking __bfq_bfqq_expire(). But since commit 9dee8b3b057e ("block, bfq: fix queue removal from weights tree") this assumption is false: __bfq_bfqq_expire() may also invoke bfq_weights_tree_remove(), and since that commit the latter function may itself invoke bfq_put_queue(). So __bfq_bfqq_expire() may invoke bfq_put_queue() twice, and this is the actual case where the in-service queue may happen to be freed. To address this issue, this commit moves the check on the refcounter of the queue to right around the last bfq_put_queue() that may be invoked on the queue. Fixes: 9dee8b3b057e ("block, bfq: fix queue removal from weights tree") Reported-by: Dmitrii Tcvetkov <demfloro@demfloro.ru> Reported-by: Douglas Anderson <dianders@chromium.org> Tested-by: Dmitrii Tcvetkov <demfloro@demfloro.ru> Tested-by: Douglas Anderson <dianders@chromium.org> Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-04-10  ALSA: hda: Fix racy display power access (Takashi Iwai)
snd_hdac_display_power() doesn't handle concurrent calls carefully enough, and it may lead to double get_power or put_power calls when a runtime PM and an async work get called in a racy way. This patch addresses it by reusing the bus->lock mutex that has been used for protecting the link state change in the ext bus code, so that it can protect against racy display state changes. The initialization of bus->lock was moved from snd_hdac_ext_bus_init() to snd_hdac_bus_init() accordingly. Testcase: igt/i915_pm_rpm/module-reload #glk-dsi Reported-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Imre Deak <imre.deak@intel.com> Signed-off-by: Takashi Iwai <tiwai@suse.de>
2019-04-10  alarmtimer: Return correct remaining time (Andrei Vagin)
To calculate the remaining time, the current time must be subtracted from the expiration time. In alarm_timer_remaining() the arguments of ktime_sub() are swapped. Fixes: d653d8457c76 ("alarmtimer: Implement remaining callback") Signed-off-by: Andrei Vagin <avagin@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Mukesh Ojha <mojha@codeaurora.org> Cc: Stephen Boyd <sboyd@kernel.org> Cc: John Stultz <john.stultz@linaro.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20190408041542.26338-1-avagin@gmail.com
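A sketch of the fixed helper, consistent with the description above (remaining = expiration - now, not the reverse); the exact surrounding code may differ:

    static ktime_t alarm_timer_remaining(struct k_itimer *timr, ktime_t now)
    {
        struct alarm *alarm = &timr->it.alarm.alarmtimer;

        /* remaining = expiration time minus current time */
        return ktime_sub(alarm->node.expires, now);
    }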
2019-04-10  mac80211: fix RX STBC override byte order (Johannes Berg)
The original patch neglected to take byte order conversions into account; fix that. Fixes: d9bb410888ce ("mac80211: allow overriding HT STBC capabilities") Signed-off-by: Johannes Berg <johannes.berg@intel.com> Reviewed-by: Sergey Matyukevich <sergey.matyukevich.os@quantenna.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com>
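A hedged illustration of the kind of fix involved (ht_capa and override_cap are assumptions, not the actual patch variables): HT capability fields are little-endian on the wire, so masks must be applied in CPU byte order and converted back.

    u16 cap = le16_to_cpu(ht_capa->cap_info);

    /* apply the RX STBC override in CPU order, then convert back */
    cap &= ~IEEE80211_HT_CAP_RX_STBC;
    cap |= override_cap & IEEE80211_HT_CAP_RX_STBC;
    ht_capa->cap_info = cpu_to_le16(cap);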
2019-04-10  locking/lockdep: Zap lock classes even with lock debugging disabled (Bart Van Assche)
The following commit: a0b0fd53e1e6 ("locking/lockdep: Free lock classes that are no longer in use") changed the behavior of lockdep_free_key_range() from unconditionally zapping lock classes into only zapping lock classes if debug_lock == true. Not zapping lock classes if debug_lock == false leaves dangling pointers in several lockdep data structures, e.g. lock_class::name in the all_lock_classes list. The shell command "cat /proc/lockdep" causes the kernel to iterate the all_lock_classes list. Hence the "unable to handle kernel paging request" crash that Shenghui encountered by running cat /proc/lockdep. Since the new behavior can cause cat /proc/lockdep to crash, restore the pre-v5.1 behavior. This patch avoids that cat /proc/lockdep triggers the following crash with debug_lock == false:

  BUG: unable to handle kernel paging request at fffffbfff40ca448
  RIP: 0010:__asan_load1+0x28/0x50
  Call Trace:
   string+0xac/0x180
   vsnprintf+0x23e/0x820
   seq_vprintf+0x82/0xc0
   seq_printf+0x92/0xb0
   print_name+0x34/0xb0
   l_show+0x184/0x200
   seq_read+0x59e/0x6c0
   proc_reg_read+0x11f/0x170
   __vfs_read+0x4d/0x90
   vfs_read+0xc5/0x1f0
   ksys_read+0xab/0x130
   __x64_sys_read+0x43/0x50
   do_syscall_64+0x71/0x210
   entry_SYSCALL_64_after_hwframe+0x49/0xbe

Reported-by: shenghui <shhuiw@foxmail.com> Signed-off-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Waiman Long <longman@redhat.com> Cc: Will Deacon <will.deacon@arm.com> Fixes: a0b0fd53e1e6 ("locking/lockdep: Free lock classes that are no longer in use") # v5.1-rc1. Link: https://lkml.kernel.org/r/20190403233552.124673-1-bvanassche@acm.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-10  ASoC: pcm: fix error handling when try_module_get() fails (Ranjani Sridharan)
Handle the error before returning when try_module_get() fails, to prevent an inconsistent mutex lock/unlock. Fixes: 52034add7 (ASoC: pcm: update module refcount if module_get_upon_open is set) Signed-off-by: Ranjani Sridharan <ranjani.sridharan@linux.intel.com> Signed-off-by: Mark Brown <broonie@kernel.org>
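A sketch of the corrected error path with assumed names (component, pcm_mutex): the point is that the mutex taken earlier in the open path must be released before returning.

    if (!try_module_get(component->dev->driver->owner)) {
        ret = -ENODEV;
        goto module_err;        /* don't return with the mutex held */
    }
    /* ... */
    module_err:
        mutex_unlock(&pcm_mutex);
        return ret;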
2019-04-10  apparmor: Restore Y/N in /sys for apparmor's "enabled" (Kees Cook)
Before commit c5459b829b71 ("LSM: Plumb visibility into optional "enabled" state"), /sys/module/apparmor/parameters/enabled would show "Y" or "N" since it was using the "bool" handler. After being changed to "int", this switched to "1" or "0", breaking the userspace AppArmor detection of dbus-broker. This restores the Y/N output while keeping the LSM infrastructure happy.

Before:

  $ cat /sys/module/apparmor/parameters/enabled
  1

After:

  $ cat /sys/module/apparmor/parameters/enabled
  Y

Reported-by: David Rheinsberg <david.rheinsberg@gmail.com> Reviewed-by: David Rheinsberg <david.rheinsberg@gmail.com> Link: https://lkml.kernel.org/r/CADyDSO6k8vYb1eryT4g6+EHrLCvb68GAbHVWuULkYjcZcYNhhw@mail.gmail.com Fixes: c5459b829b71 ("LSM: Plumb visibility into optional "enabled" state") Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: John Johansen <john.johansen@canonical.com>
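An illustrative sketch of how an int-backed module parameter can present Y/N again via custom kernel_param_ops (function and ops names here are assumptions; the referenced setter is assumed to exist and parse Y/N/1/0 into the int):

    static int param_get_aaintbool(char *buffer, const struct kernel_param *kp)
    {
        /* render the int exactly as the "bool" handler would: Y or N */
        return sprintf(buffer, "%c", *(int *)kp->arg ? 'Y' : 'N');
    }

    static const struct kernel_param_ops param_ops_aaintbool = {
        .set = param_set_aaintbool,    /* assumed setter, accepts Y/N/1/0 */
        .get = param_get_aaintbool,
    };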
2019-04-10  ASoC: stm32: sai: fix master clock management (Olivier Moysan)
When the master clock is used, the master clock rate is set exclusively. Parent clocks of the master clock cannot be changed after a call to clk_set_rate_exclusive(), so the parent clock of the SAI kernel clock must be set beforehand. Also ensure that exclusive rate operations are balanced in the STM32 SAI driver. Signed-off-by: Olivier Moysan <olivier.moysan@st.com> Signed-off-by: Mark Brown <broonie@kernel.org>
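A sketch of the ordering constraint, with assumed clock handles (sai->sai_ck for the kernel clock, sai->sai_mclk for the master clock): reparent first, then claim the exclusive rate, and balance every claim with a put.

    /* the parent must be chosen before the rate is locked down */
    ret = clk_set_parent(sai->sai_ck, parent);
    if (ret)
        return ret;

    ret = clk_set_rate_exclusive(sai->sai_mclk, mclk_rate);
    if (ret)
        return ret;

    /* ... later, on shutdown or error, balance the exclusive claim */
    clk_rate_exclusive_put(sai->sai_mclk);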
2019-04-10  ASoC: Intel: kbl: fix wrong number of channels (Tzung-Bi Shih)
Fix a wrong setting of the number of channels: the context wants to constrain the stream to 2 channels, not 4. Signed-off-by: Tzung-Bi Shih <tzungbi@google.com> Acked-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> Signed-off-by: Mark Brown <broonie@kernel.org>
2019-04-10  x86/perf/amd: Remove need to check "running" bit in NMI handler (Lendacky, Thomas)
Spurious interrupt support was added to perf in the following commit, almost a decade ago:

  63e6be6d98e1 ("perf, x86: Catch spurious interrupts after disabling counters")

The two previous patches (resolving the race condition when disabling a PMC and NMI latency mitigation) allow for the removal of this older spurious interrupt support. Currently in x86_pmu_stop(), the bit for the PMC in the active_mask bitmap is cleared before disabling the PMC, which sets up a race condition. This race condition was mitigated by introducing the running bitmap. That race condition can be eliminated by first disabling the PMC, waiting for PMC reset on overflow and then clearing the bit for the PMC in the active_mask bitmap. The NMI handler will not re-enable a disabled counter. If x86_pmu_stop() is called from the perf NMI handler, the NMI latency mitigation support will guard against any unhandled NMI messages. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: <stable@vger.kernel.org> # 4.14.x- Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Link: https://lkml.kernel.org/r/Message-ID: Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-10  MAINTAINERS: Fix the I3C entry (Boris Brezillon)
There's no include/dt-bindings/i3c/ directory, remove this F: entry from the I3C file patterns. Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Joe Perches <joe@perches.com> Reported-by: Joe Perches <joe@perches.com> Fixes: 4f26d0666961 ("MAINTAINERS: Add myself as the I3C subsystem maintainer") Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
2019-04-10  i3c: dw: Fix dw_i3c_master_disable controller by using correct mask (Vitor Soares)
The controller was being disabled incorrectly; the correct way is to clear the DEV_CTRL_ENABLE bit, so do exactly that. Cc: Boris Brezillon <bbrezillon@kernel.org> Cc: <stable@vger.kernel.org> Fixes: 1dd728f5d4d4 ("i3c: master: Add driver for Synopsys DesignWare IP") Signed-off-by: Vitor Soares <vitor.soares@synopsys.com> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
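The corrected disable path is essentially a read-modify-write that clears one bit; a sketch along the lines of the description (register and mask names as referenced above):

    static void dw_i3c_master_disable(struct dw_i3c_master *master)
    {
        /* clear only the enable bit, preserving the rest of DEVICE_CTRL */
        writel(readl(master->regs + DEVICE_CTRL) & ~DEV_CTRL_ENABLE,
               master->regs + DEVICE_CTRL);
    }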
2019-04-10  i3c: Fix the verification of random PID (Vitor Soares)
The validation of a random PID should be done by checking boardinfo->pid instead of info.pid, which is empty. With this change, the info structure declaration is no longer necessary. Cc: Boris Brezillon <bbrezillon@kernel.org> Cc: <stable@vger.kernel.org> Fixes: 3a379bbcea0a ("i3c: Add core I3C infrastructure") Signed-off-by: Vitor Soares <vitor.soares@synopsys.com> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
2019-04-10  locking/rwsem: Optimize rwsem structure for uncontended lock acquisition (Waiman Long)
For an uncontended rwsem, count and owner are the only fields a task needs to touch when acquiring the rwsem. So they are put next to each other to increase the chance that they will share the same cacheline. On a ThunderX2 99xx (arm64) system with 32K L1 cache and 256K L2 cache, a rwsem locking microbenchmark with one locking thread was run to write-lock and write-unlock an array of rwsems separated 2 cachelines apart in a 1M byte memory block. The locking rates (kops/s) of the microbenchmark when the rwsems are at various "long" (8-byte) offsets from the beginning of the cacheline before and after the patch were as follows:

  Cacheline Offset   Pre-patch   Post-patch
  ----------------   ---------   ----------
         0             17,449      16,588
         1             17,450      16,465
         2             17,450      16,460
         3             17,453      16,462
         4             14,867      16,471
         5             14,867      16,470
         6             14,853      16,464
         7             14,867      13,172

Before the patch, count and owner are 4 "long"s apart. After the patch, they are only 1 "long" apart. The rwsem data have to be loaded from the L3 cache for each access. It can be seen that the locking rates are more consistent after the patch than before. Note that for this particular system, the performance drop happens whenever count and owner are at odd multiples of "long"s apart. No performance drop was observed when only a single rwsem was used (hot cache), so the drop is more likely an idiosyncrasy of the cache architecture of this chip than an inherent problem with the patch. Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Waiman Long <longman@redhat.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tim Chen <tim.c.chen@linux.intel.com> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/20190404174320.22416-12-longman@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
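A sketch of the resulting layout (the field set is simplified and illustrative): the point is only that count and owner become adjacent.

    struct rw_semaphore {
        atomic_long_t count;            /* hot: touched on every acquire */
    #ifdef CONFIG_RWSEM_SPIN_ON_OWNER
        struct task_struct *owner;      /* hot: moved up, next to count */
        struct optimistic_spin_queue osq;
    #endif
        raw_spinlock_t wait_lock;
        struct list_head wait_list;
    };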
2019-04-10  locking/rwsem: Enable lock event counting (Waiman Long)
Add lock event counting calls so that we can track the number of lock events happening in the rwsem code. With CONFIG_LOCK_EVENT_COUNTS on and booting a 4-socket 112-thread x86-64 system, the rwsem counts after system bootup were as follows:

  rwsem_opt_fail=261
  rwsem_opt_wlock=50636
  rwsem_rlock=445
  rwsem_rlock_fail=0
  rwsem_rlock_fast=22
  rwsem_rtrylock=810144
  rwsem_sleep_reader=441
  rwsem_sleep_writer=310
  rwsem_wake_reader=355
  rwsem_wake_writer=2335
  rwsem_wlock=261
  rwsem_wlock_fail=0
  rwsem_wtrylock=20583

It can be seen that most of the lock acquisitions in the slowpath were write-locks in the optimistic spinning code path with no sleeping at all. For this system, over 97% of the locks are acquired via optimistic spinning. It illustrates the importance of optimistic spinning in improving the performance of rwsem. Signed-off-by: Waiman Long <longman@redhat.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: Davidlohr Bueso <dbueso@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tim Chen <tim.c.chen@linux.intel.com> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/20190404174320.22416-11-longman@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-10  locking/lock_events: Don't show pvqspinlock events on bare metal (Waiman Long)
On bare metal, the pvqspinlock event counts will always be 0, so there is no point in showing their corresponding debugfs files; they are therefore skipped in this case. Signed-off-by: Waiman Long <longman@redhat.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: Davidlohr Bueso <dbueso@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tim Chen <tim.c.chen@linux.intel.com> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/20190404174320.22416-10-longman@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-10  locking/lock_events: Make lock_events available for all archs & other locks (Waiman Long)
The QUEUED_LOCK_STAT option to report queued spinlock event counts was previously allowed only on the x86 architecture. To make the locking event counting code more useful, it is now renamed to the more generic LOCK_EVENT_COUNTS config option. This new option will be available to all the architectures that use qspinlock at the moment. Other locking code can now start to use the generic locking event counting code by including lock_events.h and putting the new locking event names into the lock_events_list.h header file. My experience with lock event counting is that it gives valuable insight on how the locking code works and what can be done to make it better. I would like to extend this benefit to other locking code like mutex and rwsem in the near future. The PV qspinlock specific code will stay in qspinlock_stat.h. The locking event counters will now reside in the <debugfs>/lock_event_counts directory. Signed-off-by: Waiman Long <longman@redhat.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: Davidlohr Bueso <dbueso@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tim Chen <tim.c.chen@linux.intel.com> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/20190404174320.22416-9-longman@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-10  locking/qspinlock_stat: Introduce generic lockevent_*() counting APIs (Waiman Long)
The percpu event counts used by qspinlock code can be useful for other locking code as well. So a new set of lockevent_* counting APIs is introduced with the lock event names extracted out into the new lock_events_list.h header file for easier addition in the future. The existing qstat_inc() calls are replaced by either lockevent_inc() or lockevent_cond_inc() calls. The qstat_hop() call is renamed to lockevent_pv_hop(). The "reset_counters" debugfs file is also renamed to ".reset_counts". Signed-off-by: Waiman Long <longman@redhat.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: Davidlohr Bueso <dbueso@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tim Chen <tim.c.chen@linux.intel.com> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/20190404174320.22416-8-longman@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
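A sketch of the generic API shape: events are declared once in lock_events_list.h and bumped from the locking code. Event names are taken from the surrounding commits; the exact macro layout is an assumption.

    /* lock_events_list.h */
    LOCK_EVENT(rwsem_sleep_reader)  /* # of reader sleeps */
    LOCK_EVENT(rwsem_sleep_writer)  /* # of writer sleeps */

    /* in the slowpath code */
    lockevent_inc(rwsem_sleep_reader);              /* unconditional count */
    lockevent_cond_inc(rwsem_opt_fail, !taken);     /* count when cond is true */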
2019-04-10  locking/rwsem: Enhance DEBUG_RWSEMS_WARN_ON() macro (Waiman Long)
Currently, the DEBUG_RWSEMS_WARN_ON() macro just dumps a stack trace when the rwsem isn't in the right state. It does not show the actual states of the rwsem. This may not be that helpful in the debugging process. Enhance the DEBUG_RWSEMS_WARN_ON() macro to also show the current content of the rwsem count and owner fields to give more information about what is wrong with the rwsem. The debug_locks_off() function is called as is done inside DEBUG_LOCKS_WARN_ON(). Signed-off-by: Waiman Long <longman@redhat.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: Davidlohr Bueso <dbueso@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tim Chen <tim.c.chen@linux.intel.com> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/20190404174320.22416-7-longman@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
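A sketch of the enhanced macro, following the description above (format details may differ from the actual patch):

    #define DEBUG_RWSEMS_WARN_ON(c, sem)                                 \
    do {                                                                 \
        if (WARN_ONCE(c, "DEBUG_RWSEMS_WARN_ON(%s): count = 0x%lx, "     \
                      "owner = 0x%lx\n", #c,                             \
                      atomic_long_read(&(sem)->count),                   \
                      (long)READ_ONCE((sem)->owner)))                    \
                debug_locks_off();                                       \
    } while (0)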
2019-04-10  locking/rwsem: Add debug check for __down_read*() (Waiman Long)
When rwsem_down_read_failed*() return, the read lock is acquired indirectly by others. So debug checks are added in __down_read() and __down_read_killable() to make sure the rwsem is really reader-owned. The other debug check calls in kernel/locking/rwsem.c except the one in up_read_non_owner() are also moved over to rwsem-xadd.h. Signed-off-by: Waiman Long <longman@redhat.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: Davidlohr Bueso <dbueso@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tim Chen <tim.c.chen@linux.intel.com> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/20190404174320.22416-6-longman@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-10  locking/rwsem: Micro-optimize rwsem_try_read_lock_unqueued() (Waiman Long)
The atomic_long_cmpxchg_acquire() in rwsem_try_read_lock_unqueued() is replaced by atomic_long_try_cmpxchg_acquire() to simplify the code and generate slightly better assembly code. There is no functional change. Signed-off-by: Waiman Long <longman@redhat.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Davidlohr Bueso <dbueso@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tim Chen <tim.c.chen@linux.intel.com> Link: http://lkml.kernel.org/r/20190404174320.22416-5-longman@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
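A sketch of the pattern change (the exact lock-state test is illustrative; the bias constant is from the rwsem-xadd era): atomic_long_try_cmpxchg_acquire() refreshes 'count' on failure, so the retry loop no longer re-reads it by hand.

    long count = atomic_long_read(&sem->count);

    do {
        if (count < 0)          /* write-locked, give up */
            return false;
        /* on failure, 'count' is updated with the current value */
    } while (!atomic_long_try_cmpxchg_acquire(&sem->count, &count,
                                              count + RWSEM_ACTIVE_READ_BIAS));
    return true;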
2019-04-10  locking/rwsem: Move rwsem internal function declarations to rwsem-xadd.h (Waiman Long)
We don't need to expose rwsem internal functions which are not supposed to be called directly from other kernel code. Signed-off-by: Waiman Long <longman@redhat.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Davidlohr Bueso <dbueso@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tim Chen <tim.c.chen@linux.intel.com> Link: http://lkml.kernel.org/r/20190404174320.22416-4-longman@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-10  locking/rwsem: Move owner setting code from rwsem.c to rwsem.h (Waiman Long)
Move all the owner setting code closer to the rwsem-xadd fast paths, directly within the rwsem.h file, as well as in the slowpaths where owner setting is done after acquiring the lock. This will enable us to add a DEBUG_RWSEMS check in a later patch to make sure that the read lock is really acquired when rwsem_down_read_failed() returns, for instance. Signed-off-by: Waiman Long <longman@redhat.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: Davidlohr Bueso <dbueso@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tim Chen <tim.c.chen@linux.intel.com> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/20190404174320.22416-3-longman@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-10  locking/rwsem: Relocate rwsem_down_read_failed() (Waiman Long)
The rwsem_down_read_failed*() functions were relocated from above the optimistic spinning section to below that section. This enables the reader functions to use optimistic spinning in future patches. There is no code change. Signed-off-by: Waiman Long <longman@redhat.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Davidlohr Bueso <dbueso@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tim Chen <tim.c.chen@linux.intel.com> Link: http://lkml.kernel.org/r/20190404174320.22416-2-longman@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-10  x86/Kconfig: Remove the unused X86_DMA_REMAP KConfig symbol (Christoph Hellwig)
Signed-off-by: Christoph Hellwig <hch@lst.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20190410080220.21705-2-hch@lst.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-10  PM / core: Introduce dpm_async_fn() helper (Yangtao Li)
When we want to execute device PM functions asynchronously, we do the following for the device:

 1) reinit_completion(&dev->power.completion);
 2) check whether the device enables asynchronous suspend;
 3) if so, execute the corresponding function asynchronously.

These operations are repeated in several places and can be factored out, so introduce dpm_async_fn() (sketched below) for better code readability and reuse, and use it to do some cleanup. Signed-off-by: Yangtao Li <tiny.windzz@gmail.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
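A sketch of the helper following the three steps above (is_async() is assumed to be the existing check on power.async_suspend and pm_async_enabled):

    static bool dpm_async_fn(struct device *dev, async_func_t func)
    {
        reinit_completion(&dev->power.completion);

        if (is_async(dev)) {
            get_device(dev);
            async_schedule(func, dev);
            return true;    /* caller skips the synchronous path */
        }

        return false;
    }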
2019-04-10  cpufreq: boost: Remove CONFIG_CPU_FREQ_BOOST_SW Kconfig option (Yue Hu)
Commit 2fb4719b2560 ("cpufreq / boost: Kconfig: Support for software-managed BOOST") added the CONFIG_CPU_FREQ_BOOST_SW config option. However, the EXYNOS-based cpufreq drivers have since been removed in favor of the cpufreq-dt driver, which sets the boost attribute if required. So let's remove this option and update cpufreq_generic_attr[]. Signed-off-by: Yue Hu <huyue2@yulong.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2019-04-10  x86/asm: Modernize sync_bitops.h (Jan Beulich)
Add missing instruction suffixes and use rmwcc.h, just as was (more or less) recently done for bitops.h as well; see:

  22636f8c9511: x86/asm: Add instruction suffixes to bitops
  288e4521f0f6: x86/asm: 'Simplify' GEN_*_RMWcc() macros

No change in functionality intended. Signed-off-by: Jan Beulich <jbeulich@suse.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Juergen Gross <jgross@suse.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/5C9B93870200007800222289@prv1-mh.provo.novell.com [ Cleaned up the changelog a bit. ] Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-10  sched/topology: Skip duplicate group rewrites in build_sched_groups() (Valentin Schneider)
While staring at build_sched_domains(), I realized that get_group() does several duplicate (thus useless) writes. If you take the Arm Juno r0 (LITTLEs = [0, 3, 4, 5], bigs = [1, 2]), the sched_group build flow would look like this ('MC[cpu]->sg' means 'per_cpu_ptr(&tl->data->sg, cpu)' with 'tl == MC'):

  build_sched_groups(MC[CPU0]->sd, CPU0)
    get_group(0) -> MC[CPU0]->sg
    get_group(3) -> MC[CPU3]->sg
    get_group(4) -> MC[CPU4]->sg
    get_group(5) -> MC[CPU5]->sg

  build_sched_groups(DIE[CPU0]->sd, CPU0)
    get_group(0) -> DIE[CPU0]->sg
    get_group(1) -> DIE[CPU1]->sg <=================+
                                                    |
  build_sched_groups(MC[CPU1]->sd, CPU1)            |
    get_group(1) -> MC[CPU1]->sg                    |
    get_group(2) -> MC[CPU2]->sg                    |
                                                    |
  build_sched_groups(DIE[CPU1]->sd, CPU1)           ^
    get_group(1) -> DIE[CPU1]->sg  } We've set up these two up here!
    get_group(3) -> DIE[CPU0]->sg  }

From this point on, we will only use sched_groups that have been previously visited & initialized. The only new operation is which group pointer we assign to sd->groups. On the Juno r0 we get 32 get_group() calls, every single one of them writing to a sched_group->cpumask. However, all of the data structures we need are set up after 8 visits (see above). Return early from get_group() if we've already visited (and thus initialized) the sched_group we're looking at. Overlapping domains are not affected, as they do not use build_sched_groups(). Tested on a Juno and a 2 * (Xeon E5-2690) system. (FWIW I initially checked the refs for both sg && sg->sgc, but figured if they weren't both 0 or > 1 then something must have gone wrong, so I threw in a WARN_ON().) No change in functionality intended. Signed-off-by: Valentin Schneider <valentin.schneider@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
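A sketch of the early return added to get_group(), matching the WARN_ON() reasoning in the parenthetical above:

    bool already_visited;

    /* sg and sgc refs should rise in lockstep; a mismatch means an
     * earlier visit went wrong */
    already_visited = atomic_inc_return(&sg->ref) > 1;
    WARN_ON(already_visited != (atomic_inc_return(&sg->sgc->ref) > 1));

    /* already visited means already fully initialized: nothing to redo */
    if (already_visited)
        return sg;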
2019-04-10  sched/topology: Fix build_sched_groups() comment (Valentin Schneider)
The comment was introduced (pre-2.6.12) by: 8a7a2318dc07 ("[PATCH] sched: consolidate sched domains") and referred to sched_group->cpu_power. This was folded into sched_group->sched_group_power in commit 9c3f75cbd144 ("sched: Break out cpu_power from the sched_group structure"). The comment was then updated in: ced549fa5fc1 ("sched: Remove remaining dubious usage of "power"") but should have replaced "sg->cpu_capacity" with "sg->sched_group_capacity". Do that now. Signed-off-by: Valentin Schneider <valentin.schneider@arm.com> Cc: Dietmar.Eggemann@arm.com Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: morten.rasmussen@arm.com Cc: qais.yousef@arm.com Link: http://lkml.kernel.org/r/20190409173546.4747-3-valentin.schneider@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-10  Merge branch 'lkmm-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into locking/core (Ingo Molnar)
Pull various memory-model (LKMM) updates from Paul E. McKenney. Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-10  Merge branch 'linus' into locking/core, to pick up fixes (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-10  OPP: Introduce dev_pm_opp_find_freq_ceil_by_volt() (Andrew-sh.Cheng)
This patch introduces a new helper routine in the OPP core that returns the OPP with the highest frequency whose voltage is less than or equal to the target voltage passed to the helper. Signed-off-by: Andrew-sh.Cheng <andrew-sh.cheng@mediatek.com> [ Viresh: Massaged the commit log and renamed the helper with some cleanups. ] Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2019-04-10  powerpc/mm: Define MAX_PHYSMEM_BITS for all 64-bit configs (Michael Ellerman)
The recent commit 8bc086899816 ("powerpc/mm: Only define MAX_PHYSMEM_BITS in SPARSEMEM configurations") removed our definition of MAX_PHYSMEM_BITS when SPARSEMEM is disabled. This inadvertently broke some 64-bit FLATMEM using configs with eg:

  arch/powerpc/include/asm/book3s/64/mmu-hash.h:584:6: error: "MAX_PHYSMEM_BITS" is not defined, evaluates to 0
   #if (MAX_PHYSMEM_BITS > MAX_EA_BITS_PER_CONTEXT)
        ^~~~~~~~~~~~~~~~

Fix it by making sure we define MAX_PHYSMEM_BITS for all 64-bit configs regardless of SPARSEMEM. Fixes: 8bc086899816 ("powerpc/mm: Only define MAX_PHYSMEM_BITS in SPARSEMEM configurations") Reported-by: Andreas Schwab <schwab@linux-m68k.org> Reported-by: Hugh Dickins <hughd@google.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-04-09  Bluetooth: btusb: request wake pin with NOAUTOEN (Brian Norris)
Badly-designed systems might have (for example) active-high wake pins that default to high (e.g., because of external pull ups) until they have an active firmware which starts driving it low. This can cause an interrupt storm in the time between request_irq() and disable_irq(). We don't support shared interrupts here, so let's just pre-configure the interrupt to avoid auto-enabling it. Fixes: fd913ef7ce61 ("Bluetooth: btusb: Add out-of-band wakeup support") Fixes: 5364a0b4f4be ("arm64: dts: rockchip: move QCA6174A wakeup pin into its USB node") Signed-off-by: Brian Norris <briannorris@chromium.org> Reviewed-by: Matthias Kaehlcke <mka@chromium.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
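A sketch of the request pattern described above (handler, dev, data and the "wake-irq" name are placeholders):

    /* keep the IRQ masked across request_irq() ... */
    irq_set_status_flags(irq, IRQ_NOAUTOEN);

    ret = devm_request_irq(dev, irq, handler, irqflags,
                           "wake-irq", data);
    if (ret)
        return ret;

    /* ... and unmask it only once wake events are actually wanted */
    enable_irq(irq);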
2019-04-09  dt-bindings: clock: sifive: add FU540-C000 PRCI clock constants (Paul Walmsley)
Add preprocessor macros for the important PRCI output clocks that are needed by both the FU540 PRCI driver and DT data. Details are available in the FU540 manual in Chapter 7 of https://static.dev.sifive.com/FU540-C000-v1.0.pdf Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
2019-04-09  Merge tag 'mips_fixes_5.1_2' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux (Linus Torvalds)
Pull MIPS fixes from Paul Burton: "A few minor MIPS fixes:

 - Provide struct pt_regs * from get_irq_regs() to kgdb_nmicallback() when handling an IPI triggered by kgdb_roundup_cpus(), matching the behavior of other architectures & resolving kgdb issues for SMP systems.

 - Defer a pointer dereference until after a NULL check in the irq_shutdown callback for SGI IP27 HUB interrupts.

 - A defconfig update for the MSCC Ocelot to enable some necessary drivers"

* tag 'mips_fixes_5.1_2' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
  MIPS: generic: Add switchdev, pinctrl and fit to ocelot_defconfig
  MIPS: SGI-IP27: Fix use of unchecked pointer in shutdown_bridge_irq
  MIPS: KGDB: fix kgdb support for SMP platforms.
2019-04-09  Merge branch 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs (Linus Torvalds)
Pull misc fixes from Al Viro: "A few regression fixes from this cycle"

* 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  aio: use kmem_cache_free() instead of kfree()
  iov_iter: Fix build error without CONFIG_CRYPTO
  aio: Fix an error code in __io_submit_one()
2019-04-10  power: supply: cpcap-battery: Fix division by zero (Tony Lindgren)
If called fast enough so that the samples do not increment, we can get a division by zero in the kernel:

  __div0
  cpcap_battery_cc_raw_div
  cpcap_battery_get_property
  power_supply_get_property.part.1
  power_supply_get_property
  power_supply_show_property
  power_supply_uevent

Fixes: 874b2adbed12 ("power: supply: cpcap-battery: Add a battery driver") Signed-off-by: Tony Lindgren <tony@atomide.com> Acked-by: Pavel Machek <pavel@ucw.cz> Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com>
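A sketch of the guard (variable names are illustrative): check the divisor before dividing.

    /* samples may not have advanced between two back-to-back reads */
    if (!divider)
        return 0;

    value = div_s64(accumulator, divider);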
2019-04-10  cpuidle: Export the next timer expiration for CPUs (Ulf Hansson)
To be able to predict the sleep duration for a CPU entering idle, it is essential to know the expiration time of the next timer. Both the teo and the menu cpuidle governors already use this information for CPU idle state selection. Moving forward, a similar prediction needs to be made for a group of idle CPUs rather than for a single one and the following changes implement a new genpd governor for that purpose. In order to support that feature, add a new function called tick_nohz_get_next_hrtimer() that will return the next hrtimer expiration time of a given CPU to be invoked after deciding whether or not to stop the scheduler tick on that CPU. Make the cpuidle core call tick_nohz_get_next_hrtimer() right before invoking the ->enter() callback provided by the cpuidle driver for the given state and store its return value in the per-CPU struct cpuidle_device, so as to make it available to code outside of cpuidle. Note that at the point when cpuidle calls tick_nohz_get_next_hrtimer(), the governor's ->select() callback has already returned and indicated whether or not the tick should be stopped, so in fact the value returned by tick_nohz_get_next_hrtimer() always is the next hrtimer expiration time for the given CPU, possibly including the tick (if it hasn't been stopped). Co-developed-by: Lina Iyer <lina.iyer@linaro.org> Co-developed-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> [ rjw: Subject & changelog ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2019-04-10  PM / Domains: Add support for CPU devices to genpd (Ulf Hansson)
To enable a CPU device to be attached to a PM domain managed by genpd, make a few changes to it for convenience. To be able to quickly find out what CPUs are attached to a genpd, which typically becomes useful from a genpd governor as subsequent changes are about to show, add a cpumask to struct generic_pm_domain to be updated when a CPU device gets attached to the genpd containing that cpumask. Also, propagate the cpumask changes upwards in the domain hierarchy to the master PM domains. This way, the cpumask for a genpd hierarchically reflects all CPUs attached to the topology below it. Finally, make this an opt-in feature, to avoid having to manage CPUs and the cpumask for a genpd that don't need it. To that end, add a new genpd configuration bit, GENPD_FLAG_CPU_DOMAIN. Co-developed-by: Lina Iyer <lina.iyer@linaro.org> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> [ rjw: Changelog ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2019-04-10  PM / Domains: Add generic data pointer to struct genpd_power_state (Ulf Hansson)
Add a data pointer to the genpd_power_state struct, to allow a genpd backend driver to store per-state specific data. To introduce the pointer, change the way genpd deals with freeing the corresponding allocated data. More precisely, clarify who is responsible for freeing the data by adding a ->free_states() callback to the generic_pm_domain structure. The one allocating the data will be expected to set the callback, to allow genpd to invoke it from genpd_remove(). Co-developed-by: Lina Iyer <lina.iyer@linaro.org> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> [ rjw: Subject & changelog ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2019-04-09  net/mlx5e: Switch to Toeplitz RSS hash by default (Konstantin Khlebnikov)
Although the XOR hash function can perform very well on some special use cases, to align with all drivers the mlx5 driver should use Toeplitz hash by default. Toeplitz is more stable for the general use case and it is more standard and reliable. On top of that, since XOR (MLX5_RX_HASH_FN_INVERTED_XOR8) gives only a repeated 8-bit pattern, using it for UDP tunneling RSS source port manipulation results in a fixed source port, which will cause bad RSS spread. Fixes: 2be6967cdbc9 ("net/mlx5e: Support ETH_RSS_HASH_XOR") Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Moshe Shemesh <moshe@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2019-04-09  Revert "net/mlx5e: Enable reporting checksum unnecessary also for L3 packets" (Or Gerlitz)
This reverts commit b820e6fb0978f9c2ac438c199d2bb2f35950e9c9. Prior to the commit being reverted, checksum unnecessary was only set when both the L3 OK and L4 OK bits are set on the CQE. This caused packets of IP protocols such as SCTP, which are not handled by the current HW L4 parser (hence the L4 OK bit is not set, but the L4 header type none bit is set), to go through the checksum-none code, where we currently wrongly report checksum unnecessary for them, which is a regression. Fix this by a revert. Note that on our usual track we report checksum complete, so the revert isn't expected to have any notable performance impact. Also, when we are not on the checksum complete track, the L4 protocols for which we report checksum none are not high performance ones; we will still report checksum unnecessary for UDP/TCP. Fixes: b820e6fb0978 ("net/mlx5e: Enable reporting checksum unnecessary also for L3 packets") Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Reported-by: Avi Urman <aviu@mellanox.com> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2019-04-09  net/mlx5e: Protect against non-uplink representor for encap (Dmytro Linkin)
TC encap offload is supported only for the physical uplink representor, so fail for non-uplink representors. Fixes: 3e621b19b0bb ("net/mlx5e: Support TC encapsulation offloads with upper devices") Signed-off-by: Dmytro Linkin <dmitrolin@mellanox.com> Reviewed-by: Eli Britstein <elibr@mellanox.com> Reviewed-by: Vlad Buslov <vladbu@mellanox.com> Reviewed-by: Roi Dayan <roid@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2019-04-09  net/mlx5e: Rx, Check ip headers sanity (Saeed Mahameed)
In the two places where is_last_ethertype_ip() is called, the caller will be looking inside the IP header. To be safe, add an ip{4,6} header sanity check and return true only for valid IP headers, i.e. when the whole header is contained in the linear part of the skb. Note: such a situation is very rare and hard to reproduce, since mlx5e allocates a large enough headroom to contain the largest header one can imagine. Fixes: fe1dc069990c ("net/mlx5e: don't set CHECKSUM_COMPLETE on SCTP packets") Reported-by: Cong Wang <xiyou.wangcong@gmail.com> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
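A hedged sketch of the kind of check involved (the helper name and exact bounds arithmetic are assumptions, not the patch's actual code): trust the header only if it lies entirely within the skb's linear data.

    static inline bool ip4_header_in_linear(const struct sk_buff *skb,
                                            const struct iphdr *ip4)
    {
        /* the whole IPv4 header must sit inside the linear part */
        return (const void *)ip4 + sizeof(*ip4) <=
               (const void *)skb->data + skb_headlen(skb);
    }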