The snd_soc_jack APIs were modified in recent kernel versions. This
patch updates the codec driver to resolve the compilation errors
related to the jack framework.
Signed-off-by: Vaibhav Agarwal <vaibhav.sr@gmail.com>
Reviewed-by: Dan Carpenter <dan.carpenter@oracle.com>
Link: https://lore.kernel.org/r/896b8e24d990f2bca5aafaebd26e37095042951e.1594290158.git.vaibhav.sr@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Some of the mei device heads are not generic and have a specific
purpose; we need to announce this to user space so that the correct
device node can be detected via matching attributes.
Generic heads are marked as 'mei', while special-purpose heads have
their own names. Currently we add the 'itouch' string for Intel IPTS
1.0 and 2.0 devices.
This is done via a new sysfs attribute, 'kind'.
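A minimal sketch of how such a read-only sysfs attribute can be wired
up (the mei_device 'kind' field and the drvdata usage here are
illustrative assumptions, not the exact driver code):

static ssize_t kind_show(struct device *device,
			 struct device_attribute *attr, char *buf)
{
	struct mei_device *dev = dev_get_drvdata(device);

	/* "mei" for generic heads, e.g. "itouch" for IPTS devices */
	return sprintf(buf, "%s\n", dev->kind);
}
static DEVICE_ATTR_RO(kind);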
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Link: https://lore.kernel.org/r/20200728192242.3117779-1-tomas.winkler@intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
A sequence counter write side critical section must be protected by some
form of locking to serialize writers. A plain seqcount_t does not
contain the information of which lock must be held when entering a write
side critical section.
Use the new seqcount_spinlock_t data type, which allows a spinlock to
be associated with the sequence counter. This enables lockdep to verify
that the spinlock used for writer serialization is held when the write
side critical section is entered.
If lockdep is disabled, this lock association is compiled out and adds
neither storage nor runtime overhead.
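A minimal sketch of the pattern, assuming a hypothetical structure
'foo' with a counter protected by this scheme:

#include <linux/seqlock.h>
#include <linux/spinlock.h>

struct foo {
	spinlock_t lock;
	seqcount_spinlock_t seq;	/* associated with 'lock' */
	u64 value;
};

static void foo_init(struct foo *f)
{
	spin_lock_init(&f->lock);
	seqcount_spinlock_init(&f->seq, &f->lock);
}

static void foo_update(struct foo *f, u64 v)
{
	spin_lock(&f->lock);
	/* with lockdep enabled, this verifies f->lock is held */
	write_seqcount_begin(&f->seq);
	f->value = v;
	write_seqcount_end(&f->seq);
	spin_unlock(&f->lock);
}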
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Song Liu <song@kernel.org>
Link: https://lkml.kernel.org/r/20200720155530.1173732-20-a.darwish@linutronix.de
|
|
A sequence counter write side critical section must be protected by some
form of locking to serialize writers. If the serialization primitive is
not disabling preemption implicitly, preemption has to be explicitly
disabled before entering the sequence counter write side critical
section.
The dma-buf reservation subsystem uses plain sequence counters to manage
updates to reservations. Writer serialization is accomplished through a
wound/wait mutex.
Acquiring a wound/wait mutex does not disable preemption, so this needs
to be done manually before and after the write side critical section.
Use the newly-added seqcount_ww_mutex_t instead:
- It associates the ww_mutex with the sequence count, which enables
lockdep to validate that the write side critical section is properly
serialized.
- It removes the need to explicitly add preempt_disable/enable()
around the write side critical section because the write_begin/end()
functions for this new data type automatically do this.
If lockdep is disabled, this ww_mutex lock association is compiled out
and adds neither storage nor runtime overhead.
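A minimal sketch of the converted pattern, using a hypothetical
reservation-like object (the field names and ww_class below are
assumptions):

#include <linux/seqlock.h>
#include <linux/ww_mutex.h>

static DEFINE_WW_CLASS(obj_ww_class);

struct resv_obj {
	struct ww_mutex lock;
	seqcount_ww_mutex_t seq;	/* associated with 'lock' */
};

static void resv_obj_init(struct resv_obj *obj)
{
	ww_mutex_init(&obj->lock, &obj_ww_class);
	seqcount_ww_mutex_init(&obj->seq, &obj->lock);
}

static void resv_obj_update(struct resv_obj *obj)
{
	/* caller holds obj->lock; no manual preempt_disable() needed,
	 * write_seqcount_begin/end() handle it for this type */
	write_seqcount_begin(&obj->seq);
	/* ... update the reservation ... */
	write_seqcount_end(&obj->seq);
}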
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://lkml.kernel.org/r/20200720155530.1173732-13-a.darwish@linutronix.de
|
|
Commit 3c3b177a9369 ("reservation: add support for read-only access
using rcu") introduced a sequence counter to manage updates to
reservations. Back then, the reservation object initializer
reservation_object_init() was always inlined.
Having the sequence counter initialization inlined meant that each of
the call sites would have a different lockdep class key, which would've
broken lockdep's deadlock detection. The aforementioned commit thus
introduced, and exported, a custom seqcount lockdep class key and name.
Commit 8735f16803f00 ("dma-buf: cleanup reservation_object_init...")
transformed the reservation object initializer into a normal,
non-inlined C function. seqcount_init(), which automatically defines
the seqcount lockdep class key and must be called from a non-inlined
context, can now safely be used.
Remove the seqcount custom lockdep class key, name, and export. Use
seqcount_init() inside the dma reservation object initializer.
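A sketch of the resulting initializer, close to the dma_resv layout at
the time (details hedged):

void dma_resv_init(struct dma_resv *obj)
{
	ww_mutex_init(&obj->lock, &reservation_ww_class);
	/* defines one static lockdep class key for this call site,
	 * shared by every dma_resv object */
	seqcount_init(&obj->seq);
	RCU_INIT_POINTER(obj->fence, NULL);
	RCU_INIT_POINTER(obj->fence_excl, NULL);
}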
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://lkml.kernel.org/r/20200720155530.1173732-12-a.darwish@linutronix.de
|
|
This patch addresses warnings and errors from the kernel doc scripts for
the OpenCAPI driver.
It also makes minor tweaks to make the docs more consistent.
Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Acked-by: Andrew Donnellan <ajd@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200415012343.919255-3-alastair@d-silva.org
|
|
The SPI common code does not currently support using chip selects
discontinuously. However, the i.MX8DXL-EVK uses only CS1, without CS0.
Therefore, add a flag, is_only_cs1, to set the correct TCR[PCS].
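A hedged sketch of the idea; the TCR_PCS() macro and field layout
below are illustrative assumptions, not the actual fsl-lpspi register
definitions:

#define TCR_PCS(x)	(((x) & 3) << 24)	/* assumed PCS field */

static u32 fsl_lpspi_tcr_pcs(struct fsl_lpspi_data *fsl_lpspi,
			     struct spi_device *spi)
{
	/* the board wires only CS1, so always drive PCS = 1 */
	if (fsl_lpspi->is_only_cs1)
		return TCR_PCS(1);

	return TCR_PCS(spi->chip_select);
}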
Signed-off-by: Clark Wang <xiaoning.wang@nxp.com>
Link: https://lore.kernel.org/r/20200727031448.31661-4-xiaoning.wang@nxp.com
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
The cs-gpio is now initialized by spi_get_gpio_descs(). Remove the
redundant chipselect initialization.
Signed-off-by: Clark Wang <xiaoning.wang@nxp.com>
Link: https://lore.kernel.org/r/20200727031448.31661-3-xiaoning.wang@nxp.com
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
Calling devm_spi_register_controller() too early causes problems. When
a probe failure occurs after devm_spi_register_controller() has been
called, the subsequent spi_controller_put() call triggers the following
warning dump.
[ 2.092138] ------------[ cut here ]------------
[ 2.096876] kernfs: can not remove 'uevent', no directory
[ 2.102440] WARNING: CPU: 0 PID: 181 at fs/kernfs/dir.c:1503 kernfs_remove_by_name_ns+0xa0/0xb0
[ 2.111142] Modules linked in:
[ 2.114207] CPU: 0 PID: 181 Comm: kworker/0:7 Not tainted 5.4.24-05024-g775c6e8a738c-dirty #1314
[ 2.122991] Hardware name: Freescale i.MX8DXL EVK (DT)
[ 2.128141] Workqueue: events deferred_probe_work_func
[ 2.133281] pstate: 60000005 (nZCv daif -PAN -UAO)
[ 2.138076] pc : kernfs_remove_by_name_ns+0xa0/0xb0
[ 2.142958] lr : kernfs_remove_by_name_ns+0xa0/0xb0
[ 2.147837] sp : ffff8000122bba70
[ 2.151145] x29: ffff8000122bba70 x28: ffff8000119d6000
[ 2.156462] x27: 0000000000000000 x26: ffff800011edbce8
[ 2.161779] x25: 0000000000000000 x24: ffff00003ae4f700
[ 2.167096] x23: ffff000010184c10 x22: ffff00003a3d6200
[ 2.172412] x21: ffff800011a464a8 x20: ffff000010126a68
[ 2.177729] x19: ffff00003ae5c800 x18: 000000000000000e
[ 2.183046] x17: 0000000000000001 x16: 0000000000000019
[ 2.188362] x15: 0000000000000004 x14: 000000000000004c
[ 2.193679] x13: 0000000000000000 x12: 0000000000000001
[ 2.198996] x11: 0000000000000000 x10: 00000000000009c0
[ 2.204313] x9 : ffff8000122bb7a0 x8 : ffff00003a3d6c20
[ 2.209630] x7 : ffff00003a3d6380 x6 : 0000000000000001
[ 2.214946] x5 : 0000000000000001 x4 : ffff00003a05eb18
[ 2.220263] x3 : 0000000000000005 x2 : ffff8000119f1c48
[ 2.225580] x1 : 2bcbda323bf5a800 x0 : 0000000000000000
[ 2.230898] Call trace:
[ 2.233345] kernfs_remove_by_name_ns+0xa0/0xb0
[ 2.237879] sysfs_remove_file_ns+0x14/0x20
[ 2.242065] device_del+0x12c/0x348
[ 2.245555] device_unregister+0x14/0x30
[ 2.249492] spi_unregister_controller+0xac/0x120
[ 2.254201] devm_spi_unregister+0x10/0x18
[ 2.258304] release_nodes+0x1a8/0x220
[ 2.262055] devres_release_all+0x34/0x58
[ 2.266069] really_probe+0x1b8/0x318
[ 2.269733] driver_probe_device+0x54/0xe8
[ 2.273833] __device_attach_driver+0x80/0xb8
[ 2.278194] bus_for_each_drv+0x74/0xc0
[ 2.282034] __device_attach+0xdc/0x138
[ 2.285876] device_initial_probe+0x10/0x18
[ 2.290063] bus_probe_device+0x90/0x98
[ 2.293901] deferred_probe_work_func+0x64/0x98
[ 2.298442] process_one_work+0x198/0x320
[ 2.302451] worker_thread+0x1f0/0x420
[ 2.306208] kthread+0xf0/0x120
[ 2.309352] ret_from_fork+0x10/0x18
[ 2.312927] ---[ end trace 58abcdfae01bd3c7 ]---
So move this call to the end of the probe sequence.
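A sketch of the resulting probe ordering (the foo_* names are
hypothetical; the core calls are real):

static int foo_spi_probe(struct platform_device *pdev)
{
	struct spi_controller *ctlr;
	int ret;

	ctlr = spi_alloc_master(&pdev->dev, sizeof(struct foo_spi));
	if (!ctlr)
		return -ENOMEM;

	ret = foo_spi_hw_init(ctlr);	/* everything that can fail */
	if (ret)
		goto out_put;

	/* register last: a later failure would otherwise make
	 * spi_controller_put() trip the kernfs warning above */
	ret = devm_spi_register_controller(&pdev->dev, ctlr);
	if (ret)
		goto out_put;

	return 0;

out_put:
	spi_controller_put(ctlr);
	return ret;
}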
Signed-off-by: Clark Wang <xiaoning.wang@nxp.com>
Link: https://lore.kernel.org/r/20200727031448.31661-2-xiaoning.wang@nxp.com
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into next
More Arm SMMU updates for 5.9
- Move Arm SMMU driver files into their own subdirectory
|
|
'arm/exynos', 'arm/smmu', 'ppc/pamu', 'x86/vt-d', 'x86/amd' and 'core' into next
|
|
Add support for 0xefa1 devices.
Link: https://lore.kernel.org/r/20200722140312.3651-5-galpress@amazon.com
Reviewed-by: Shadi Ammouri <sammouri@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
Introduce a mechanism that performs a handshake between the userspace
provider and the kernel driver, verifying that the user supports all
features required to operate correctly.
The handshake verifies the needed functionality by comparing the
reported device caps and the provider caps. If the device reports a
non-zero capability, the appropriate comp mask is required from the
userspace provider in order to allocate the context.
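A minimal sketch of the check; all names below are hypothetical
placeholders for the driver's cap and comp-mask bits:

#include <linux/bits.h>
#include <linux/errno.h>

#define CAP_RDMA_READ		BIT(0)	/* assumed device cap bit */
#define COMP_MASK_RDMA_READ	BIT(0)	/* assumed provider comp bit */

static int check_user_comp(u64 device_caps, u64 user_comp_mask)
{
	/* a non-zero device cap requires the matching provider bit */
	if ((device_caps & CAP_RDMA_READ) &&
	    !(user_comp_mask & COMP_MASK_RDMA_READ))
		return -EOPNOTSUPP;	/* refuse to allocate the context */

	return 0;
}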
Link: https://lore.kernel.org/r/20200722140312.3651-4-galpress@amazon.com
Reviewed-by: Shadi Ammouri <sammouri@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
The device reports the minimum SQ size required for creation.
This patch queries the min SQ size and reports it back to the userspace
library.
Link: https://lore.kernel.org/r/20200722140312.3651-3-galpress@amazon.com
Reviewed-by: Firas JahJah <firasj@amazon.com>
Reviewed-by: Shadi Ammouri <sammouri@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
The device reports the maximum number of bytes to be written before
ringing the doorbell (zero means unlimited).
This patch queries the max batch size and reports it back to the userspace
library.
Link: https://lore.kernel.org/r/20200722140312.3651-2-galpress@amazon.com
Reviewed-by: Daniel Kranzdorf <dkkranzd@amazon.com>
Reviewed-by: Firas JahJah <firasj@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
Move the AMD Kconfig and Makefile bits down into the amd directory
with the rest of the AMD-specific files.
Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20200630200636.48600-3-jsnitsel@redhat.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Move the Intel Kconfig and Makefile bits down into the intel directory
with the rest of the Intel-specific files.
Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200630200636.48600-2-jsnitsel@redhat.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
at the same time
The tcpm.c code for sending VDMs assumes that there will only be one
VDM in flight at a time. The "queue" used by tcpm_queue_vdm is only one
entry deep.
This relies on the higher layers (the tcpm state machine and alt-mode
drivers) ensuring that a new VDM is never queued before the old one has
been completely sent (or has timed out); add a WARN_ON to check for
this.
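A hedged sketch of the check, modeled on tcpm.c (the vdm_state field
and VDM_STATE_DONE exist in the driver; the rest of the body is
elided):

static void tcpm_queue_vdm(struct tcpm_port *port, const u32 header,
			   const u32 *data, int cnt)
{
	/* the previous VDM must be fully sent or timed out by now */
	WARN_ON(port->vdm_state > VDM_STATE_DONE);

	/* ... store header/data and kick the VDM state machine ... */
}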
Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Link: https://lore.kernel.org/r/20200724174702.61754-6-hdegoede@redhat.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
alt-mode drivers
When we receive a PD data packet which ends up being for the alt-mode
driver, we have the following lock order:
1. tcpm_pd_rx_handler takes the tcpm-port lock
2. We call into the alt-mode driver, which takes the alt-mode's lock
And when the alt-mode driver initiates communication, we have the
following lock order:
3. The alt-mode driver takes the alt-mode's lock
4. The alt-mode driver calls tcpm_altmode_enter, which takes the
tcpm-port lock
This is a classic AB-BA lock inversion issue.
With the refactoring of tcpm_handle_vdm_request() done before this
patch, we no longer rely on, or need to make changes to, the tcpm-port
data by the time we make call 2 above. All data to be passed to the
alt-mode driver sits on our stack at this point, and thus does not need
locking.
So after the refactoring we can simply fix this by releasing the
tcpm-port lock before calling into the alt-mode driver.
This fixes the following lockdep warning:
[ 191.454238] ======================================================
[ 191.454240] WARNING: possible circular locking dependency detected
[ 191.454244] 5.8.0-rc5+ #1 Not tainted
[ 191.454246] ------------------------------------------------------
[ 191.454248] kworker/u8:5/794 is trying to acquire lock:
[ 191.454251] ffff9bac8e30d4a8 (&dp->lock){+.+.}-{3:3}, at: dp_altmode_vdm+0x30/0xf0 [typec_displayport]
[ 191.454263]
but task is already holding lock:
[ 191.454264] ffff9bac9dc240a0 (&port->lock#2){+.+.}-{3:3}, at: tcpm_pd_rx_handler+0x43/0x12c0 [tcpm]
[ 191.454273]
which lock already depends on the new lock.
[ 191.454275]
the existing dependency chain (in reverse order) is:
[ 191.454277]
-> #1 (&port->lock#2){+.+.}-{3:3}:
[ 191.454286] __mutex_lock+0x7b/0x820
[ 191.454290] tcpm_altmode_enter+0x23/0x90 [tcpm]
[ 191.454293] dp_altmode_work+0xca/0xe0 [typec_displayport]
[ 191.454299] process_one_work+0x23f/0x570
[ 191.454302] worker_thread+0x55/0x3c0
[ 191.454305] kthread+0x138/0x160
[ 191.454309] ret_from_fork+0x22/0x30
[ 191.454311]
-> #0 (&dp->lock){+.+.}-{3:3}:
[ 191.454317] __lock_acquire+0x1241/0x2090
[ 191.454320] lock_acquire+0xa4/0x3d0
[ 191.454323] __mutex_lock+0x7b/0x820
[ 191.454326] dp_altmode_vdm+0x30/0xf0 [typec_displayport]
[ 191.454330] tcpm_pd_rx_handler+0x11ae/0x12c0 [tcpm]
[ 191.454333] process_one_work+0x23f/0x570
[ 191.454336] worker_thread+0x55/0x3c0
[ 191.454338] kthread+0x138/0x160
[ 191.454341] ret_from_fork+0x22/0x30
[ 191.454343]
other info that might help us debug this:
[ 191.454345] Possible unsafe locking scenario:
[ 191.454347] CPU0 CPU1
[ 191.454348] ---- ----
[ 191.454350] lock(&port->lock#2);
[ 191.454353] lock(&dp->lock);
[ 191.454355] lock(&port->lock#2);
[ 191.454357] lock(&dp->lock);
[ 191.454360]
*** DEADLOCK ***
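A hedged sketch of the fix; the names are modeled on tcpm.c and the
refactoring from the previous patches in this series (enum adev_actions
comes from that refactor), not the verbatim code:

static void tcpm_handle_vdm_request(struct tcpm_port *port,
				    const __le32 *payload, int cnt)
{
	enum adev_actions adev_action = ADEV_NONE;
	u32 p[PD_MAX_PAYLOAD];

	/* ... convert payload into p[] and run the SVDM state machine,
	 * recording what to report in adev_action ... */

	/* the alt-mode driver takes its own lock (call 2 above), so
	 * release the port lock before calling into it */
	mutex_unlock(&port->lock);
	if (adev_action == ADEV_QUEUE_VDM)
		typec_altmode_vdm(port->partner_altmode[0], p[0], &p[1], cnt);
	mutex_lock(&port->lock);
}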
Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Link: https://lore.kernel.org/r/20200724174702.61754-5-hdegoede@redhat.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Refactor tcpm_handle_vdm_request and its tcpm_pd_svdm helper function so
that reporting the results of the vdm to the altmode-driver is separated
out into a clear separate step inside tcpm_handle_vdm_request, instead
of being scattered over various places inside the tcpm_pd_svdm helper.
This is a preparation patch for fixing an AB BA lock inversion between the
tcpm code and some altmode drivers.
Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Link: https://lore.kernel.org/r/20200724174702.61754-4-hdegoede@redhat.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Refactor the tcpm_handle_vdm_request payload handling by doing the
endianness conversion only once directly inside tcpm_handle_vdm_request
itself instead of doing it multiple times inside various helper functions
called by tcpm_handle_vdm_request.
This is a preparation patch for some further refactoring to fix an AB BA
lock inversion between the tcpm code and some altmode drivers.
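A sketch of the refactor (simplified; the real tcpm_pd_svdm signature
carries more state):

static void tcpm_handle_vdm_request(struct tcpm_port *port,
				    const __le32 *payload, int cnt)
{
	u32 p[PD_MAX_PAYLOAD];
	int i;

	/* convert once, up front, instead of in every helper */
	for (i = 0; i < cnt; i++)
		p[i] = le32_to_cpu(payload[i]);

	tcpm_pd_svdm(port, p, cnt);	/* helpers now take native u32s */
}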
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Link: https://lore.kernel.org/r/20200724174702.61754-3-hdegoede@redhat.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Various callers (all the typec_altmode_ops) take the port-lock just for
the tcpm_queue_vdm() call.
Add a new tcpm_queue_vdm_unlocked() helper which takes the lock, so that
its callers don't have to do this themselves.
This is a preparation patch for fixing an AB BA lock inversion between
the tcpm code and some altmode drivers.
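A sketch of the new helper, directly following the description above:

static void tcpm_queue_vdm_unlocked(struct tcpm_port *port,
				    const u32 header,
				    const u32 *data, int cnt)
{
	mutex_lock(&port->lock);
	tcpm_queue_vdm(port, header, data, cnt);
	mutex_unlock(&port->lock);
}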
Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Link: https://lore.kernel.org/r/20200724174702.61754-2-hdegoede@redhat.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
tcpm_queue_vdm()
All callers of tcpm_queue_vdm() immediately follow the call with:
mod_delayed_work(port->wq, &port->vdm_state_machine, 0);
Fold this call into tcpm_queue_vdm() itself.
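A hedged sketch of the folded function (field names modeled on tcpm.c,
body abbreviated):

static void tcpm_queue_vdm(struct tcpm_port *port, const u32 header,
			   const u32 *data, int cnt)
{
	port->vdo_count = cnt + 1;
	port->vdo_data[0] = header;
	memcpy(&port->vdo_data[1], data, sizeof(u32) * cnt);
	port->vdm_state = VDM_STATE_READY;

	/* previously every caller did this right after the call */
	mod_delayed_work(port->wq, &port->vdm_state_machine, 0);
}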
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Link: https://lore.kernel.org/r/20200724174702.61754-1-hdegoede@redhat.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Have the driver use the shared CQs provided by the rdma core driver.
This provides the advantage of more efficient interrupt handling.
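A minimal sketch of the shared CQ pool API these conversions use (the
foo_* wrappers are hypothetical):

#include <rdma/ib_verbs.h>

static int foo_get_cq(struct ib_device *dev, u32 nr_cqe,
		      struct ib_cq **out_cq)
{
	struct ib_cq *cq;

	/* the core hands out a CQ from a per-device shared pool */
	cq = ib_cq_pool_get(dev, nr_cqe, 0, IB_POLL_SOFTIRQ);
	if (IS_ERR(cq))
		return PTR_ERR(cq);

	*out_cq = cq;
	return 0;
}

static void foo_put_cq(struct ib_cq *cq, u32 nr_cqe)
{
	ib_cq_pool_put(cq, nr_cqe);	/* return it to the pool */
}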
Link: https://lore.kernel.org/r/20200722135629.49467-3-maxg@mellanox.com
Signed-off-by: Yamin Friedman <yaminf@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
Have the driver use the shared CQs provided by the rdma core driver.
Since this provides functionality similar to iser_comp, iser_comp has
been removed.
Now there is no reason to allocate very large CQs when the driver is
loaded, while still gaining the advantage of shared CQs. Previously,
when a single connection was opened, a CQ was opened for every core
with enough space for eight connections; this is a very large overhead
that in most cases goes unused.
Link: https://lore.kernel.org/r/20200722135629.49467-2-maxg@mellanox.com
Signed-off-by: Yamin Friedman <yaminf@mellanox.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
Have the driver use the shared CQs provided by the rdma core driver.
Since this provides functionality similar to iser_comp, iser_comp has
been removed. Now there is no reason to allocate very large CQs when
the driver is loaded, while still gaining the advantage of shared CQs.
Link: https://lore.kernel.org/r/20200722135629.49467-1-maxg@mellanox.com
Signed-off-by: Yamin Friedman <yaminf@mellanox.com>
Acked-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
The nasty TODO items are done.
Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Link: https://lore.kernel.org/r/20200729003531.907370-1-samuel.thibault@ens-lyon.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Drivers using the legacy power management .suspend()/.resume()
callbacks have to manage PCI states and the device's PM states
themselves. They also need to take care of the standard configuration
registers.
Switch to the generic power management framework, using a single
"struct dev_pm_ops" variable, to take this unnecessary load off the
driver.
This also avoids the need for the driver to directly call most of the PCI
helper functions and device power state control functions, as through
the generic framework PCI Core takes care of the necessary operations,
and drivers are required to do only device-specific jobs.
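A minimal sketch of the conversion pattern (the foo_* names are
hypothetical):

#include <linux/pci.h>
#include <linux/pm.h>

static int __maybe_unused foo_suspend(struct device *dev)
{
	/* device-specific quiesce only; the PCI core now saves config
	 * space and selects the PM state */
	return 0;
}

static int __maybe_unused foo_resume(struct device *dev)
{
	/* device-specific re-init only */
	return 0;
}

static SIMPLE_DEV_PM_OPS(foo_pm_ops, foo_suspend, foo_resume);

static struct pci_driver foo_driver = {
	.name		= "foo",
	.driver.pm	= &foo_pm_ops,	/* replaces .suspend/.resume */
};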
Signed-off-by: Vaibhav Gupta <vaibhavgupta40@gmail.com>
Link: https://lore.kernel.org/r/20200728123349.1331679-1-vaibhavgupta40@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Remove the braces from an 'if' condition, as it contains only a single
line and, according to the coding style rules, braces are not needed in
such a case.
Signed-off-by: Ankit Baluni <b18007@students.iitmandi.ac.in>
Link: https://lore.kernel.org/r/20200729074541.1972-1-b18007@students.iitmandi.ac.in
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
The variable ret is being assigned an error return value that is never
read; control passes to a return statement and ret is never referenced
again. Remove the redundant assignment. Also remove an empty line.
Addresses-Coverity: ("Unused value")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Link: https://lore.kernel.org/r/20200729100525.573500-1-colin.king@canonical.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
This patch removes the dependency on NET, as it is no longer needed.
Signed-off-by: Christian Gromm <christian.gromm@microchip.com>
Link: https://lore.kernel.org/r/1595927275-27462-1-git-send-email-christian.gromm@microchip.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/peter.chen/usb into usb-next
Peter writes:
An ENDIAN issue is fixed, and a query-controller-role API is introduced.
* tag 'usb-ci-v5.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/peter.chen/usb:
usb: chipidea: imx: get available runtime dr mode for wakeup setting
usb: chipidea: add query_available_role interface
Documentation: ABI: usb: chipidea: Update Li Jun's e-mail
usb: chipidea: udc: fix the ENDIAN issue
|
|
Now that the calls to enable/disable a thermal zone are centralized in
a single function, we can add the corresponding netlink notifications
there.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Zhang Rui <rui.zhang@intel.com>
Reviewed-by: Amit Kucheria <amit.kucheria@linaro.org>
Link: https://lore.kernel.org/r/20200727231033.26512-1-daniel.lezcano@linaro.org
|
|
Signed-off-by: kernel test robot <lkp@intel.com>
Link: https://lore.kernel.org/r/20200729000313.GA14680@e442e3f624c4
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Unlike the full initialization done at normal boot, the guest driver
does not notify GVT via pv_notified when the vGPU transitions from D3
to D0. If pv_notified is reset, later vGPU operations will trigger
entry into failsafe mode.
Considering that the vGPU will notify GVT via pv_notified at least once
before a D3/D0 transition, it is safe to skip resetting pv_notified on
the D3->D0 transition.
To test this feature, make sure S3 is enabled in QEMU parameters:
i440fx: PIIX4_PM.disable_s3=0
q35: ICH9-LPC.disable_s3=0
Also enable the sleep option in the guest OS if it is disabled.
v2:
- Revise the commit message to give a more accurate description. (Kevin)
- Split patch by logic. (Zhenyu)
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Hang Yuan <hang.yuan@linux.intel.com>
Signed-off-by: Colin Xu <colin.xu@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20200709071002.247960-3-colin.xu@intel.com
|
|
When the system enters the S3 state, the device enters the D3 state
while RAM remains powered. From the vGPU/GVT perspective, the ppgtt_mm
stays resident in guest memory while the vGPU is in the D3 state, so
when the guest transitions from S3 to S0, the ppgtt_mm can be reused
and need not be rebuilt.
The previous implementation invalidated and destroyed the ppgtt_mm at
DMLR regardless of whether the power state transition was S0->S3->S0
(guest suspend or resume) or OFF->S0 (normal boot/reboot); invalidating
and destroying the ppgtt_mm is unnecessary in the former case.
This patch saves the vGPU D3/D0 transition state when the guest writes
PCI_PM_CTRL in the vGPU's configuration space, so that at a later DMLR,
GVT can decide whether invalidating and destroying the ppgtt_mm is
required. The d3_entered flag is reset after the DMLR.
To test this feature, make sure S3 is enabled in QEMU parameters:
i440fx: PIIX4_PM.disable_s3=0
q35: ICH9-LPC.disable_s3=0
Also enable the sleep option in the guest OS if it is disabled.
v2:
- Revise the commit message to give a more accurate description. (Kevin)
- Split patch by logic. (Zhenyu)
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Hang Yuan <hang.yuan@linux.intel.com>
Signed-off-by: Colin Xu <colin.xu@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20200709071002.247960-2-colin.xu@intel.com
|
|
There's no need to try to be cute with the include file locations in the
Makefile, so just specify exactly where the files are.
As a bonus, this fixes the problem of building with O=, as well as
building just the subdirectory alone.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Oded Gabbay <oded.gabbay@gmail.com>
Cc: Omer Shpigelman <oshpigelman@habana.ai>
Cc: Tomer Tayar <ttayar@habana.ai>
Cc: Moti Haimovski <mhaimovski@habana.ai>
Cc: Ofir Bitton <obitton@habana.ai>
Cc: Ben Segal <bpsegal20@gmail.com>
Cc: Christine Gharzuzi <cgharzuzi@habana.ai>
Cc: Pawel Piskorski <ppiskorski@habana.ai>
Link: https://lore.kernel.org/r/20200728171851.55842-1-gregkh@linuxfoundation.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Add a quirk for a device that does not support the Identify Namespace
Identification Descriptor list despite claiming 1.3 compliance.
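A hedged sketch of what such a quirk entry looks like, modeled on the
core_quirks table in drivers/nvme/host/core.c; the vid and mn values
below are placeholders, not the actual affected device:

static const struct nvme_core_quirk_entry core_quirks[] = {
	{
		/* placeholder ids for a device that claims 1.3
		 * compliance but has no namespace descriptor list */
		.vid = 0x1234,
		.mn = "Example NVMe Model",
		.quirks = NVME_QUIRK_NO_NS_DESC_LIST,
	},
};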
Fixes: ea43d9709f72 ("nvme: fix identify error status silent ignore")
Reported-by: Ingo Brunberg <ingo_brunberg@web.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Ingo Brunberg <ingo_brunberg@web.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
|
|
We can call nvme_change_ctrl_state() directly and use a
WARN_ON_ONCE(1) call, instead of having to use an extra variable whose
name matches the name of the function.
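An illustration of the cleanup (the surrounding state transition is a
hypothetical example):

/* before */
bool changed = nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE);
WARN_ON_ONCE(!changed);

/* after */
if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE))
	WARN_ON_ONCE(1);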
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
When creating a loop controller (ctrl) in nvme_loop_create_ctrl() ->
nvme_init_ctrl() we set the ctrl state to NVME_CTRL_NEW.
Prior to [1], the NVME_CTRL_NEW state was allowed in nvmf_check_ready()
for the fabrics connect command. Now, this fails in the following code
path for the fabrics connect command when creating the admin queue:
nvme_loop_create_ctrl()
nvme_loop_configure_admin_queue()
nvmf_connect_admin_queue()
__nvme_submit_sync_cmd()
blk_execute_rq()
nvme_loop_queue_rq()
nvmf_check_ready()
# echo "transport=loop,nqn=fs" > /dev/nvme-fabrics
[ 6047.741327] nvmet: adding nsid 1 to subsystem fs
[ 6048.756430] nvme nvme1: Connect command failed, error wo/DNR bit: 880
We need to set the ctrl state to NVME_CTRL_CONNECTING after:
nvme_loop_create_ctrl()
nvme_init_ctrl()
so that the above-mentioned check in nvmf_check_ready() will return
true.
This patch sets the ctrl state to connecting after we init the ctrl in
nvme_loop_create_ctrl() -> nvme_init_ctrl().
[1] commit aa63fa6776a7 ("nvme-fabrics: allow to queue requests for live queues")
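A hedged sketch of the fix in nvme_loop_create_ctrl(), with the
surrounding allocation and error paths elided:

ret = nvme_init_ctrl(&ctrl->ctrl, dev, &nvme_loop_ctrl_ops, 0);
if (ret)
	goto out_put_ctrl;

/* NEW -> CONNECTING is always a valid transition, so this only
 * fires if something is badly wrong */
if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING))
	WARN_ON_ONCE(1);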
Fixes: aa63fa6776a7 ("nvme-fabrics: allow to queue requests for live queues")
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Tested-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
When nvme_round_robin_path() finds a valid namespace, we should use it;
falling back to __nvme_find_path() would cause the result from
nvme_round_robin_path() to be ignored for non-optimized paths.
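A hedged sketch of the intended caller behaviour (simplified, not the
verbatim nvme_find_path() code): fall back only when round-robin found
nothing at all.

ns = nvme_round_robin_path(head, node, old);
if (!ns)	/* round-robin found nothing, not even non-optimized */
	ns = __nvme_find_path(head, node);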
Fixes: 75c10e732724 ("nvme-multipath: round-robin I/O policy")
Signed-off-by: Martin Wilck <mwilck@suse.com>
Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
Handle the special case where we have exactly one optimized path; we
should keep using that path in this case.
Fixes: 75c10e732724 ("nvme-multipath: round-robin I/O policy")
Signed-off-by: Martin Wilck <mwilck@suse.com>
Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
commit fe35ec58f0d3 ("block: update hctx map when use multiple maps")
exposed an issue where we may hang trying to wait for queue freeze
during I/O. We call blk_mq_update_nr_hw_queues, which in the case of
multiple queue maps (which we now have for default/read/poll) attempts
to freeze the queue. However, we never started a queue freeze when
starting the reset, which means that there are inflight pending
requests that entered the queue that we will not complete once the
queue is quiesced.
So start a freeze before we quiesce the queue, and unfreeze the queue
after we have successfully connected the I/O queues (and make sure to
call blk_mq_update_nr_hw_queues only after we are sure that the queue
was already frozen).
This follows how the pci driver handles resets.
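A hedged sketch of the ordering described above, using nvme core
helpers (the foo_* wrappers are assumptions and error handling is
elided):

static void foo_teardown_io_queues(struct nvme_ctrl *ctrl)
{
	nvme_start_freeze(ctrl);	/* start the freeze first */
	nvme_stop_queues(ctrl);		/* then quiesce */
	/* ... tear down the hw queues ... */
}

static void foo_setup_io_queues(struct nvme_ctrl *ctrl)
{
	/* ... connect the I/O queues ... */
	nvme_wait_freeze(ctrl);		/* queue is certainly frozen now */
	blk_mq_update_nr_hw_queues(ctrl->tagset, ctrl->queue_count - 1);
	nvme_unfreeze(ctrl);
}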
Fixes: fe35ec58f0d3 ("block: update hctx map when use multiple maps")
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
commit fe35ec58f0d3 ("block: update hctx map when use multiple maps")
exposed an issue where we may hang trying to wait for queue freeze
during I/O. We call blk_mq_update_nr_hw_queues, which in the case of
multiple queue maps (which we now have for default/read/poll) attempts
to freeze the queue. However, we never started a queue freeze when
starting the reset, which means that there are inflight pending
requests that entered the queue that we will not complete once the
queue is quiesced.
So start a freeze before we quiesce the queue, and unfreeze the queue
after we have successfully connected the I/O queues (and make sure to
call blk_mq_update_nr_hw_queues only after we are sure that the queue
was already frozen).
This follows how the pci driver handles resets.
Fixes: fe35ec58f0d3 ("block: update hctx map when use multiple maps")
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
This patch updates the Kconfig file for the NVMeOF target, adding a new
option so that the user can selectively enable/disable the passthru
code.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
[logang@deltatee.com: fixed some of the wording in the help message]
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
When CONFIG_NVME_TARGET_PASSTHRU is enabled, a 'passthru' directory
will be added to each subsystem. The directory is similar to a namespace
and has two attributes: device_path and enable. The user must set the
path to the nvme controller's char device and write '1' to enable the
subsystem to use passthru.
Any given subsystem is prevented from enabling both a regular namespace
and the passthru device. If one is enabled, enabling the other will
produce an error.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
This patch adds helper functions which are used in the NVMeOF configfs
when the user is configuring the passthru subsystem. Here we ensure
that only one subsys is assigned to each nvme_ctrl by using an xarray
on the cntlid.
The subsystem's version number is overridden by the passed-through
controller's version. However, if that version is less than 1.2.1, then
we bump the advertised version to 1.2.1 and print a warning in dmesg.
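A hedged sketch of the one-subsystem-per-controller guard using an
xarray keyed on cntlid (the helper name is an assumption):

#include <linux/xarray.h>

static DEFINE_XARRAY(passthru_subsystems);

static int nvmet_passthru_claim_ctrl(struct nvmet_subsys *subsys,
				     struct nvme_ctrl *ctrl)
{
	/* returns -EBUSY if another subsys already claimed this ctrl */
	return xa_insert(&passthru_subsystems, ctrl->cntlid, subsys,
			 GFP_KERNEL);
}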
Based-on-a-patch-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
Add passthru command handling capability for the NVMeOF target and
export passthru APIs which are used to integrate passthru
code with nvmet-core.
The new file passthru.c handles passthru cmd parsing and execution.
In the passthru mode, we create a block layer request from the nvmet
request and map the data on to the block layer request.
Admin commands and features are on an allow list, as there are a number
of each that don't make much sense with passthrough. We use an allow
list so that new commands can be considered before being blindly passed
through. In both cases, vendor-specific commands are always allowed.
We also reject reservation IO commands as the underlying device cannot
differentiate between multiple hosts behind a fabric.
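A hedged sketch of the allow-list idea; the helper name and the exact
opcode set below are illustrative, not the driver's actual list:

static u16 nvmet_parse_passthru_admin_cmd(struct nvmet_req *req)
{
	/* vendor-specific admin opcodes (0xc0 and up) always pass */
	if (req->cmd->common.opcode >= 0xc0)
		return nvmet_setup_passthru_command(req);

	switch (req->cmd->common.opcode) {
	case nvme_admin_get_log_page:
	case nvme_admin_identify:
	case nvme_admin_get_features:
		return nvmet_setup_passthru_command(req);
	default:
		/* anything not explicitly allowed is rejected */
		return NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
	}
}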
Based-on-a-patch-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|