Fix a couple of checkpatch issues
Signed-off-by: Paul McQuade <paulmcquad@gmail.com>
[seanpaul squashed series of 4 into one patch, and changed commit msg]
Signed-off-by: Sean Paul <seanpaul@chromium.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20180319005225.1545-1-paulmcquad@gmail.com
|
|
Use the USB class define rather than a magic number when refusing to
bind to mass-storage interfaces.
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Johan Hovold <johan@kernel.org>
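A minimal sketch of the kind of check described (the descriptor access is an
assumption, not the actual patch):

    /* Refuse mass-storage interfaces using the class constant from
     * <linux/usb/ch9.h> rather than the magic number 8. */
    if (desc->bInterfaceClass == USB_CLASS_MASS_STORAGE)
            return -ENODEV;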
|
|
Drop redundant interface-class test for Samsung GT-B3730 modems for
which we only match and probe the CDC data interface.
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Johan Hovold <johan@kernel.org>
|
|
Reimplement interface masking using device flags stored directly in the
device-id table. This will make it easier to add and maintain device-id
entries by using a more compact and readable notation compared to the
current implementation (which manages pairs of masks in separate
blacklist structs).
Two convenience macros are used to flag an interface as either reserved
or as not supporting modem-control requests:
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
.driver_info = NCTRL(0) | RSVD(3) },
For now, we limit the highest maskable interface number to seven, which
allows for (up to 16) additional device flags to be added later should
the need arise.
Note that this will likely need to be backported to stable in order to
make future device-id backports more manageable.
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Johan Hovold <johan@kernel.org>
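One plausible encoding consistent with the description above (a sketch, not
necessarily the exact macros): the low byte of driver_info carries the
reserved-interface mask and the next byte the no-modem-control mask, leaving
16 bits free for future flags:

    /* Interface "ifnum" is reserved and must not be bound. */
    #define RSVD(ifnum)     (BIT(ifnum) & 0xff)
    /* Interface "ifnum" does not support modem-control requests. */
    #define NCTRL(ifnum)    ((BIT(ifnum) & 0xff) << 8)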
|
|
The adreno driver stopped building when CONFIG_DEBUG_FS is disabled:
drivers/gpu/drm/msm/adreno/adreno_device.c: In function 'adreno_load_gpu':
drivers/gpu/drm/msm/adreno/adreno_device.c:153:16: error: 'const struct msm_gpu_funcs' has no member named 'debugfs_init'
if (gpu->funcs->debugfs_init) {
^~
drivers/gpu/drm/msm/adreno/adreno_device.c:154:13: error: 'const struct msm_gpu_funcs' has no member named 'debugfs_init'
gpu->funcs->debugfs_init(gpu, dev->primary);
^~
This adds an #ifdef around the code that references the hidden
pointer.
Fixes: 331dc0bc195b ("drm/msm: add a5xx specific debugfs")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Rob Clark <robdclark@gmail.com>
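The fix is presumably a guard along these lines (the render-node call is an
assumption):

    #ifdef CONFIG_DEBUG_FS
            if (gpu->funcs->debugfs_init) {
                    gpu->funcs->debugfs_init(gpu, dev->primary);
                    gpu->funcs->debugfs_init(gpu, dev->render);
            }
    #endif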
|
|
If there is only a single DSI interface, don't reserve the first two
layer-mixers for the dual-DSI use-case.
This was causing problems for WB, which could not be assigned an LM, on
8x16, which has only two LMs and a single DSI.
Signed-off-by: Rob Clark <robdclark@gmail.com>
|
|
For some reason, layer-mixers 3 and 4 were missing. LM3 is used for
writeback on 8x16.
Signed-off-by: Rob Clark <robdclark@gmail.com>
|
|
Signed-off-by: Rob Clark <robdclark@gmail.com>
|
|
For DSI cmd-mode and writeback, we need to write the CTL's START
register to kick things off, but we only want to do that once both
the encoder and the crtc have a chance to write their corresponding
flush bits. The difficulty is that when there is a full modeset
(ie. encoder state has changed) we want to defer the start until
encoder->enable(). But if only planes have changed, we want to do
this from crtc->commit().
The start_mask was a previous attempt to handle this, but it didn't
really do the right thing since atomic conversion.
Instead, track in the crtc state that the start should be deferred,
set to true from the encoder's (or, in future, writeback's) atomic_check().
This way the state is part of the atomic state, and rollback can
work properly if an atomic test fails.
Signed-off-by: Rob Clark <robdclark@gmail.com>
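A rough illustration of the idea (the field name is an assumption, not the
actual patch): a flag in the driver-private CRTC state records whether the
START write is deferred to the encoder:

    struct mdp5_crtc_state {
            struct drm_crtc_state base;

            /* Set to true from the encoder's atomic_check(); when set,
             * crtc->commit() skips writing the CTL START register and the
             * encoder kicks it off from its enable path instead. Being part
             * of the atomic state, it rolls back if an atomic test fails. */
            bool defer_start;
    };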
|
|
Interrupt commands cause the CP to trigger an interrupt as the command
is processed, regardless of the GPU being done processing previous
commands. This is seen by the interrupt being delivered before the
fence is written on 8974 and is likely the cause of the additional
CP_WAIT_FOR_IDLE workaround found for a306, which would cause the CP to
wait for the GPU to go idle before triggering the interrupt.
Instead we can set the (undocumented) BIT(31) of the CACHE_FLUSH_TS
which will cause a special CACHE_FLUSH_TS interrupt to be triggered from
the GPU as the write event is processed.
Add CACHE_FLUSH_TS to the IRQ masks of A3xx and A4xx and remove the
workaround for A306.
Suggested-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
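A sketch of the submit packet the message describes (helper names follow the
adreno code, but treat this as an illustration rather than the exact patch):

    /* Flush the timestamp and raise a CACHE_FLUSH_TS interrupt once the
     * write event itself has been processed by the CP. */
    OUT_PKT3(ring, CP_EVENT_WRITE, 3);
    OUT_RING(ring, CACHE_FLUSH_TS | BIT(31));
    OUT_RING(ring, rbmemptr(ring, fence));
    OUT_RING(ring, submit->seqno);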
|
|
This should be using drm_gem_object_put(). Also, since this is done only
in the driver unload path, we don't need to synchronize setting tx_gem_obj
to NULL, so just use the _unlocked() variant.
Signed-off-by: Rob Clark <robdclark@gmail.com>
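Roughly (a sketch; the surrounding context is assumed):

    /* Driver unload path: no concurrent users, so no locking is needed
     * around the put or the NULL assignment. */
    if (msm_host->tx_gem_obj) {
            drm_gem_object_put_unlocked(msm_host->tx_gem_obj);
            msm_host->tx_gem_obj = NULL;
    }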
|
|
Remnants of pre-dma_fence fencing which got left behind by mistake.
Signed-off-by: Rob Clark <robdclark@gmail.com>
|
|
Since the new display controller is called "dpu" instead of "mdp", let's
make the name of the toplevel directory for the display controllers a
bit more generic.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Sean Paul <seanpaul@chromium.org>
|
|
_dev_ is being dereferenced before it is null checked, hence there
is a potential null pointer dereference.
Fix this by moving the pointer dereference after _dev_ has been
null checked.
Fixes: d4e7f38d70ef ("drm/msm/dsi: check msm_dsi and dsi pointers before use")
Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
Signed-off-by: Rob Clark <robdclark@gmail.com>
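The shape of the fix, with purely illustrative names (not the actual code):

    if (!dev)
            return -EINVAL;

    /* Dereference only after dev has been NULL-checked. */
    pdev = dev->pdev;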
|
|
_minor_ is being dereferenced before it is null checked, hence there
is a potential null pointer dereference. Fix this by moving the pointer
dereference after _minor_ has been null checked.
Fixes: 024ad8df763f ("drm/msm: add a5xx specific debugfs")
Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
Signed-off-by: Rob Clark <robdclark@gmail.com>
|
|
This fixes a use-after-free introduced by the last cc770 patch.
Signed-off-by: Andri Yngvason <andri.yngvason@marel.com>
Fixes: 746201235b3f ("can: cc770: Fix queue stall & dropped RTR reply")
Cc: linux-stable <stable@vger.kernel.org>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
|
|
Files have been moved in the NAND subsystem to reflect the different
flavors of NAND devices.
Raw/Parallel NAND devices have been moved to a "raw" subdirectory to
distinguish them from OneNAND and SPI NAND, for instance. So adjust
the Kconfig entry to clarify things.
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com>
|
|
The Tegra186 powergate driver is implemented as a power domain driver, and
power partition ungate/gate operations are registered as its
power_on/power_off callbacks. There are no direct functions to power
gate/ungate the host controller in Tegra186. The host controller driver
should add a "power-domains" property in the device tree and implement
runtime suspend and resume callbacks. Power gating and ungating are taken
care of by the power domain driver when the host controller driver calls
pm_runtime_put_sync and pm_runtime_get_sync respectively.
Register suspend_noirq & resume_noirq callback functions to allow PCIe to
come up after resume from RAM. Both runtime and noirq pm ops share the same
callback functions.
Signed-off-by: Manikanta Maddireddy <mmaddireddy@nvidia.com>
[lorenzo.pieralisi@arm.com: squashed patch to fix compilation]
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
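A sketch of how the runtime and noirq system-sleep ops can share callbacks
(the function names are assumptions):

    static const struct dev_pm_ops tegra_pcie_pm_ops = {
            /* Invoked via pm_runtime_put_sync()/pm_runtime_get_sync(),
             * letting the power-domain driver gate/ungate the partition. */
            SET_RUNTIME_PM_OPS(tegra_pcie_pm_suspend, tegra_pcie_pm_resume,
                               NULL)
            /* Reuse the same callbacks so PCIe comes back after suspend
             * to RAM. */
            SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(tegra_pcie_pm_suspend,
                                          tegra_pcie_pm_resume)
    };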
|
|
Our shadow context content comes from the guest, but for masked control
registers like CTX_CONTEXT_CONTROL we need to make sure all settings from
the guest take effect when this context is on hardware. This change forces
the mask enable bits for all of them to ensure every bit setting is
effective on hardware.
One regression was found related to the inhibit bit: once it is set, the GPU
engine stays in the inhibit state until an MI_LOAD_REG_IMM command or context
image clears the inhibit bit with the mask bit set to 1 and the value bit set
to 0. In GVT-g, workloads currently have the highest priority, so a GVT-g
workload can easily trigger the preempt context; the preempt context sets the
inhibit bit, then the GVT-g workload is scheduled in, but the GVT-g workload's
shadow context image usually doesn't set the inhibit mask bit, so the GPU is
still in the inhibit state while the GVT workload is running. This caused a
GPU hang.
Suggested-by: Zhang, Xiong <xiong.y.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Reviewed-by: Zhang, Xiong <xiong.y.zhang@intel.com>
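For masked registers the upper 16 bits select which of the lower 16 bits a
write actually updates, so forcing the mask bits makes every guest-programmed
bit (including zeros) take effect. A rough sketch (field names are
assumptions):

    /* Force all mask-enable bits so the full lower 16 bits from the guest
     * value are applied when the shadow context reaches hardware. */
    u32 val = guest_ring_context->ctx_ctrl.val;
    shadow_ring_context->ctx_ctrl.val = val | (0xffffu << 16);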
|
|
The section index was not properly advanced, so the returned OOB region
definition was always that of ECC section 0 in the OOB area. Since we want
the information for all the ECC bytes, we should call
mtd_ooblayout_ecc(mtd, section++, &oobregion) until it returns -ERANGE.
Fixes: c2b78452a9db ("mtd: use mtd_ooblayout_xxx() helpers where appropriate")
Cc: <stable@vger.kernel.org>
Signed-off-by: OuYang ZhiZhong <ouyzz@yealink.com>
Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com>
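The iteration described above, roughly (a sketch, not the exact hunk):

    struct mtd_oob_region oobregion;
    int section = 0, eccbytes = 0;

    /* Walk every ECC section; the loop ends once mtd_ooblayout_ecc()
     * returns -ERANGE past the last section. */
    while (!mtd_ooblayout_ecc(mtd, section++, &oobregion))
            eccbytes += oobregion.length;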
|
|
The generic DMA API uses dev->dma_mask to check the DMA addressable
memory bitmask, and warns if no mask is set or even allocated.
Set z->dev.coherent_dma_mask on Zorro bus scan, and make z->dev.dma_mask
point to z->dev.coherent_dma_mask so device drivers that need DMA have
everything set up to avoid warnings from dma_alloc_coherent(). Drivers can
still use dma_set_mask_and_coherent() to explicitly set their DMA bit mask.
Signed-off-by: Michael Schmitz <schmitzmic@gmail.com>
[geert: Handle Zorro II with 24-bit address space]
Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
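Roughly the setup described (zorro_is_z2() is a hypothetical predicate
standing in for the 24-bit Zorro II handling mentioned in the bracketed note):

    /* Zorro II devices have a 24-bit address space; others get 32 bits. */
    z->dev.coherent_dma_mask = zorro_is_z2(z) ? DMA_BIT_MASK(24)
                                              : DMA_BIT_MASK(32);
    /* Point dma_mask at the coherent mask so both are always set up. */
    z->dev.dma_mask = &z->dev.coherent_dma_mask;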
|
|
The PMU watchdog will power down the system if the kernel is slow
to start up, e.g. due to unpacking a large initrd. The powerpc
version of this driver (via-pmu.c) has a solution for the same
problem. It uses this call sequence:
    setup_arch
        find_via_pmu
            init_pmu
    ...
    arch_initcall
        via_pmu_start
Bring via-pmu68k.c into line with via-pmu.c to fix this issue.
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Tested-by: Stan Johnson <userm57@yahoo.com>
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
|
|
Revert commit c68f0676ef7d ("ACPI / battery: Add quirk for Asus
GL502VSK and UX305LA") and commit 4446823e2573 ("ACPI / battery: Add
quirk for Asus UX360UA and UX410UAK").
On many, many Asus products, the battery is sometimes reported as
charging or discharging even when it is full and you are on AC power.
This change quirked the kernel to avoid advertising the discharging
state when this happens on 4 laptop models, under the belief that
this was incorrect information. I presume it originates from reports by
users who were confused that their battery status icon said the battery
was discharging.
However, the reported information is indeed correct, and the quirk
approach taken is inadequate and more thought is needed first.
Specifically:
1. It only quirks discharging state, not charging
2. There are so many different Asus products and DMI naming variants
within those product families that behave this way; Linux could
grow to quirk hundreds of products and still not even be close to
"winning" this battle.
3. Asus previously clarified that this behaviour is intentional. The
platform will periodically do a partial discharge/charge cycle
when the battery is full, because this is one way to extend the
lifetime of the battery (leaving a battery at 100% charge and
unused will decrease its usable capacity over time).
My understanding is that any decent consumer product will have
this behaviour, but it appears that Asus is different in that
they expose this info through ACPI.
However, the behaviour seems correct. The ACPI spec does not
suggest that the platform should hide the truth. It lets you
report that the battery is full of charge, and discharging, and
with external power connected; and Asus does this.
4. In terms of not confusing the user, this seems like something that
could/should be handled by userspace, which can also detect these
same (accurate) conditions in the general case.
Revert this quirk before it gets included in a release, while we look
for better approaches.
Signed-off-by: Daniel Drake <drake@endlessm.com>
Acked-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Since commit 846c7dfc1193 ("drm/atomic: Try to preserve the crtc enabled
state in drm_atomic_remove_fb, v2."), removing the last framebuffer will
no longer disable the corresponding pipeline, which causes the KMS core
to complain about leaked connectors on driver unbind.
Fix this by calling drm_atomic_helper_shutdown() on driver unbind, which
will cause all display pipelines to be shut down and therefore drop the
extra references on the connectors.
Signed-off-by: Thierry Reding <treding@nvidia.com>
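In outline, the unbind path gains a shutdown call (a sketch; surrounding
code omitted):

    /* Disable all display pipelines so the atomic state drops its extra
     * connector references before the device goes away. */
    drm_dev_unregister(drm);
    drm_atomic_helper_shutdown(drm);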
|
|
The regulator is controlled as part of runtime PM, so it should not be
additionally disabled from the ->exit() callback.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
Detaching from an IOMMU group multiple times can lead to a crash. This
could potentially be fixed in the IOMMU driver, but it's easy to avoid
the subsequent detach operations in this driver, so do that as well.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
In

    dwc3_request *r = NULL;
    r = A;

the first assignment has no effect. Remove it.
Signed-off-by: Heinrich Schuchardt <xypron.glpk@gmx.de>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
|
|
When the immediate quiet bit is set in a CSA, the entire channel is blocked
by the firmware. It is expected that all the MACs will evacuate the
channel and the phy will eventually be either moved or removed.
Currently, the phy context is just unreferenced and thus the quiet
bit is kept set and it will be impossible to TX on this phy if we
need to reuse it in the future. This can be seen when doing a
channel switch with mode=1 (quiet) twice from channel X to Y and then
back to channel X.
Fix that, by moving the phy context to a default channel when not
referenced anymore.
Signed-off-by: Andrei Otcheretianski <andrei.otcheretianski@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
When starting aggregation, the code checks the status of the queue
allocated to the aggregation tid, which might not yet be allocated
and thus the queue index may be invalid.
Fix this by reserving a new queue in case the queue id is invalid.
While at it, clean up some unreachable code (a condition that is
already handled earlier) and remove all the non-DQA comments since
non-DQA mode is no longer supported.
Fixes: cf961e16620f ("iwlwifi: mvm: support dqa-mode agg on non-shared queue")
Signed-off-by: Avraham Stern <avraham.stern@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
If the driver failed to resume from D3, it is possible that it has
no valid aux station. In such a case, fw restart will end up sending
station-related commands with an invalid station id, which will
result in an assert.
Fix this by allocating a new station id for the aux station if it
does not have a valid id even in the case of fw restart.
Signed-off-by: Avraham Stern <avraham.stern@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
When a queue is reserved for aggregation, the queue id is assigned
to the tid_data. This is fine since iwl_mvm_sta_tx_agg_oper()
takes care of allocating the queue before actual tx starts.
When the reservation is cancelled (e.g. when the AP declined the
aggregation request) the tid_data is not cleared. As a result,
following tx for this tid was trying to use an unallocated queue.
Fix this by setting the txq_id for the tid to invalid when unreserving
the queue.
Signed-off-by: Avraham Stern <avraham.stern@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
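The fix presumably amounts to something like this in the unreserve path (the
constant and field names are assumptions based on the driver):

    /* The reservation is cancelled, so make sure later tx for this tid
     * does not pick up the stale queue id. */
    mvmsta->tid_data[tid].txq_id = IWL_MVM_INVALID_QUEUE;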
|
|
After switching to a new channel, the driver schedules a session protection
time event in order to hear the beacon on the new channel.
The duration of the protection is two beacon intervals.
However, since we start to switch slightly before the beacon with count 1,
if we don't hear (or the AP doesn't transmit) the very first beacon on the
new channel, the protection ends without hearing any beacon at all.
At this stage the switch is not complete, the queues are closed and the
interface doesn't have quota or TBTT events yet. As a result, we are
stuck forever waiting for iwl_mvm_post_channel_switch() to be called.
Fix this by increasing the protection time to 3 beacon intervals and, in
addition, dropping the connection if the time event ends before we got any
beacon.
Signed-off-by: Andrei Otcheretianski <andrei.otcheretianski@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Make use of of_reset_control_array_get_exclusive() to manage
an array of reset controllers available with the device.
Cc: Jon Hunter <jonathanh@nvidia.com>
Cc: Thierry Reding <treding@nvidia.com>
Signed-off-by: Vivek Gautam <vivek.gautam@codeaurora.org>
[p.zabel@pengutronix.de: switch to hidden reset control array]
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
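A minimal sketch of the call (error handling and surrounding context assumed):

    /* Request all resets listed for the device as a single array handle
     * instead of getting them one by one. */
    rsts = of_reset_control_array_get_exclusive(pdev->dev.of_node);
    if (IS_ERR(rsts))
            return PTR_ERR(rsts);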
|
|
The PDPs of a shadow page will only be valid after a vGPU mm is pinned.
So the PDPs in the shadow context should be updated then.
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
|
|
Different OSes might handle GVT PPGTT creation/destroy notifications
differently during a vGPU reset, so a better approach is to invalidate all
vGPU PPGTT mm objects during vGPU reset.
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
|
|
Out-of-memory errors must be handled correctly.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
|
|
Trivial fix to spelling mistake in gvt_err error message text.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
|
|
Redundant messages are printed when:
- creating a Linux guest.
- booting a dma-buf Win10 guest.
- running Xonotic stress tests in a Linux guest.
Add the registers below to the default MMIO handler:
0xd00, RPM_CONFIG0
0xd40, RC6_LOCATION
0x65010, HSW_AUD_MISC_CTRL
0x6671c,
0x700a0, CUR_FBC_CTL
0x7239c,
v2:
- Should init i915_reg_t using uint32_t instead of _MMIO macro.
(compiling errors)
- Use defined offset in i915_reg.h
(zhenyu)
Signed-off-by: Colin Xu <colin.xu@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
|
|
We want the staging fixes in here as well to handle merge/test issues.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Avoid that building with W=1 triggers the following compiler warning:
drivers/md/bcache/super.c:776:20: warning: comparison is always false due to limited range of data type [-Wtype-limits]
d->nr_stripes > SIZE_MAX / sizeof(atomic_t)) {
^
Reviewed-by: Coly Li <colyli@suse.de>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Add more annotations for sparse to inform it about which functions do
not have the same number of spin_lock() and spin_unlock() calls.
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
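The kind of annotation involved, as a generic sketch (not the bcache
functions themselves):

    /* Tell sparse this helper returns with the lock held... */
    static void example_lock(struct example *e)
            __acquires(&e->lock)
    {
            spin_lock(&e->lock);
    }

    /* ...and that this one releases a lock it did not take. */
    static void example_unlock(struct example *e)
            __releases(&e->lock)
    {
            spin_unlock(&e->lock);
    }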
|
|
This patch does not change any functionality.
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Reviewed-by: Coly Li <colyli@suse.de>
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Avoid that building with W=1 triggers warnings about the kernel-doc
headers.
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
This patch avoids that building with W=1 triggers complaints about
switch fall-throughs.
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
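At the time this typically meant adding explicit fall-through comments, along
these lines (a generic sketch):

    switch (op) {
    case OP_PREPARE:
            prepare();
            /* fall through */
    case OP_SUBMIT:
            submit();
            break;
    }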
|
|
Make it possible for the compiler to verify the consistency of the
format string passed to __bch_check_keys() and the arguments that
should be formatted according to that format string.
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
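Most likely via the __printf() attribute on the declaration, something like
(argument positions inferred from the description):

    /* Argument 2 is the format string and formatted arguments start at 3,
     * so gcc can warn about mismatched format specifiers at call sites. */
    __printf(2, 3)
    void __bch_check_keys(struct btree_keys *b, const char *fmt, ...);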
|
|
This patch avoids that smatch complains about inconsistent indentation.
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Reviewed-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
If a bcache device is configured in writeback mode, the current code does not
handle write I/O errors on the backing device properly.
In writeback mode, a write request is written to the cache device and
later flushed to the backing device. If an I/O fails when writing from the
cache device to the backing device, bcache just ignores the error and the
upper layers are NOT notified that the backing device is broken.
This patch tries to handle backing device failure the same way cache device
failure is handled:
- Add an error counter 'io_errors' and an error limit 'error_limit' to struct
cached_dev. Also add an io_disable flag to struct cached_dev to disable I/O
on the problematic backing device.
- When an I/O error happens on the backing device, increase the io_errors
counter, and if io_errors reaches error_limit, set cached_dev->io_disable to
true and stop the bcache device.
The result is that if the backing device is broken or disconnected and I/O
errors reach its error limit, the backing device will be disabled and the
associated bcache device will be removed from the system.
Changelog:
v2: remove the "bcache: " prefix in pr_error(), and use the correct name
string to print out the bcache device gendisk name.
v1: this patch is newly added in the v2 patch set.
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Cc: Michael Lyle <mlyle@lyle.org>
Cc: Junhui Tang <tang.junhui@zte.com.cn>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
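A condensed sketch of the policy in the bullet points above (helper and field
names are approximations, not the exact patch):

    /* Called for every failed write from the cache device to the backing
     * device. */
    static void count_backing_io_error(struct cached_dev *dc)
    {
            unsigned int errors = atomic_add_return(1, &dc->io_errors);

            if (errors >= dc->error_limit) {
                    /* Too many errors: refuse further I/O and retire the
                     * associated bcache device. */
                    dc->io_disable = true;
                    bcache_device_stop(&dc->disk);
            }
    }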
|
|
In order to catch I/O errors on the backing device, a separate bi_end_io
callback is required. Then a per-backing-device counter can record the
number of I/O errors and retire the backing device if the counter reaches a
per-backing-device I/O error limit.
This patch adds backing_request_endio() to the bcache backing device I/O code
path as a preparation for the more complicated backing device failure
handling to follow. So far there is no real logic change; I make this a
separate patch to make sure it is stable and reliable for further work.
Changelog:
v2: fix a typo in the code comments, and remove a redundant
bch_writeback_add() line added in the v4 patch set.
v1: this patch is newly added in this patch set.
[mlyle: truncated commit subject]
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Cc: Junhui Tang <tang.junhui@zte.com.cn>
Cc: Michael Lyle <mlyle@lyle.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
In the current code the closure debug file is outside of the debug directory,
and there is no removal of the closure debug file when unloading the module,
so creating it fails when trying to reload the module.
This patch moves the closure debug file into the "bcache" debug directory
so that the file gets deleted properly.
Signed-off-by: Chengguang Xu <cgxu519@gmx.com>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Reviewed-by: Tang Junhui <tang.junhui@zte.com.cn>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|