With patch dd8088d5a896 ("PM: runtime: Add pm_runtime_resume_and_get to deal
with usage counter") the usual pm_runtime_get_sync() and
pm_runtime_put_noidle() in-case-of-error dance is no longer needed. Convert
the m_can driver to use this new function.
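For illustration, the conversion typically looks like this (a sketch of the
pattern, not the literal m_can diff):

  /* Before: pm_runtime_get_sync() bumps the usage counter even on
   * failure, so the error path has to balance it with put_noidle().
   */
  ret = pm_runtime_get_sync(dev);
  if (ret < 0) {
          pm_runtime_put_noidle(dev);
          return ret;
  }

  /* After: pm_runtime_resume_and_get() drops the counter itself on
   * failure, so no extra cleanup is needed.
   */
  ret = pm_runtime_resume_and_get(dev);
  if (ret < 0)
          return ret;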
Link: https://lore.kernel.org/r/20201212175518.139651-6-mkl@pengutronix.de
Reviewed-by: Sean Nyekjaer <sean@geanix.com>
Reviewed-by: Dan Murphy <dmurphy@ti.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
|
|
The function m_can_config_endisable() is not used outside of the m_can driver,
so mark it as static.
Link: https://lore.kernel.org/r/20201212175518.139651-5-mkl@pengutronix.de
Reviewed-by: Sean Nyekjaer <sean@geanix.com>
Reviewed-by: Dan Murphy <dmurphy@ti.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
|
|
This patch converts the m_can driver to use 'cdev' as the name for struct
m_can_classdev uniformly throughout the whole driver.
Link: https://lore.kernel.org/r/20201212175518.139651-4-mkl@pengutronix.de
Reviewed-by: Sean Nyekjaer <sean@geanix.com>
Reviewed-by: Dan Murphy <dmurphy@ti.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
|
|
This patch converts the indentation in the m_can driver to kernel coding style.
Link: https://lore.kernel.org/r/20201212175518.139651-3-mkl@pengutronix.de
Reviewed-by: Sean Nyekjaer <sean@geanix.com>
Reviewed-by: Dan Murphy <dmurphy@ti.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
|
|
Old versions of the user manual are regularly depublished, so change the link
to the linux-can github page, which has a mirror of all published datasheets.
Link: https://lore.kernel.org/r/20201212175518.139651-2-mkl@pengutronix.de
Reviewed-by: Sean Nyekjaer <sean@geanix.com>
Reviewed-by: Dan Murphy <dmurphy@ti.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
|
|
We have no way of tracking server READ_PLUS support in pNFS for now, so
just disable it.
Reported-by: "Mkrtchyan, Tigran" <tigran.mkrtchyan@desy.de>
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
If the server returns more data than we have buffer space for, then
we need to truncate and exit early.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Expanding the READ_PLUS extents can cause the read buffer to overflow.
If it does, then don't error, but just exit early.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
If a hole extends beyond the READ_PLUS read buffer, then we want to fill
just the remaining buffer with zeros. Also ignore eof...
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
The server is allowed to return a hole extent with an offset that starts
before the offset supplied in the READ_PLUS argument. Ensure that we
support that case too.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
All XDR opaque object sizes are 32-bit aligned, and a data segment is no
exception.
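For illustration only (xdr_pad4() is a hypothetical helper, not one taken
from the patch), rounding a length up to the next 32-bit boundary is just:

  /* Round an opaque length up to the next 4-byte XDR boundary. */
  static inline size_t xdr_pad4(size_t len)
  {
          return (len + 3) & ~(size_t)3;  /* e.g. 5 -> 8, 8 -> 8 */
  }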
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
If we're shifting the page data to the right, and this happens to be a
sparse page array, then we may need to allocate new pages in order to
receive the data.
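A rough sketch of the idea (index variables and bounds are illustrative,
not the actual xdr helper code):

  /* Before shifting data to the right, make sure the destination slots
   * of a sparse page array are actually populated.
   */
  for (i = first_slot; i <= last_slot; i++) {
          if (!buf->pages[i]) {
                  buf->pages[i] = alloc_page(GFP_KERNEL);
                  if (!buf->pages[i])
                          return -ENOMEM;
          }
  }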
Reported-by: "Mkrtchyan, Tigran" <tigran.mkrtchyan@desy.de>
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
There are a number of xdr helpers for struct xdr_buf that do not change
the structure itself. Mark those as taking const pointers for
documentation purposes.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Move the setting of the xdr_stream 'nwords' field into the helpers that
reset the xdr_stream cursor.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Clean up callers of _copy_to/from_pages() that still check for a zero
length.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Clean up xdr_shrink_bufhead() to use the new helpers instead of doing
its own thing.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
We do want to try to grow the buffer if possible, but if that attempt
fails, we still want to move the data and truncate the XDR message.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
The main use case right now for xdr_align_data() is to shift the page
data to the left, and in practice shrink the total XDR data buffer.
This patch ensures that we fix up the accounting for the buffer length
as we shift that data around.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Exit early if the shift is zero.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Use the existing BITS_PER_LONG macro instead of calculating the value.
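For example (illustrative only):

  nbits = sizeof(long) * 8;   /* open-coded */
  nbits = BITS_PER_LONG;      /* existing macro, same value */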
Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Olga K. observed that rpcrdma_marshal_req() allocates sparse pages
only when it has determined that a Reply chunk is necessary. There
are plenty of cases where no Reply chunk is needed, but the
XDRBUF_SPARSE_PAGES flag is set. The result would be a crash in
rpcrdma_inline_fixup() when it tries to copy parts of the received
Reply into a missing page.
To avoid crashing, handle sparse page allocation up front.
Until XATTR support was added, this issue did not appear often
because the only SPARSE_PAGES consumer always expected a reply large
enough to require a Reply chunk.
Reported-by: Olga Kornievskaia <kolga@netapp.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
XDRBUF_SPARSE_PAGES can cause problems for the RDMA transport,
and it's easy enough to allocate enough pages for the request
up front, so do that.
Also, since we've allocated the pages anyway, use the full
page aligned length for the receive buffer. This will allow
caching of valid replies that are too large for the caller,
but that still fit in the allocated pages.
Signed-off-by: Frank van der Linden <fllinden@amazon.com>
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
When receiving page data, the return value 'ret', when positive, includes
`buf->page_base`, so we should subtract that before it is used for
changing `offset` and comparing against `want`.
This was discovered in the very rare case where the server returned a
chunk of bytes that, when added to the already-received amount of bytes
for the pages, happened to match the current `recv.len`, for example
in this case:
buf->page_base : 258356
actually received from socket: 1740
ret : 260096
want : 260096
In this case neither of the two 'if ... goto out' checks triggers, and we
continue to tail parsing.
It is worth mentioning that the ensuing EMSGSIZE from the continued execution
of `xs_read_xdr_buf` may be observed by an application due to 4 superfluous
bytes being added to the page data.
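Schematically, the described change amounts to something like this (a
sketch, not the literal hunk):

  /* 'ret' is counted from the start of the page area including
   * page_base, while 'offset' and 'want' are not, so normalise it
   * before it feeds into the subsequent checks.
   */
  if (ret > 0)
          ret -= buf->page_base;
  offset += ret;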
Fixes: 277e4ab7d530 ("SUNRPC: Simplify TCP receive code by switching to using iterators")
Signed-off-by: Dan Aloni <dan@kernelim.com>
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
edac-updates-for-v5.11
Signed-off-by: Borislav Petkov <bp@suse.de>
|
|
When xen_blkif_disconnect() is called, the kernel thread behind the
block interface is stopped by calling kthread_stop(ring->xenblkd).
The ring->xenblkd thread pointer being non-NULL determines if the
thread has already been stopped.
Normally, the thread's function xen_blkif_schedule() sets the
ring->xenblkd to NULL when the thread's main loop ends.
However, when the thread has not been started yet (i.e.
wake_up_process() has not been called on it), the xen_blkif_schedule()
function would not be called yet.
In such a case the kthread_stop() call returns -EINTR and the
ring->xenblkd remains dangling.
When this happens, any subsequent call to xen_blkif_disconnect() (for
example in the frontend_changed() callback) leads to a kernel crash in
kthread_stop() (e.g. a NULL pointer dereference in exit_creds()).
This is XSA-350.
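One way the dangling pointer can be avoided looks roughly like this (an
illustrative sketch, not necessarily the literal patch):

  /* xen_blkif_disconnect(): clear the pointer even when kthread_stop()
   * returned -EINTR because the thread was never woken up.
   */
  if (ring->xenblkd) {
          kthread_stop(ring->xenblkd);
          ring->xenblkd = NULL;
  }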
Cc: <stable@vger.kernel.org> # 4.12
Fixes: a24fa22ce22a ("xen/blkback: don't use xen_blkif_get() in xen-blkback kthread")
Reported-by: Olivier Benjamin <oliben@amazon.com>
Reported-by: Pawel Wieczorkiewicz <wipawel@amazon.de>
Signed-off-by: Pawel Wieczorkiewicz <wipawel@amazon.de>
Reviewed-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
|
|
'xenbus_backend' watches the 'state' of devices, which is writable by
guests. Hence, if guests update it intensively, dom0 will have lots of
pending events that exhaust dom0's memory. In other words, guests can
trigger dom0 memory pressure. This is known as XSA-349. However, the
watch callback for it, 'frontend_changed()', reads only 'state', so it
doesn't need to keep the pending events.
To avoid the problem, this commit disallows pending watch messages for
'xenbus_backend' using the 'will_handle()' watch callback.
This is part of XSA-349
Cc: stable@vger.kernel.org
Signed-off-by: SeongJae Park <sjpark@amazon.de>
Reported-by: Michael Kurth <mku@amazon.de>
Reported-by: Pawel Wieczorkiewicz <wipawel@amazon.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
|
|
This commit adds a counter of pending messages for each watch to the
struct. It is used to skip unnecessary pending-message lookups in
'unregister_xenbus_watch()'. It could also be used in the 'will_handle'
callback.
This is part of XSA-349
Cc: stable@vger.kernel.org
Signed-off-by: SeongJae Park <sjpark@amazon.de>
Reported-by: Michael Kurth <mku@amazon.de>
Reported-by: Pawel Wieczorkiewicz <wipawel@amazon.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
|
|
This commit adds support for the 'will_handle' watch callback for
'xen_bus_type' users.
This is part of XSA-349
Cc: stable@vger.kernel.org
Signed-off-by: SeongJae Park <sjpark@amazon.de>
Reported-by: Michael Kurth <mku@amazon.de>
Reported-by: Pawel Wieczorkiewicz <wipawel@amazon.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
|
|
Some code does not directly create a 'xenbus_watch' object and call
'register_xenbus_watch()', but uses 'xenbus_watch_path()' instead. This
commit adds support for the 'will_handle' callback in
'xenbus_watch_path()' and its wrapper, 'xenbus_watch_pathfmt()'.
This is part of XSA-349
Cc: stable@vger.kernel.org
Signed-off-by: SeongJae Park <sjpark@amazon.de>
Reported-by: Michael Kurth <mku@amazon.de>
Reported-by: Pawel Wieczorkiewicz <wipawel@amazon.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
|
|
If the handling logic for watch events is slower than the event enqueue
logic and the events can be created by the guests, the guests could
trigger memory pressure by intensively inducing the events, because this
will create a huge number of pending events that exhaust the memory.
Fortunately, some watch events could be ignored, depending on their
handler callback. For example, if the callback is interested in only one
single path, the watch wouldn't want multiple pending events. Or, some
watches could ignore events for the same path.
To let such watches voluntarily help avoid the memory pressure
situation, this commit introduces a new watch callback, 'will_handle'. If
it is not NULL, it will be called for each new event just before
enqueuing it. Then, if the callback returns false, the event will be
discarded. No watch is using the callback for now, though.
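As an illustration, a watch interested in exactly one node could use a
'will_handle' callback along these lines (the callback itself is
hypothetical; the field names follow struct xenbus_watch):

  /* Only queue events for the path we registered; drop anything else
   * before it can accumulate as a pending event.
   */
  static bool my_will_handle(struct xenbus_watch *watch,
                             const char *path, const char *token)
  {
          return !strcmp(path, watch->node);
  }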
This is part of XSA-349
Cc: stable@vger.kernel.org
Signed-off-by: SeongJae Park <sjpark@amazon.de>
Reported-by: Michael Kurth <mku@amazon.de>
Reported-by: Pawel Wieczorkiewicz <wipawel@amazon.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
|
|
syzbot spotted a potential out-of-bounds shift in the PCM OSS layer
where it calculates the buffer size with the arbitrary shift value
given via an ioctl.
Add a range check to avoid the undefined behavior.
As the value can be treated as a signed integer, the max shift should
be 30.
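Conceptually the added check is (a sketch, not the exact hunk):

  /* 1 << 31 would overflow a signed 32-bit value, so cap the shift. */
  if (shift > 30)
          return -EINVAL;
  size = 1 << shift;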
Reported-by: syzbot+df7dc146ebdd6435eea3@syzkaller.appspotmail.com
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20201209084552.17109-2-tiwai@suse.de
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
syzbot spotted a potential out-of-bounds shift in the USB-audio format
parser that receives the arbitrary shift value from the USB
descriptor.
Add a range check to avoid the undefined behavior.
Reported-by: syzbot+df7dc146ebdd6435eea3@syzkaller.appspotmail.com
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20201209084552.17109-1-tiwai@suse.de
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
Use ERR_CAST() when devm_ioremap_resource() fails.
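The pattern in question, sketched with generic names:

  base = devm_ioremap_resource(dev, res);
  if (IS_ERR(base))
          return ERR_CAST(base);  /* propagate the error as the caller's pointer type */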
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Link: https://lore.kernel.org/r/20201214065421.5138-1-peter.ujfalusi@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
|
|
Merge tag 'x86-urgent-2020-12-13' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Thomas Gleixner:
"A set of x86 and membarrier fixes:
- Correct a few problems in the x86 and the generic membarrier
implementation. Small corrections for assumptions about visibility
which have turned out not to be true.
- Make the PAT bits for memory encryption correct vs 4K and 2M/1G
page table entries as they are at a different location.
- Fix a concurrency issue in the local bandwidth readout of
resource control leading to incorrect values
- Fix the ordering of allocating a vector for an interrupt. The
ordering failed to respect the provided cpumask when the first attempt
at allocating a node-local vector in the mask fails. It then tries the
node instead of trying the full provided mask first. This leads to
erroneous error messages and breaks the (user) supplied affinity
request. Reorder it.
- Make the INT3 padding detection in optprobe work correctly"
* tag 'x86-urgent-2020-12-13' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/kprobes: Fix optprobe to detect INT3 padding correctly
x86/apic/vector: Fix ordering in vector assignment
x86/resctrl: Fix incorrect local bandwidth when mba_sc is enabled
x86/mm/mem_encrypt: Fix definition of PMD_FLAGS_DEC_WP
membarrier: Execute SYNC_CORE on the calling thread
membarrier: Explicitly sync remote cores when SYNC_CORE is requested
membarrier: Add an actual barrier before rseq_preempt()
x86/membarrier: Get rid of a dubious optimization
|
|
Merge tag 'block-5.10-2020-12-12' of git://git.kernel.dk/linux-block
Pull block fixes from Jens Axboe:
"This should be it for 5.10.
Mike and Song looked into the warning case, and thankfully it appears
the fix was pretty trivial - we can just change the md device chunk
type to unsigned int to get rid of it. They cannot currently be < 0,
and nobody is checking for that either.
We're reverting the discard changes as the corruption reports came in
very late, and there's just no time to attempt to deal with it at this
point. Reverting the changes in question is the right call for 5.10"
* tag 'block-5.10-2020-12-12' of git://git.kernel.dk/linux-block:
md: change mddev 'chunk_sectors' from int to unsigned
Revert "md: add md_submit_discard_bio() for submitting discard bio"
Revert "md/raid10: extend r10bio devs to raid disks"
Revert "md/raid10: pull codes that wait for blocked dev into one function"
Revert "md/raid10: improve raid10 discard request"
Revert "md/raid10: improve discard request for far layout"
Revert "dm raid: remove unnecessary discard limits for raid10"
|
|
Unlike the virtio_balloon/virtio_mem/xen balloon drivers, the Hyper-V balloon
driver does not adjust the managed pages count when ballooning/un-ballooning,
and this leads to incorrect stats being reported, e.g. unexpected 'free'
output.
Note: the calculation in post_status() seems to remain correct: ballooned-out
pages are never 'available' and we manually add dm->num_pages_ballooned to
'committed'.
Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Link: https://lore.kernel.org/r/20201202161245.2406143-3-vkuznets@redhat.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
|
|
'alloc_unit' in alloc_balloon_pages() is either '512' for 2M allocations or
'1' for 4k allocations. So
1 << get_order(alloc_unit << PAGE_SHIFT)
equals 'alloc_unit', and the for loop basically sets them all offline.
Simplify the math to improve readability.
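The arithmetic is easy to verify by hand (illustrative, assuming
PAGE_SHIFT = 12):

  /* alloc_unit = 512 (2M): get_order(512 << 12) = get_order(2MB) = 9,
   *                        1 << 9 = 512
   * alloc_unit = 1   (4k): get_order(1 << 12)   = get_order(4KB) = 0,
   *                        1 << 0 = 1
   * So '1 << get_order(alloc_unit << PAGE_SHIFT)' is always alloc_unit.
   */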
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Link: https://lore.kernel.org/r/20201202161245.2406143-2-vkuznets@redhat.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
|
|
On shutdown the driver core calls the bus' shutdown callback also for
unbound devices. A driver's shutdown callback however is only called for
devices bound to this driver. Commit 9c30921fe799 ("driver core:
platform: use bus_type functions") changed the platform bus from driver
callbacks to bus callbacks, so the shutdown function must be prepared to
be called without a driver. Add the corresponding check in the shutdown
function.
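The added check is essentially (a sketch of the described change):

  static void platform_shutdown(struct device *_dev)
  {
          struct platform_device *dev = to_platform_device(_dev);
          struct platform_driver *drv;

          /* The bus callback also runs for devices without a bound driver. */
          if (!_dev->driver)
                  return;

          drv = to_platform_driver(_dev->driver);
          if (drv->shutdown)
                  drv->shutdown(dev);
  }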
Fixes: 9c30921fe799 ("driver core: platform: use bus_type functions")
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Link: https://lore.kernel.org/r/20201212235533.247537-1-dmitry.baryshkov@linaro.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
In the !CONFIG_GENERIC_CMOS_UPDATE case the update_persistent_clock64() function
gets defined as a stub in ntp.c - make the prototype in <linux/timekeeping.h>
conditional on CONFIG_GENERIC_CMOS_UPDATE as well.
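That is, something along these lines (sketch):

  /* <linux/timekeeping.h> */
  #ifdef CONFIG_GENERIC_CMOS_UPDATE
  extern int update_persistent_clock64(struct timespec64 now);
  #endif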
Fixes: 76e87d96b30b5 ("ntp: Consolidate the RTC update implementation")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
|
|
According to the X.25 documentation, there was a plan to implement
X.25-over-802.2-LLC. It was never finished but left various code stubs in the
X.25 code. At this time it is unlikely that it will ever be finished, so it
may be better to remove those code stubs.
Also change the documentation to make it clear that this is not an ongoing
plan anymore. Change words like "will" to "could", "would", etc.
Cc: Martin Schiller <ms@dev.tdt.de>
Signed-off-by: Xie He <xie.he.0141@gmail.com>
Link: https://lore.kernel.org/r/20201209033346.83742-1-xie.he.0141@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
On a few of our systems, I found that frequent 'unshare(CLONE_NEWNET)' calls
make the number of active slab objects, including those of the
'sock_inode_cache' type, increase rapidly and continuously. As a result,
memory pressure occurs.
In more detail, I made an artificial reproducer that resembles the
workload on which we found the problem and reproduces it faster. It
merely repeats 'unshare(CLONE_NEWNET)' 50,000 times in a loop and takes
about 2 minutes. On a 40 CPU core / 70GB DRAM machine, the available
memory continuously decreased at a fast rate (about 120MB per second,
15GB in total within the 2 minutes). Note that the issue doesn't
reproduce on every machine. On my 6 CPU core machine, the problem
didn't reproduce.
'cleanup_net()' and 'fqdir_work_fn()' are functions that deallocate the
relevant memory objects. They are asynchronously invoked by the work
queues and internally use 'rcu_barrier()' to ensure safe destruction.
'cleanup_net()' works in a batched manner in a single-threaded worker,
while 'fqdir_work_fn()' works for each 'fqdir_exit()' call in the
'system_wq'. Therefore, 'fqdir_work_fn()' was called frequently under the
workload and made the contention for 'rcu_barrier()' high. In more
detail, the global mutex, 'rcu_state.barrier_mutex', became the
bottleneck.
This commit avoids such contention by doing the 'rcu_barrier()' and the
subsequent lightweight work in a batched manner, similar to
'cleanup_net()'. The fqdir hashtable destruction, which is done before
the 'rcu_barrier()', is still allowed to run in parallel for fast
processing, but this commit makes it use a dedicated work queue
instead of the 'system_wq', to make sure that the number of threads is
bounded.
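Schematically, the batching pattern looks like this (a simplified sketch;
the llist field and the exact queueing are illustrative, not the literal
patch):

  static LLIST_HEAD(fqdir_free_list);

  static void fqdir_free_fn(struct work_struct *work)
  {
          struct llist_node *kill_list;
          struct fqdir *fqdir, *tmp;

          /* Take everything queued so far and pay for rcu_barrier() once. */
          kill_list = llist_del_all(&fqdir_free_list);
          rcu_barrier();

          llist_for_each_entry_safe(fqdir, tmp, kill_list, free_list)
                  kfree(fqdir);
  }
  static DECLARE_WORK(fqdir_free_work, fqdir_free_fn);

  static void fqdir_work_fn(struct work_struct *work)
  {
          struct fqdir *fqdir = container_of(work, struct fqdir, destroy_work);

          /* Hashtable teardown still runs per fqdir, but this work is queued
           * on a dedicated, bounded workqueue instead of system_wq.
           */
          rhashtable_free_and_destroy(&fqdir->rhashtable, inet_frags_free_cb, NULL);

          if (llist_add(&fqdir->free_list, &fqdir_free_list))
                  queue_work(system_wq, &fqdir_free_work);
  }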
Signed-off-by: SeongJae Park <sjpark@amazon.de>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20201211112405.31158-1-sjpark@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
If the interrupt trigger is not set when requesting the interrupt, the core
will take care of reading the trigger type from the Devicetree. There is no
point in doing it in the driver.
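That is, instead of reading the trigger type in the driver, pass 0 and let
the core pick it up from DT (generic sketch; 'my_handler' and 'priv' are
placeholders):

  /* No IRQF_TRIGGER_* flags: the irq core uses the trigger type already
   * configured from the 'interrupts' property in the Devicetree.
   */
  ret = devm_request_irq(dev, irq, my_handler, 0, dev_name(dev), priv);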
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Link: https://lore.kernel.org/r/20201210211824.214949-1-krzk@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
MT7530 has a global RX length register, so we are actually changing its
MRU.
Enable MTU normalization for this reason.
Signed-off-by: DENG Qingfang <dqfext@gmail.com>
Acked-by: Landen Chao <landen.chao@mediatek.com>
Reviewed-by: Vladimir Oltean <olteanv@gmail.com>
Link: https://lore.kernel.org/r/20201210170322.3433-1-dqfext@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Pull SCSI fixes from James Bottomley:
"Five small fixes. Four in drivers:
- hisi_sas: fix internal queue timeout
- be2iscsi: revert a prior fix causing problems
- bnx2i: add missing dependency
- storvsc: late arriving revert of a problem fix
and one in the core.
The core one is a minor change to stop paying attention to the busy
count when returning out of resources because there's a race window
where the queue might not restart due to missing returning I/O"
* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
Revert "scsi: storvsc: Validate length of incoming packet in storvsc_on_channel_callback()"
scsi: hisi_sas: Select a suitable queue for internal I/Os
scsi: core: Fix race between handling STS_RESOURCE and completion
scsi: be2iscsi: Revert "Fix a theoretical leak in beiscsi_create_eqs()"
scsi: bnx2i: Requires MMU
|
|
Merge branch 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux
Pull i2c fix from Wolfram Sang:
"Bugfix for the AT24 EEPROM driver"
* 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
misc: eeprom: at24: fix NVMEM name with custom AT24 device name
|
|
Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next
Steffen Klassert says:
====================
pull request (net-next): ipsec-next 2020-12-12
Just one patch this time:
1) Redact the SA keys with kernel lockdown confidentiality.
If enabled, no secret keys are sent to userspace.
From Antony Antony.
* 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next:
xfrm: redact SA secret with lockdown confidentiality
====================
Link: https://lore.kernel.org/r/20201212085737.2101294-1-steffen.klassert@secunet.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Merge tag 'wireless-drivers-next-2020-12-12' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next
Kalle Valo says:
====================
wireless-drivers-next patches for v5.11
Second set of patches for v5.11. iwlwifi gained support for the new
6 GHz band and rtw88 got a new channel. Lots of new features for mt76,
and ath11k now has working suspend for PCI devices. And as always,
smaller fixes and cleanups all over.
Major changes:
rtw88
* add support for channel 144
mt76
* support for more sta interfaces on mt7615/mt7915
* mt7915 encapsulation offload
* performance improvements
* channel noise report on mt7915
* mt7915 testmode support
* mt7915 DBDC support
iwlwifi
* support 6 GHz band
ath11k
* suspend support for QCA6390 PCI devices
* support TXOP duration based RTS threshold
* mesh: add support for 256 bitmap in blockack frames in 11ax
* tag 'wireless-drivers-next-2020-12-12' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next: (197 commits)
ath11k: implement suspend for QCA6390 PCI devices
ath11k: hif: add ce irq enable and disable functions
ath11k: implement WoW enable and wakeup commands
ath11k: set credit_update flag for flow controlled ep only
ath11k: dp: stop rx pktlog before suspend
ath11k: htc: implement suspend handling
ath11k: htc: remove unused struct ath11k_htc_ops
ath11k: pci: read select_window register to ensure write is finished
ath11k: hif: implement suspend and resume functions
ath11k: mhi: hook suspend and resume
ath11k: Fix incorrect tlvs in scan start command
ath11k: pci: disable VDD4BLOW
ath11k: pci: fix L1ss clock unstable problem
ath11k: pci: fix hot reset stability issues
ath11k: put hw to DBS using WMI_PDEV_SET_HW_MODE_CMDID
ath11k: mhi: print a warning if firmware crashed
ath11k: use MHI provided APIs to allocate and free MHI controller
ath10k: add atomic protection for device recovery
ath10k: add option for chip-id based BDF selection
mt76: remove unused variable q
...
====================
Link: https://lore.kernel.org/r/20201212050839.EF50EC433C6@smtp.codeaurora.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|