Age | Commit message | Author |
|
Add KEY_ALLOC_BUILT_IN to convey that a key should have KEY_FLAG_BUILTIN
set rather than setting it after the fact.
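As a hedged sketch of a caller (illustrative only; keyring, cert and cert_len are
placeholders, the rest is the existing key_create_or_update() API), the flag is
passed at allocation time instead of setting KEY_FLAG_BUILTIN afterwards:
	key_ref_t kref;

	kref = key_create_or_update(make_key_ref(keyring, 1),
				    "asymmetric", NULL, cert, cert_len,
				    (KEY_POS_ALL & ~KEY_POS_SETATTR) | KEY_USR_VIEW,
				    KEY_ALLOC_NOT_IN_QUOTA | KEY_ALLOC_BUILT_IN);
	if (IS_ERR(kref))
		return PTR_ERR(kref);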
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
|
|
The comment is out of date. Also point out the performance drawback
of the barrier(); __builtin_memcpy(); barrier() sequence followed by
another copy from the stack (__u) to the lvalue.
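For context, the construct in question is roughly the READ_ONCE() of that era
(a sketch, abbreviated):
	#define READ_ONCE(x)						\
	({								\
		union { typeof(x) __val; char __c[1]; } __u;		\
		__read_once_size(&(x), __u.__c, sizeof(x));		\
		__u.__val;						\
	})

	/* The generic fallback in __read_once_size() is
	 * barrier(); __builtin_memcpy(res, p, size); barrier();
	 * after which the statement expression above copies __u.__val
	 * into the final lvalue -- i.e. two copies in total. */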
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1453757600-11441-1-git-send-email-konrad.wilk@oracle.com
[ Made it a bit more readable. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Implement functions to get the id of the next existing quota structure in a
quota file for quota-tree-based formats, and thus for the V2 quota format.
Signed-off-by: Jan Kara <jack@suse.cz>
|
|
Add infrastructure for supporting the get_nextdqblk() callback for VFS
quotas. Translate the operation into a callback to the appropriate
filesystem and consequently to the quota format callback.
Signed-off-by: Jan Kara <jack@suse.cz>
|
|
Lockdep is initialized at compile time now. Get rid of lockdep_init().
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Krinkin <krinkin.m.u@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Cc: mm-commits@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Mike said:
: CONFIG_UBSAN_ALIGNMENT breaks x86-64 kernel with lockdep enabled, i.e.
: kernel with CONFIG_UBSAN_ALIGNMENT=y fails to load without even any error
: message.
:
: The problem is that ubsan callbacks use spinlocks and might be called
: before lockdep is initialized. Particularly this line in the
: reserve_ebda_region function causes the problem:
:
: lowmem = *(unsigned short *)__va(BIOS_LOWMEM_KILOBYTES);
:
: If I put lockdep_init() before the reserve_ebda_region call in
: x86_64_start_reservations, the kernel loads fine.
Fix this ordering issue permanently: change lockdep so that it uses hlists
for the hash tables. Unlike a list_head, an hlist_head is in its
initialized state when it is all-zeroes, so lockdep is ready for operation
immediately upon boot - lockdep_init() need not have run.
The patch will also save some memory.
Probably lockdep_init() and lockdep_initialized can be done away with now.
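A minimal illustration of the property being relied on (generic sketch, not the
lockdep hash tables themselves):
	static struct hlist_head hash_table[4096];	/* zeroed BSS == empty lists */
	static struct list_head  old_table[4096];	/* each entry would need INIT_LIST_HEAD() */

	/* hlist_head is just { .first = NULL }, so an all-zero array is already
	 * in its initialized state and usable at the earliest boot stages.
	 * list_head must point back at itself, so a zeroed list_head is not a
	 * valid empty list until INIT_LIST_HEAD() has run. */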
Suggested-by: Mike Krinkin <krinkin.m.u@gmail.com>
Reported-by: Mike Krinkin <krinkin.m.u@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Cc: mm-commits@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
schedstats is very useful during debugging and performance tuning but it
incurs overhead to calculate the stats. As such, even though it can be
disabled at build time, it is often enabled as the information is useful.
This patch adds a kernel command-line option and a sysctl tunable to enable
or disable schedstats on demand (when it's built in). It is disabled
by default, as someone who knows they need it can also learn to enable
it when necessary.
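A usage sketch, assuming the names this patch uses in mainline (the schedstats=
boot parameter and the kernel.sched_schedstats sysctl):
	schedstats=enable                              (kernel command line)
	echo 1 > /proc/sys/kernel/sched_schedstats     (at runtime)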
The benefits depend on how scheduler-intensive the workload is.
If it is, then the patch reduces the number of cycles spent calculating
the stats, with a small additional benefit from reducing the cache
footprint of the scheduler.
These measurements were taken on a 48-core 2-socket
machine with Xeon(R) E5-2670 v3 CPUs, although the patch was also tested on a
single-socket 8-core machine with an Intel i7-3770 processor.
netperf-tcp
4.5.0-rc1 4.5.0-rc1
vanilla nostats-v3r1
Hmean 64 560.45 ( 0.00%) 575.98 ( 2.77%)
Hmean 128 766.66 ( 0.00%) 795.79 ( 3.80%)
Hmean 256 950.51 ( 0.00%) 981.50 ( 3.26%)
Hmean 1024 1433.25 ( 0.00%) 1466.51 ( 2.32%)
Hmean 2048 2810.54 ( 0.00%) 2879.75 ( 2.46%)
Hmean 3312 4618.18 ( 0.00%) 4682.09 ( 1.38%)
Hmean 4096 5306.42 ( 0.00%) 5346.39 ( 0.75%)
Hmean 8192 10581.44 ( 0.00%) 10698.15 ( 1.10%)
Hmean 16384 18857.70 ( 0.00%) 18937.61 ( 0.42%)
Small gains here; UDP_STREAM showed nothing interesting and neither did
the TCP_RR tests. The gains on the 8-core machine were very similar.
tbench4
4.5.0-rc1 4.5.0-rc1
vanilla nostats-v3r1
Hmean mb/sec-1 500.85 ( 0.00%) 522.43 ( 4.31%)
Hmean mb/sec-2 984.66 ( 0.00%) 1018.19 ( 3.41%)
Hmean mb/sec-4 1827.91 ( 0.00%) 1847.78 ( 1.09%)
Hmean mb/sec-8 3561.36 ( 0.00%) 3611.28 ( 1.40%)
Hmean mb/sec-16 5824.52 ( 0.00%) 5929.03 ( 1.79%)
Hmean mb/sec-32 10943.10 ( 0.00%) 10802.83 ( -1.28%)
Hmean mb/sec-64 15950.81 ( 0.00%) 16211.31 ( 1.63%)
Hmean mb/sec-128 15302.17 ( 0.00%) 15445.11 ( 0.93%)
Hmean mb/sec-256 14866.18 ( 0.00%) 15088.73 ( 1.50%)
Hmean mb/sec-512 15223.31 ( 0.00%) 15373.69 ( 0.99%)
Hmean mb/sec-1024 14574.25 ( 0.00%) 14598.02 ( 0.16%)
Hmean mb/sec-2048 13569.02 ( 0.00%) 13733.86 ( 1.21%)
Hmean mb/sec-3072 12865.98 ( 0.00%) 13209.23 ( 2.67%)
Small gains of 2-4% at low thread counts, otherwise flat. The
gains on the 8-core machine were slightly different:
tbench4 on 8-core i7-3770 single socket machine
Hmean mb/sec-1 442.59 ( 0.00%) 448.73 ( 1.39%)
Hmean mb/sec-2 796.68 ( 0.00%) 794.39 ( -0.29%)
Hmean mb/sec-4 1322.52 ( 0.00%) 1343.66 ( 1.60%)
Hmean mb/sec-8 2611.65 ( 0.00%) 2694.86 ( 3.19%)
Hmean mb/sec-16 2537.07 ( 0.00%) 2609.34 ( 2.85%)
Hmean mb/sec-32 2506.02 ( 0.00%) 2578.18 ( 2.88%)
Hmean mb/sec-64 2511.06 ( 0.00%) 2569.16 ( 2.31%)
Hmean mb/sec-128 2313.38 ( 0.00%) 2395.50 ( 3.55%)
Hmean mb/sec-256 2110.04 ( 0.00%) 2177.45 ( 3.19%)
Hmean mb/sec-512 2072.51 ( 0.00%) 2053.97 ( -0.89%)
In contrast, this shows a relatively steady 2-3% gain at higher thread
counts. Given the nature of the patch and the type of workload, it is
no surprise that the result depends on the CPU used.
hackbench-pipes
4.5.0-rc1 4.5.0-rc1
vanilla nostats-v3r1
Amean 1 0.0637 ( 0.00%) 0.0660 ( -3.59%)
Amean 4 0.1229 ( 0.00%) 0.1181 ( 3.84%)
Amean 7 0.1921 ( 0.00%) 0.1911 ( 0.52%)
Amean 12 0.3117 ( 0.00%) 0.2923 ( 6.23%)
Amean 21 0.4050 ( 0.00%) 0.3899 ( 3.74%)
Amean 30 0.4586 ( 0.00%) 0.4433 ( 3.33%)
Amean 48 0.5910 ( 0.00%) 0.5694 ( 3.65%)
Amean 79 0.8663 ( 0.00%) 0.8626 ( 0.43%)
Amean 110 1.1543 ( 0.00%) 1.1517 ( 0.22%)
Amean 141 1.4457 ( 0.00%) 1.4290 ( 1.16%)
Amean 172 1.7090 ( 0.00%) 1.6924 ( 0.97%)
Amean 192 1.9126 ( 0.00%) 1.9089 ( 0.19%)
Some small gains and losses; while the variance data is not included,
the results are close to the noise. The UMA machine did not show anything
particularly different.
pipetest
4.5.0-rc1 4.5.0-rc1
vanilla nostats-v2r2
Min Time 4.13 ( 0.00%) 3.99 ( 3.39%)
1st-qrtle Time 4.38 ( 0.00%) 4.27 ( 2.51%)
2nd-qrtle Time 4.46 ( 0.00%) 4.39 ( 1.57%)
3rd-qrtle Time 4.56 ( 0.00%) 4.51 ( 1.10%)
Max-90% Time 4.67 ( 0.00%) 4.60 ( 1.50%)
Max-93% Time 4.71 ( 0.00%) 4.65 ( 1.27%)
Max-95% Time 4.74 ( 0.00%) 4.71 ( 0.63%)
Max-99% Time 4.88 ( 0.00%) 4.79 ( 1.84%)
Max Time 4.93 ( 0.00%) 4.83 ( 2.03%)
Mean Time 4.48 ( 0.00%) 4.39 ( 1.91%)
Best99%Mean Time 4.47 ( 0.00%) 4.39 ( 1.91%)
Best95%Mean Time 4.46 ( 0.00%) 4.38 ( 1.93%)
Best90%Mean Time 4.45 ( 0.00%) 4.36 ( 1.98%)
Best50%Mean Time 4.36 ( 0.00%) 4.25 ( 2.49%)
Best10%Mean Time 4.23 ( 0.00%) 4.10 ( 3.13%)
Best5%Mean Time 4.19 ( 0.00%) 4.06 ( 3.20%)
Best1%Mean Time 4.13 ( 0.00%) 4.00 ( 3.39%)
A small improvement here, and similar gains were seen on the UMA machine.
The gain is small but it stands to reason that doing less work in the
scheduler is a good thing. The downside is that the lack of schedstats and
tracepoints may be surprising to experts doing performance analysis until
they discover the schedstats= parameter or the schedstats sysctl.
Stats collection is automatically activated for latencytop and sleep
profiling to alleviate the problem. For tracepoints, a simple warning is
emitted instead, as it is not safe to activate schedstats in the context
where it becomes known that the tracepoint may be wanted but is unavailable.
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Reviewed-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <mgalbraith@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1454663316-22048-1-git-send-email-mgorman@techsingularity.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Centralize gmux' ACPI HID in a header file and add apple_gmux_present().
This can be used by other drivers to activate quirks specific to dual
GPU MacBook Pros & Mac Pros. The alternative would be to hardcode DMI
or PCI IDs and amend them whenever Apple introduces a new machine.
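A hedged sketch of a consumer (the foo_* name is a placeholder):
	#include <linux/apple-gmux.h>

	static bool foo_needs_mac_dual_gpu_quirk(void)
	{
		/* Only gmux-equipped dual-GPU Macs need the quirk, so this
		 * replaces hardcoded DMI/PCI ID tables. */
		return apple_gmux_present();
	}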
Tested-by: Lukas Wunner <lukas@wunner.de>
[MBP 9,1 2012 intel IVB + nvidia GK107 pre-retina 15"]
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/89c23769058a340e5e11d4a7102f3793d3b0c94c.1452525860.git.lukas@wunner.de
|
|
Originally by Seth Forshee <seth.forshee@canonical.com>, 2012-10-04:
During graphics driver initialization it's useful to be able to mux
only the DDC to the inactive client in order to read the EDID. Add
a switch_ddc callback to allow capable handlers to provide this
functionality, and add vga_switcheroo_switch_ddc() to allow DRM
to mux only the DDC.
Modified by Dave Airlie <airlied@gmail.com>, 2012-12-22:
I can't figure out why I didn't like this, but I rewrote this [...]
to lock/unlock the ddc lines [...]. I think I'd prefer something
like that otherwise the interface got really ugly.
Modified by Lukas Wunner <lukas@wunner.de>, 2015-04 - 2015-10:
Change semantics of ->switch_ddc handler callback to return previous
DDC owner. Original version tried to determine previous DDC owner
with find_active_client() but this fails if the inactive client
registers before the active client.
Don't lock vgasr_mutex in _lock_ddc() / _unlock_ddc(); it can cause
deadlocks because (a) during a switch (with vgasr_mutex already held),
the GPU is woken, probes its outputs and tries to re-acquire vgasr_mutex
to lock the DDC lines; (b) likewise during a switch, the GPU is suspended
and calls cancel_delayed_work_sync() to stop output polling; if the poll
task is running at that moment, we may wait forever for it to finish.
Instead, lock mux_hw_lock when unregistering the handler because
the only reason why we'd want to lock vgasr_mutex in _lock_ddc() /
_unlock_ddc() is to block the handler from disappearing while DDC
lines are switched.
Also acquire mux_hw_lock in stage2 to avoid a race condition where
reading the EDID and switching happen simultaneously. Likewise on
MIGD / MDIS commands and on runtime suspend.
v2.1: Overhaul locking, squash commits (Daniel Vetter)
v2.2: Readability improvements (Thierry Reding)
v2.3: Overhaul locking once more
v2.4: Retain semantics of ->switchto handler callback to switch all
pins, including DDC (Daniel Vetter)
v5: Rename ddc_lock to mux_hw_lock: Since we acquire this both
when calling ->switch_ddc and ->switchto, it protects not just
access to the DDC lines but to the mux in general. This is in
line with the DRM convention to use low-level locks to avoid
concurrent hw access (e.g. i2c, dp_aux) which are often called
hw_lock (Daniel Vetter)
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=88861
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=61115
Tested-by: Lukas Wunner <lukas@wunner.de>
[MBP 9,1 2012 intel IVB + nvidia GK107 pre-retina 15"]
Cc: Seth Forshee <seth.forshee@canonical.com>
Cc: Dave Airlie <airlied@gmail.com>
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/e81ae9722b84c5ed591805fee3ea6dbf5dc6c4b3.1452525860.git.lukas@wunner.de
|
|
Allow handlers to declare their capabilities and allow clients to
obtain that information. So far we have these use cases:
* If the handler is able to switch DDC separately, clients need to
probe EDID with drm_get_edid_switcheroo(). We should allow them
to detect a capable handler to ensure this function only gets
called when needed.
* Likewise if the handler is unable to switch AUX separately, the active
client needs to communicate link training parameters to the inactive
client, which may then skip the AUX handshake and set up its output
with these pre-calibrated values (DisplayPort specification v1.1a,
section 2.5.3.3). Clients need a way to recognize such a situation.
The flags for the radeon_atpx_handler and amdgpu_atpx_handler are
initially set to 0; this can later be amended with
handler_flags |= VGA_SWITCHEROO_CAN_SWITCH_DDC;
once a ->switch_ddc callback is added.
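As a hedged sketch (assuming the vga_switcheroo_handler_flags() accessor from
this series; connector and adapter are placeholders), a client probing EDID
might do:
	struct edid *edid;

	if (vga_switcheroo_handler_flags() & VGA_SWITCHEROO_CAN_SWITCH_DDC)
		edid = drm_get_edid_switcheroo(connector, adapter);
	else
		edid = drm_get_edid(connector, adapter);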
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=88861
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=61115
Tested-by: Lukas Wunner <lukas@wunner.de>
[MBP 9,1 2012 intel IVB + nvidia GK107 pre-retina 15"]
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/2b0d93ed6e511ca09e95e45e0b35627f330fabce.1452525860.git.lukas@wunner.de
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs into for_next
|
|
GPIO chips have been around for years, but were never real devices;
instead they were piggy-backing on a parent device (such as a
platform_device or amba_device), and even that was always optional.
GPIO chips could also exist without any device at all, with their
struct device *parent (formerly *dev) pointer set to NULL.
When sysfs was in use, a mock device would be created, with the
optional parent assigned, or just floating orphaned with NULL
as parent; sysfs then used this mock device as parent.
We now create a gpio_device struct containing a real
struct device and move the subsystem over to using that. The
list of struct gpio_chip:s is augmented to hold struct
gpio_device:s and we find gpio_chip:s by first looking up
the struct gpio_device.
The struct gpio_device is designed to stay around even if the
gpio_chip is removed, so as to satisfy users in userspace
that need a backing data structure to hold the state of the
session initiated with e.g. a character device even if there is
no physical chip anymore.
From this point on, gpiochips are devices.
Cc: Johan Hovold <johan@kernel.org>
Cc: Michael Welling <mwelling@ieee.org>
Cc: Markus Pargmann <mpa@pengutronix.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
Devices may have limits on the number of fragments in an skb they support.
The current code uses a constant as the maximum number of fragments one
skb can hold and use.
When enabling scatter/gather and running traffic with many small messages,
the code uses the maximum number of fragments and may thereby violate
the limit of certain devices.
The patch introduces a global variable for the maximum number of fragments.
Signed-off-by: Hans Westgaard Ry <hans.westgaard.ry@oracle.com>
Reviewed-by: Håkon Bugge <haakon.bugge@oracle.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch removes the range-based information used for optimizations in
begin_cpu_access and end_cpu_access.
We have neither a user nor an implementation of range-based flushes. The
consensus seems to be that if we ever want something like that again (or
something even more robust using 2D/3D sub-range regions), we can use the
upcoming dma-buf sync ioctl for it.
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Signed-off-by: Tiago Vignatti <tiago.vignatti@intel.com>
Reviewed-by: Stéphane Marchesin <marcheu@chromium.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1450820214-12509-3-git-send-email-tiago.vignatti@intel.com
|
|
This patch adds max_burst to dma_get_slave_caps() so that clients
can query the burst capability of a slave DMA controller.
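A hedged sketch of a client query (chan is the client's DMA channel):
	struct dma_slave_caps caps;
	u32 max_burst = 1;

	if (!dma_get_slave_caps(chan, &caps))
		max_burst = caps.max_burst;
	/* clamp the requested burst length to max_burst before filling
	 * in dma_slave_config */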
Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com>
Signed-off-by: Caesar Wang <wxt@rock-chips.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Add a helper function, devm_add_action_or_reset(), which internally
calls devm_add_action(); if devm_add_action() fails, it executes the
given action immediately and returns the error code.
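A hedged usage sketch (the clk example and foo_* name are illustrative, not
from this patch; dev, clk and ret are placeholders):
	static void foo_clk_disable(void *data)
	{
		clk_disable_unprepare(data);
	}

	/* If registration fails, the action runs immediately and the error
	 * is returned, so no manual unwind is needed in the error path. */
	ret = devm_add_action_or_reset(dev, foo_clk_disable, clk);
	if (ret)
		return ret;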
Signed-off-by: Sudip Mukherjee <sudip@vectorindia.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
We support OFFSET_MAX just fine, so don't round down below it. Also
switch to using min_t to make the helper more readable.
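The resulting helper is then roughly:
	static inline loff_t nfs_size_to_loff_t(u64 size)
	{
		return min_t(u64, size, OFFSET_MAX);
	}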
Signed-off-by: Christoph Hellwig <hch@lst.de>
Fixes: 433c92379d9c ("NFS: Clean up nfs_size_to_loff_t()")
Cc: stable@vger.kernel.org # 2.6.23+
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
|
|
mlock *must* be used by core and drivers to protect access
to device state changes.
Signed-off-by: Daniel Baluta <daniel.baluta@intel.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
|
|
The default __UNIQUE_ID macro in compiler.h fails to work for some drivers:
drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c:615:1: error: redefinition of
'__UNIQUE_ID_firmware615'
BRCMF_FW_NVRAM_DEF(4354, "brcmfmac4354-sdio.bin", "brcmfmac4354-sdio.txt");
This adds a copy of the version we use for gcc-4.3 and higher, as the same
one works with all versions of clang that I could find in svn (2.6 and higher).
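For reference, the gcc-4.3+ variant being copied is roughly:
	/* __COUNTER__ yields a new value on every expansion, unlike the
	 * __LINE__-based fallback, which collides when a wrapper macro such
	 * as BRCMF_FW_NVRAM_DEF() expands more than once from one line. */
	#define __UNIQUE_ID(prefix) \
		__PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)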
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michal Marek <mmarek@suse.com>
|
|
Fix kernel-doc warning for missing struct field notation.
..//include/linux/spi/spi.h:540: warning: No description found for parameter 'max_transfer_size'
[Meaningful subject -- broonie]
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
In order to support extended timing parameters on hardware supporting the
"AAD" mode, like the AM335x or DM816x, add these entries to the GPMC driver
if the hardware is capable.
Tested on DM816x and AM335x.
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Roger Quadros <rogerq@ti.com>
|
|
Add stub for debugfs_create_automount() for when debugfs is not configured
in.
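Presumably the stub mirrors the other no-op debugfs stubs, roughly
(signature as of this era):
	static inline struct dentry *debugfs_create_automount(const char *name,
						struct dentry *parent,
						struct vfsmount *(*f)(void *),
						void *data)
	{
		return ERR_PTR(-ENODEV);
	}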
Signed-off-by: Jiaxing Wang <hello.wjx@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Other than plainly parsing the device tree there is no way to
know which CPU a tracer is affined to. As such, add an
interface to look up the CPU field in the etm_drvdata
structure that was initialised at boot time.
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
This change restricts the reading and setting of the head and tail
pointers on 32bit X86 to 32bit for both correctness and
performance reasons. On uniprocessor X86_32, the atomic64_read
may be implemented as a non-locked cmpxchg8b. This may result in
updates to the pointers done by the VMCI device being overwritten.
On MP systems, there is no such correctness issue, but using 32bit
atomics avoids the overhead of the locked 64bit operation. All this
is safe because the queue size on 32bit systems will never exceed
a 32bit value.
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Jorgen Hansen <jhansen@vmware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
On the channel send side, many of the VMBUS
device drivers explicitly serialize access to the
outgoing ring buffer. Give more control to the
VMBUS device drivers in terms of how to serialize
access to the outgoing ring buffer.
The default behavior will be to acquire the
ring lock to preserve the current behavior.
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
The hvsock driver needs this API to release all the resources related
to the channel.
Signed-off-by: Dexuan Cui <decui@microsoft.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
This will be used by the coming hv_sock driver.
Signed-off-by: Dexuan Cui <decui@microsoft.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Only the coming hv_sock driver has a "true" value for this flag.
We treat the hvsock offers/channels as special VMBus devices.
Since the hv_sock driver handles all the hvsock offers/channels, we need to
tweak vmbus_match() for the hv_sock driver, so we introduce this flag.
Signed-off-by: Dexuan Cui <decui@microsoft.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
A function to send this type of message is also added.
The coming net/hvsock driver will use this function to proactively request
the host to offer a VMBus channel for a new hvsock connection.
Signed-off-by: Dexuan Cui <decui@microsoft.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
A helper function is also added.
Signed-off-by: Dexuan Cui <decui@microsoft.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
This will be used by the coming net/hvsock driver.
Signed-off-by: Dexuan Cui <decui@microsoft.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Add vendor and device attributes to VMBUS devices. These will be used
by Hyper-V tools as well as user-level RDMA libraries that will use the
vendor/device tuple to discover the RDMA device.
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
This way we can pass back errors to the file system, and allow for
cleanup required for all direct I/O invocations.
Also allow the ->end_io handlers to return errors on their own, so that
I/O completion errors can be passed on to the callers.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
|
|
FSL-MC is a bus type different from PCI and platform, so it needs
its own member in the msi_desc's union.
Signed-off-by: J. German Rivera <German.Rivera@freescale.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Since an FSL-MC bus is a new bus type that is neither PCI nor
PLATFORM, we need a new domain bus token to disambiguate the
IRQ domain for FSL-MC MSIs.
Signed-off-by: J. German Rivera <German.Rivera@freescale.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
struct sync_pt was just wrapping around struct fence and creating an
extra abstraction layer. The only two members of struct sync_pt, child_list
and active_list, were moved to struct fence in an earlier commit. After
removing those two members struct sync_pt is nothing more than struct
fence, so remove it all and use struct fence directly.
Signed-off-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
We want the upstream staging fixes in here as well.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Q_XGETNEXTQUOTA is exactly like Q_XGETQUOTA, except that it
will return quota information for the id equal to or greater
than the id requested. In other words, if the requested id has
no quota, the command will return quota information for the
next higher id which does have a quota set. If no higher id
has an active quota, -ESRCH is returned.
This allows filesystems to do efficient iteration in kernelspace,
much like extN filesystems do in userspace when asked to report
all active quotas.
The patch adds a d_id field to struct qc_dqblk so that we can
pass back the id of the quota which was found, and return it
to userspace.
Today, filesystems such as XFS require getpwent-style iterations,
and for systems which have e.g. LDAP backends, this can be very
slow, or even impossible if iteration is not allowed in the
configuration.
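A hedged userspace sketch of the iteration this enables (error handling
trimmed; assumes struct fs_disk_quota and Q_XGETNEXTQUOTA come from
<linux/dqblk_xfs.h>, and dump_user_quotas is a placeholder):
	#include <stdio.h>
	#include <sys/quota.h>
	#include <linux/dqblk_xfs.h>

	static void dump_user_quotas(const char *dev)
	{
		struct fs_disk_quota d;
		unsigned int id = 0;

		/* Each call returns the quota for the lowest id >= the one
		 * requested; ESRCH means no higher id has a quota set. */
		while (quotactl(QCMD(Q_XGETNEXTQUOTA, USRQUOTA), dev, id,
				(void *)&d) == 0) {
			printf("id %u: %llu blocks\n", d.d_id,
			       (unsigned long long)d.d_bcount);
			id = d.d_id + 1;
		}
	}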
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Dave Chinner <david@fromorbit.com>
|
|
This patch changes the accounting of how many packets are
newly acked or sacked when the sender receives an ACK.
The current approach basically computes
newly_acked_sacked = (prior_packets - prior_sacked) -
(tp->packets_out - tp->sacked_out)
where prior_packets and prior_sacked are snapshotted
at the beginning of the ACK processing.
The new approach tracks the delivery information via a new
TCP state variable "delivered" which monotonically increases
as new packets are delivered, in order or out of order.
The reason for this change is that the current approach is
brittle and produces negative or inaccurate estimates.
1) For non-SACK connections, an ACK that advances the SND.UNA
could reset the DUPACK counters (tp->sacked_out) in
tcp_process_loss() or tcp_fastretrans_alert(). This inflates
the inflight suddenly and causes an under-estimate or even a
negative estimate. Here is a real example:
             before   after (processing ACK)
packets_out      75       73
sacked_out       23        0
ca state       Loss     Open
The old approach computes (75-23) - (73 - 0) = -21 delivered
while the new approach computes 1 delivered since it
considers the 2nd-24th packets are delivered OOO.
2) An MSS change would re-count packets_out and sacked_out, so
the estimate is inaccurate and can even become negative.
E.g., the inflight is doubled when MSS is halved.
3) Spurious retransmissions signaled by DSACK are not accounted for.
The new approach is simpler and more robust. For SACK connections,
tp->delivered increments as packets are being acked or sacked in
SACK and ACK processing.
For non-SACK connections, it's done in tcp_remove_reno_sacks() and
tcp_add_reno_sack(). When an ACK advances the SND.UNA, tp->delivered
is incremented by the number of packets ACKed (less the current
number of DUPACKs received plus one packet hole). Upon receiving
a DUPACK, tp->delivered is incremented assuming one out-of-order
packet is delivered.
Upon receiving a DSACK, tp->delivered is incremented assuming one
retransmission is delivered in tcp_sacktag_write_queue().
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Considering current pty code and multiple devpts instances, it's possible
to umount a devpts file system while a program still has /dev/tty open,
pointing to a previously closed pty pair in that instance. Once all
ptmx and pts/N files are closed, the umount can be done. If the program
closes /dev/tty after the umount is done, devpts_kill_index will then use
an invalid super_block, which was already destroyed by the umount operation
after running ->kill_sb. This is another "use after free" type of issue,
but now related to the allocated super_block instance.
To avoid the problem (warning at ida_remove and potential crashes) for
this specific case, I added two functions in devpts which grab additional
references to the super_block; the pty code now uses them to make sure
the super_block structure is still valid until pty shutdown is done.
I also moved the additional inode references into the same functions, which
also covers the similar case of the inode being freed before the final
/dev/tty close/shutdown.
Signed-off-by: Herton R. Krzesinski <herton@redhat.com>
Cc: stable@vger.kernel.org # 2.6.29+
Reviewed-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
These functions either never existed or were only used in
OF_CLK_DECLARE() macros. Remove the dead prototypes.
Cc: Jyri Sarha <jsarha@ti.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
We only use CLK_IS_ROOT in a few places in the common clk
framework core. Let's replace those checks with a check for the
number of parents a clk has instead of the flag, freeing up one
flag for something else. We don't remove the flag yet so that
things keep building, but we'll remove it once all drivers have
removed their flag usage.
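A hedged sketch of the kind of substitution in the core (clk_core field
names assumed):
	/* before */
	if (core->flags & CLK_IS_ROOT)
		return NULL;

	/* after: a clk with no parents is a root clk by definition */
	if (core->num_parents == 0)
		return NULL;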
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
DRIVER_NAME is too generic to be used in a driver-specific platform
data file. Use a name specific to the driver instead, to avoid
collisions.
Signed-off-by: Jean Delvare <jdelvare@suse.de>
Acked-by: Tony Lindgren <tony@atomide.com>
Cc: Jiri Slaby <jslaby@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
The implementation of the software emulation of RS485 direction handling is
based on the omap_serial driver.
Before and after transmission, RTS is set to the appropriate value.
Note that before calling serial8250_em485_init() the caller has to
ensure that the UART will interrupt when the shift register is empty.
Otherwise, the emulation cannot be used.
Both serial8250_em485_init() and serial8250_em485_destroy() are
idempotent functions.
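A hedged sketch of a driver's rs485_config handler using the new helpers
(foo_* is a placeholder; up_to_u8250p() and SER_RS485_ENABLED are existing
8250/serial definitions):
	static int foo_rs485_config(struct uart_port *port,
				    struct serial_rs485 *rs485)
	{
		struct uart_8250_port *up = up_to_u8250p(port);

		if (rs485->flags & SER_RS485_ENABLED)
			return serial8250_em485_init(up);

		/* Both init and destroy are idempotent, so this is safe
		 * even if emulation was never enabled. */
		serial8250_em485_destroy(up);
		return 0;
	}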
Signed-off-by: Matwey V. Kornilov <matwey@sai.msu.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Clean up the early DT/earlycon separation; remove the 'addr' parameter
from of_setup_earlycon() and get the uart phys addr directly with a
new wrapper function, of_flat_dt_translate_addr(). Limit
fdt_translate_address() to file scope.
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Read the optional "reg-offset", "reg-shift", "reg-io-width" and endianness
properties and initialize the respective struct uart_port field if found.
NB: These bindings are common to several drivers and merely provide
default values; the registering earlycon's setup() method can simply
override them if required.
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Pass-through any options string in the 'stdout-path' property to the
earlycon "driver" setup.
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Use the console name embedded in the OF earlycon table by the
OF_EARLYCON_DECLARE() macro to initialize the struct console 'name'
and 'index' fields.
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Use a single common table of struct earlycon_id for both command line
and devicetree. Re-define the OF_EARLYCON_DECLARE() macro to instantiate a
unique earlycon declaration (the declaration is only guaranteed to be
unique within a compilation unit; separate compilation units must still
use unique earlycon names).
The semantics of OF_EARLYCON_DECLARE() are different; it declares an
earlycon which can be matched either on the command line or by devicetree.
EARLYCON_DECLARE() is semantically unchanged; it declares an earlycon
which is matched by command line only. Remove redundant instances of
EARLYCON_DECLARE().
This enables all earlycons to properly initialize struct console
with the appropriate name and index, which improves diagnostics and
enables direct earlycon-to-console handoff.
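A declaration sketch (the name, compatible string and setup function are
placeholders):
	/* Matches "earlycon=myuart,..." on the command line as well as a
	 * devicetree stdout-path pointing at a "vendor,myuart" node. */
	OF_EARLYCON_DECLARE(myuart, "vendor,myuart", myuart_early_setup);

	/* Or, for a command-line-only earlycon (semantically unchanged): */
	EARLYCON_DECLARE(myuart, myuart_early_setup);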
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Now that usb_bus_list has been removed and switched to an idr,
rename the related mutex accordingly.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|