|
With the internal Quota feature, mke2fs creates empty quota inodes and
quota usage tracking is enabled as soon as the file system is mounted.
Since quotacheck is no longer preallocating all of the blocks in the
quota inode that are likely needed to be written to, we are now seeing
a lockdep false positive caused by needing to allocate a quota block
from inside ext4_map_blocks(), while holding i_data_sem for a data
inode. This results in this complaint:
Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&ei->i_data_sem);
                               lock(&s->s_dquot.dqio_mutex);
                               lock(&ei->i_data_sem);
  lock(&s->s_dquot.dqio_mutex);
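One common way to silence this kind of false positive is to put the quota
inodes' i_data_sem into its own lockdep subclass. The sketch below is
illustrative only; the enum values and helper name are assumptions, not
the exact fix:

  enum {
          I_DATA_SEM_NORMAL = 0,  /* ordinary data inodes */
          I_DATA_SEM_QUOTA,       /* hidden quota inodes */
  };

  /* Hypothetical helper: mark a quota inode's i_data_sem so lockdep no
   * longer treats it as the same lock class as a data inode's.
   */
  static void ext4_set_quota_lockdep_class(struct inode *inode)
  {
          lockdep_set_subclass(&EXT4_I(inode)->i_data_sem, I_DATA_SEM_QUOTA);
  }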
Google-Bug-Id: 27907753
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@vger.kernel.org
|
|
Commit e9036d066236 ("tty: Drop krefs for interrupted tty lock")
fixed a tty reference counting problem introduced in
commit 0bfd464d3fdd ("tty: Wait interruptibly for tty lock on reopen"),
so v4.5.0 is correct.
However, commit d6203d0c7b73 ("tty: Refactor tty_open()") moved the
relevant code for 4.6-rc1; correct the merge.
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Callers of drm_gem_object_unreference() are required to hold
dev->struct_mutex, which these paths don't. Enforcing this requirement
has become a bit more strict with
commit ef4c6270bf2867e2f8032e9614d1a8cfc6c71663
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Thu Oct 15 09:36:25 2015 +0200
drm/gem: Check locking in drm_gem_object_unreference
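For reference, a minimal sketch of the two calling conventions of that
era's GEM API (the helper names here are illustrative):

  /* Path that already holds the lock. */
  static void example_put_bo_locked(struct drm_device *dev,
                                    struct drm_gem_object *obj)
  {
          mutex_lock(&dev->struct_mutex);
          drm_gem_object_unreference(obj);
          mutex_unlock(&dev->struct_mutex);
  }

  /* Path that does not hold struct_mutex. */
  static void example_put_bo(struct drm_gem_object *obj)
  {
          drm_gem_object_unreference_unlocked(obj);
  }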
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
1) don't let other threads that are trying to bang on the aux channel
interrupt the defer/timeout logic, and
2) don't let other threads interrupt the i2c-over-aux logic.
Technically, according to people who actually have the DP spec, this
should not be required. In practice, it makes some troublesome Dell
monitor (and perhaps others) work, so probably a case of "It's compliant
if it works with Windows" on the hw vendor's part.
v2: rebased to come before DPCD/AUX logging patch for easier backport
to stable branches.
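A minimal sketch of the serialization being described, assuming a
per-aux-channel mutex held across the whole transaction (the helper name
is illustrative):

  static ssize_t example_dp_aux_transfer_locked(struct drm_dp_aux *aux,
                                                struct drm_dp_aux_msg *msg)
  {
          ssize_t ret;

          /* Hold the channel for the duration of this transfer; the real
           * helpers hold the mutex around the entire defer/retry loop so
           * other threads cannot interleave their own aux traffic.
           */
          mutex_lock(&aux->hw_mutex);
          ret = aux->transfer(aux, msg);
          mutex_unlock(&aux->hw_mutex);

          return ret;
  }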
Reported-by: Dave Wysochanski <dwysocha@redhat.com>
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1274157
Cc: stable@vger.kernel.org
Signed-off-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
Merge branch 'drm-rockchip-next-fixes-2016-03-28' of
https://github.com/markyzq/kernel-drm-rockchip into drm-fixes
Bunch of rockchip fixes.
* 'drm-rockchip-next-fixes-2016-03-28' of https://github.com/markyzq/kernel-drm-rockchip:
drm/rockchip: dw_hdmi: Don't call platform_set_drvdata()
drm/rockchip: vop: Fix vop crtc cleanup
drm/rockchip: dw_hdmi: Call drm_encoder_cleanup() in error path
drm/rockchip: vop: Disable planes when disabling CRTC
drm/rockchip: vop: Don't reject empty modesets
drm/rockchip: cancel pending vblanks on close
drm/rockchip: vop: fix crtc size in plane check
|
|
Merge branch 'msm-fixes-4.6-rc1' of
git://people.freedesktop.org/~robclark/linux into drm-fixes
Two minor msm fixes.
* 'msm-fixes-4.6-rc1' of git://people.freedesktop.org/~robclark/linux:
drm/msm: fix typo in the !COMMON_CLK case
drm/msm: fix bug after preclose removal
|
|
Merge branch 'drm-next-4.6' of git://people.freedesktop.org/~agd5f/linux
into drm-fixes
Just a few fixes for 4.6 this week:
- Add some SI DPM quirks
- Improve the ACP Kconfig text
- Additional BO pinning checks
* 'drm-next-4.6' of git://people.freedesktop.org/~agd5f/linux:
drm/amdgpu: Don't move pinned BOs
drm/radeon: Don't move pinned BOs
drm/radeon: add a dpm quirk for all R7 370 parts
drm/radeon: add another R7 370 quirk
drm/radeon: add a dpm quirk for sapphire Dual-X R7 370 2G D5
drm/amd: Beef up ACP Kconfig menu text
|
|
During revalidate we check whether device capacity has changed before we
decide whether to output disk information or not.
The check against the old capacity failed to take into account that
sdkp->capacity was scaled based on the reported logical block size, so
the capacity test would always fail for devices with sectors bigger than
512 bytes and we would print several copies of the same discovery
information.
Avoid scaling sdkp->capacity and instead adjust the value on the fly
when setting the block device capacity and generating fake C/H/S
geometry.
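A sketch of the on-the-fly conversion, assuming the logical block size is
a power of two of at least 512 bytes (the helper name mirrors what one
would expect, but is shown here as an assumption):

  static sector_t logical_to_sectors(struct scsi_device *sdev, sector_t blocks)
  {
          /* Convert logical blocks to 512-byte block-layer sectors. */
          return blocks << (ilog2(sdev->sector_size) - 9);
  }

  /* ... then, when publishing the capacity: */
  set_capacity(sdkp->disk, logical_to_sectors(sdp, sdkp->capacity));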
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Cc: <stable@vger.kernel.org>
Reported-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ewan Milne <emilne@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
|
|
User-Mode Linux supplies an alternate TTY_MAJOR driver for its stdio
console, so the noctty check in tty_open() must apply only to the VT
driver's tty0 devnode and not to the UML console driver's tty0 devnode.
Fixes: 11e1d4aa4da1 ("tty: Consolidate noctty checks in tty_open()")
Reported-by: Richard Weinberger <richard.weinberger@gmail.com>
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Telit LE910 V2 is a mobile broadband card with no ARP capabilities;
this patch makes the device use the wwan_noarp_info struct.
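The change amounts to a device-table entry binding the NCM data interface
to the no-ARP driver info; the sketch below uses placeholder vendor and
product macros rather than the real IDs:

  { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID,       /* placeholder */
          LE910_V2_PRODUCT_ID,                            /* placeholder */
          USB_CLASS_COMM, USB_CDC_SUBCLASS_NCM, USB_CDC_PROTO_NONE),
    .driver_info = (unsigned long)&wwan_noarp_info,
  },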
Signed-off-by: Daniele Palmas <dnlplm@gmail.com>
Reviewed-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Size of the attribute IFLA_PHYS_PORT_NAME was missing.
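The fix is an accounting line in the message-size calculation; an abridged
sketch, assuming the attribute carries an interface-name-sized string:

  static noinline size_t if_nlmsg_size(const struct net_device *dev,
                                       u32 ext_filter_mask)
  {
          return NLMSG_ALIGN(sizeof(struct ifinfomsg))
                 + nla_total_size(IFNAMSIZ)  /* IFLA_IFNAME */
                 /* ... other attributes elided ... */
                 + nla_total_size(IFNAMSIZ); /* IFLA_PHYS_PORT_NAME */
  }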
Fixes: db24a9044ee1 ("net: add support for phys_port_name")
CC: David Ahern <dsahern@gmail.com>
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Acked-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Otherwise the DT parsing will default to big endian if nothing is
specified.
Reported-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
This commit converts test duration from minutes to seconds early on
in order to prepare for upcoming OS-jitter-injection changes.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
The current hang-check machinery in the rcutorture scripts uses "$!" of
a parenthesized bash statement to capture the pid. Unfortunately, this
captures not qemu's pid, but rather that of its parent that implements
the parenthesized statement. This commit therefore adjusts things so as
to capture qemu's actual pid, which then allows the script to actually
kill qemu in the event of a kernel hang.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
This commit clarifies error messages -- you only get to run one torture
test at a time!
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
The hotplug notifier rcutorture_cpu_notify() doesn't consider the
corresponding CPU_XXX_FROZEN transitions. They occur on suspend/resume
and are usually handled the same way as the corresponding non-frozen
transitions.
Mask the switch-case action argument with '~CPU_TASKS_FROZEN' to map the
CPU_XXX_FROZEN hotplug transitions onto the corresponding non-frozen
transitions.
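A sketch of the masking pattern (the booster init/cleanup calls reflect
what the rcutorture notifier does, and are shown here for illustration):

  static int rcutorture_cpu_notify(struct notifier_block *self,
                                   unsigned long action, void *hcpu)
  {
          long cpu = (long)hcpu;

          switch (action & ~CPU_TASKS_FROZEN) {
          case CPU_ONLINE:
          case CPU_DOWN_FAILED:
                  (void)rcutorture_booster_init(cpu);
                  break;
          case CPU_DOWN_PREPARE:
                  rcutorture_booster_cleanup(cpu);
                  break;
          default:
                  break;
          }
          return NOTIFY_OK;
  }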
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
The current code initializes the global per-CPU variables
rcu_torture_count and rcu_torture_batch to zero. However, C does this
initialization by default, and explicit initialization of per-CPU
variables now needs a different syntax if "make tags" is to work.
This commit therefore removes the initialization.
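Before/after sketch (the exact array type is an assumption):

  /* Before: explicit zero initialization, which "make tags" trips over. */
  static DEFINE_PER_CPU(long [RCU_TORTURE_PIPE_LEN + 1], rcu_torture_count) = { 0 };

  /* After: rely on C's implicit zeroing of static per-CPU storage. */
  static DEFINE_PER_CPU(long [RCU_TORTURE_PIPE_LEN + 1], rcu_torture_count);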
Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
After finishing its tests rcuperf tries to wake up shutdown_wq even if
"shutdown" param is set to false, resulting in a wake_up() call on an
uninitialized wait_queue_head_t, which leads to "BUG: spinlock bad magic" and
"BUG: unable to handle kernel NULL pointer dereference".
Fix by checking "shutdown" param before waking up the queue.
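A sketch of the guarded wakeup (names follow the rcuperf code as described
above):

  if (shutdown) {
          smp_mb(); /* Finish all writer-side work before the wakeup. */
          wake_up(&shutdown_wq);
  }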
Signed-off-by: Artem Savkov <artem.savkov@gmail.com>
|
|
This commit adds an rcuperf scenario named TREE54 that uses 54 CPUs and
provides a four-level rcu_node combining tree.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
Running rcuperf can result in RCU CPU stall warnings and RT throttling.
These occur because one of the real-time writer processes does
ftrace_dump() while still running at real-time priority. This commit
therefore prevents these problems by setting the writer thread back to
SCHED_NORMAL (AKA SCHED_OTHER) before doing ftrace_dump().
In addition, this commit adds a small fixed delay before dumping ftrace
buffer in order to decrease the probability that this dumping will
interfere with other writers' grace periods.
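A sketch of the teardown sequence in the writer kthread (the wrapper
function name is illustrative):

  static void rcu_perf_writer_dump(void)
  {
          struct sched_param sp = { .sched_priority = 0 };

          /* Drop back to SCHED_NORMAL so the dump cannot trigger RT
           * throttling or starve other tasks.
           */
          sched_setscheduler_nocheck(current, SCHED_NORMAL, &sp);
          /* Small fixed delay to keep the dump clear of other writers'
           * grace periods.
           */
          schedule_timeout_uninterruptible(1);
          ftrace_dump(DUMP_ALL);
  }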
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
Boot-time activity can legitimately grab CPUs for extended time periods,
so this commit adds a boot parameter to delay the start of the performance
test until boot has completed. The default is 10 seconds.
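A sketch of such a parameter and how the measurement kthreads would honor
it (names follow torture-test conventions and are assumptions here):

  torture_param(int, holdoff, 10, "Holdoff time before test start (s)");

  /* In each measurement kthread, before the timed loop: */
  if (holdoff)
          schedule_timeout_uninterruptible(holdoff * HZ);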
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
The rcuperf event-trace data is more accurate than are the rcuperf
printk()s because locking keeps things ordered. This commit therefore
parses and analyzes this event-trace data if present, and falls back on
the printk()s otherwise.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
This commit enables ftrace in the rcuperf TREE kernel build and adds
an ftrace_dump() at the end of rcuperf processing. This data will be
used to measure the actual durations of the expedited grace periods
without the added delays inherent in the kernel-module measurements.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
This commit adds a line giving the number of grace periods, the number
of batches, and the ratio. The larger the ratio, the greater the
batching efficiency.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
This commit forces more deterministic update-side behavior by setting
rcuperf's rcu_perf_writer() kthreads to real-time priority.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
This commit forces more deterministic behavior by binding rcuperf's
rcu_perf_reader() and rcu_perf_writer() kthreads to their respective
CPUs.
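A sketch of the binding, assuming each kthread knows its own index "me":

  /* Pin this kthread to one CPU for repeatable measurements. */
  set_cpus_allowed_ptr(current, cpumask_of(me % nr_cpu_ids));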
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
This commit adds documentation for the new rcuperf module's kernel
boot parameters.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
This commit adds a new rcuperf module that carries out simple performance
tests of RCU grace periods.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
This commit provides rcu_exp_batches_completed() and
rcu_exp_batches_completed_sched() functions to allow torture-test modules
to check how many expedited grace period batches have completed.
These are analogous to the existing rcu_batches_completed(),
rcu_batches_completed_bh(), and rcu_batches_completed_sched() functions.
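A sketch of one of the accessors, assuming it simply reports the expedited
sequence counter:

  unsigned long rcu_exp_batches_completed(void)
  {
          return rcu_state_p->expedited_sequence;
  }
  EXPORT_SYMBOL_GPL(rcu_exp_batches_completed);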
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
Currently, rcu_torture_writer() checks only for rcu_gp_is_expedited()
when deciding whether or not to do dynamic control of RCU expediting.
This means that if rcupdate.rcu_normal is specified, rcu_torture_writer()
will attempt to dynamically control RCU expediting, but will nonetheless
only test normal RCU grace periods. This commit therefore adds a check
for !rcu_gp_is_normal(), and prints a message and desists from testing
dynamic control of RCU expediting when doing so is futile.
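A sketch of the adjusted check (message text abbreviated):

  bool can_expedite = !rcu_gp_is_expedited() && !rcu_gp_is_normal();

  if (!can_expedite)
          pr_alert("%s: GP expediting controlled from boot/sysfs for %s.\n",
                   torture_type, cur_ops->name);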
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
This commit adds the scripting changes to add support for the shiny
new rcuperf kernel module.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
If it is necessary to kick the grace-period kthread, that is a good
time to dump the trace buffer in order to learn why kicking was needed.
This commit therefore does the dump.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
Currently, we have four versions of rcu_read_lock_sched_held(), depending
on the combined choices of PREEMPT_COUNT and DEBUG_LOCK_ALLOC. However,
there is an existing function preemptible() that already distinguishes
between the PREEMPT_COUNT=y and PREEMPT_COUNT=n cases, and allows these
four implementations to be consolidated down to two.
This commit therefore uses preemptible() to achieve this consolidation.
Note that there could be a small performance regression in the case
of CONFIG_DEBUG_LOCK_ALLOC=y && PREEMPT_COUNT=n. However, given the
overhead associated with CONFIG_DEBUG_LOCK_ALLOC=y, this should be
down in the noise.
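An abridged sketch of the consolidated DEBUG_LOCK_ALLOC variant:

  int rcu_read_lock_sched_held(void)
  {
          if (!debug_lockdep_rcu_enabled())
                  return 1;
          if (!rcu_is_watching())
                  return 0;
          if (!rcu_lockdep_current_cpu_online())
                  return 0;
          /* preemptible() covers both PREEMPT_COUNT=y and =n builds. */
          return lock_is_held(&rcu_sched_lock_map) || !preemptible();
  }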
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
Recent kernels can fail to awaken the grace-period kthread for
quiescent-state forcing. This commit is a crude hack that does
a wakeup if a scheduling-clock interrupt sees that it has been
too long since force-quiescent-state (FQS) processing.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
Currently, the force-quiescent-state (FQS) code in rcu_gp_kthread() can
advance the next FQS even if one was not executed last time. This can
happen due to timeout-duration uncertainty. This commit therefore avoids
advancing the FQS schedule unless an FQS was just executed. In the
corner case where an FQS was not executed, but is due now, the code does
a one-jiffy wait.
This change prepares for kthread kicking.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
Recent kernels can fail to awaken the grace-period kthread for
quiescent-state forcing. This commit is a crude hack that does
a wakeup any time a stall is detected.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
The current expedited grace-period implementation makes subsequent grace
periods wait on wakeups for the prior grace period. This does not fit
the dictionary definition of "expedited", so this commit allows these two
phases to overlap. Doing this requires four waitqueues rather than two
because tasks can now be waiting on the previous, current, and next grace
periods. The fourth waitqueue makes the bit masking work out nicely.
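A sketch of the data structure and indexing, under the assumption that the
expedited sequence number keeps a "GP in progress" flag in its low-order
bit, and with my_exp_gp_done() as a hypothetical completion predicate:

  /* One wait queue per in-flight grace-period slot. */
  wait_queue_head_t exp_wq[4];

  /* Waiter side: pick a queue from the low-order bits of the snapshot
   * "s" of the expedited sequence number.
   */
  wait_event(exp_wq[(s >> 1) & 0x3], my_exp_gp_done(s));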
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
This commit pulls the grace-period-start counter adjustment and tracing
from synchronize_rcu_expedited() and synchronize_sched_expedited()
into exp_funnel_lock(), thus eliminating some code duplication.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
This commit moves some duplicate code from synchronize_rcu_expedited()
and synchronize_sched_expedited() into rcu_exp_gp_seq_snap(). This
doesn't save lines of code, but does eliminate a "tell me twice" issue.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
Currently, synchronize_rcu_expedited() and synchronize_sched_expedited() have
significant duplicate code. This commit therefore consolidates some of
this code into rcu_exp_wake(), which is now renamed to rcu_exp_wait_wake()
in recognition of its added responsibilities.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
This commit speeds up the low-contention case, especially for systems
with large rcu_node trees, by attempting to directly acquire the
->exp_mutex. This fastpath checks the leaves and root first in
order to avoid excessive memory contention on the mutex itself.
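An abridged sketch of the fastpath check (field names are assumptions
reflecting the description above):

  /* Leaf and root say no one has yet requested a GP covering "s", so
   * try to take the funnel's top-level mutex directly.
   */
  if (ULONG_CMP_LT(READ_ONCE(rnp->exp_seq_rq), s) &&
      (rnp == rnp_root ||
       ULONG_CMP_LT(READ_ONCE(rnp_root->exp_seq_rq), s)) &&
      mutex_trylock(&rsp->exp_mutex)) {
          /* Got ->exp_mutex directly; drive the expedited GP from here
           * without walking the funnel.
           */
          return false;
  }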
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
The current mutex-based funnel-locking approach used by expedited grace
periods is subject to severe unfairness. The problem arises when a
few tasks, making a path from leaves to root, all wake up before other
tasks do. A new task can then follow this path all the way to the root,
which needlessly delays tasks whose grace period is done, but who do
not happen to acquire the lock quickly enough.
This commit avoids this problem by maintaining per-rcu_node wait queues,
along with a per-rcu_node counter that tracks the latest grace period
sought by an earlier task to visit this node. If that grace period
would satisfy the current task, instead of proceeding up the tree,
it waits on the current rcu_node structure using a pair of wait queues
provided for that purpose. This decouples awakening of old tasks from
the arrival of new tasks.
If the wakeups prove to be a bottleneck, additional kthreads can be
brought to bear for that purpose.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
Just a name change to save a few lines and a bit of typing.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
The cpu_online() function can return values other than 0 and 1, which
can result in subscript overflow when applied to a two-element array.
This commit allows for this behavior by using "!!" on the return value
from cpu_online() when used as a subscript.
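A sketch of the normalization (the status strings are illustrative):

  static const char * const onoff[2] = { "offline", "online" };

  /* "!!" collapses any nonzero return value to 1. */
  pr_info("CPU %d is %s\n", cpu, onoff[!!cpu_online(cpu)]);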
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
Commit #cdacbe1f91264 ("rcu: Add fastpath bypassing funnel locking")
turns out to be a pessimization at high load because it forces a tree
full of tasks to wait for an expedited grace period that they probably
do not need. This commit therefore removes this optimization.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
This commit brings the synchronize_rcu_expedited() function's header
comment into line with the new implementation.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
Although cond_resched_rcu_qs() supplies quiescent states to all flavors
of normal RCU grace periods, it does nothing for expedited RCU-sched
grace periods. This commit therefore adds a check for a need for a
quiescent state from the current CPU by an expedited RCU-sched grace
period, and invokes rcu_sched_qs() to supply that quiescent state if so.
Note that the check is racy in that we might be migrated to some other
CPU just after checking the per-CPU variable. This is OK because the
act of migration will do a context switch, which will supply the needed
quiescent state. The only downside is that we might do an unnecessary
call to rcu_sched_qs(), but the probability is low and the overhead
is small.
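An abridged sketch of the added check (the per-CPU field name follows the
RCU code of that era and is an assumption here):

  /* From rcu_all_qs(): if an expedited RCU-sched grace period is waiting
   * on this CPU, report the quiescent state now.
   */
  if (unlikely(__this_cpu_read(rcu_sched_data.cpu_no_qs.b.exp)))
          rcu_sched_qs();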
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|