yield_to() is supposed to return -ESRCH if there is no task to
yield to, but because the return type is bool, that negative errno
is indistinguishable from returning true.
The only place I see which cares is kvm_vcpu_on_spin().
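A standalone illustration of the bug class (not the kernel code itself; ESRCH is 3 on Linux):

#include <stdbool.h>
#include <stdio.h>

#define ESRCH 3

/* returning a negative errno through a bool collapses it to true */
static bool yield_to_bool(void)
{
    return -ESRCH;    /* any non-zero value converts to true */
}

/* with an int return type the error stays distinguishable */
static int yield_to_int(void)
{
    return -ESRCH;
}

int main(void)
{
    if (yield_to_bool())
        printf("bool: -ESRCH is indistinguishable from success\n");
    if (yield_to_int() < 0)
        printf("int: -ESRCH is detectable as an error\n");
    return 0;
}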
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Raghavendra <raghavendra.kt@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: kvm@vger.kernel.org
Link: http://lkml.kernel.org/r/20140523102042.GA7267@mwanda
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
To be future-proof and more readable, the time comparisons are modified
to use time_after() instead of plain, error-prone arithmetic.
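A minimal sketch of the pattern, for illustration (kernel context, time_after() from <linux/jiffies.h>):

#include <linux/jiffies.h>

/* error-prone: a plain comparison breaks when jiffies wraps around */
static bool expired_plain(unsigned long deadline)
{
    return jiffies > deadline;
}

/* wraparound-safe: time_after() compares via signed subtraction */
static bool expired_safe(unsigned long deadline)
{
    return time_after(jiffies, deadline);
}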
Signed-off-by: Manuel Schölling <manuel.schoelling@gmx.de>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1400780723-24626-1-git-send-email-manuel.schoelling@gmx.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
The current nohz idle load balancer does load balancing for *all* idle
CPUs, even though the next load balance for a particular idle CPU may
still be well in the future. This introduces a much higher load
balancing rate than necessary. The patch changes the behavior to do
idle load balancing on behalf of an idle CPU only when that CPU is
actually due for balancing.
On SGI systems with over 3000 cores, the CPU responsible for idle
balancing was overwhelmed by it and introduced a lot of OS noise
into workloads. This patch fixes the issue.
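A hedged sketch of the idea (names approximate the scheduler internals; this is not the exact patch):

/* in the nohz balancer: skip idle CPUs that are not yet due */
int balance_cpu;

for_each_cpu(balance_cpu, nohz.idle_cpus_mask) {
    struct rq *rq = cpu_rq(balance_cpu);

    /* only balance on this CPU's behalf if its interval expired */
    if (time_after_eq(jiffies, rq->next_balance))
        rebalance_domains(rq, CPU_IDLE);
}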
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Acked-by: Russ Anderson <rja@sgi.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Jason Low <jason.low2@hp.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Len Brown <len.brown@intel.com>
Cc: Dimitri Sivanich <sivanich@sgi.com>
Cc: Hedi Berriche <hedi@sgi.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Michel Lespinasse <walken@google.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1400621967.2970.280.camel@schen9-DESK
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
sched_cfs_period_timer() reads cfs_b->period without locks before calling
do_sched_cfs_period_timer(), and similarly unthrottle_offline_cfs_rqs()
would read cfs_b->period without the right lock. Thus a simultaneous
change of bandwidth could cause corruption on any platform where ktime_t
or u64 writes/reads are not atomic.
Extend cfs_b->lock from do_sched_cfs_period_timer() to include the read of
cfs_b->period to solve that issue; unthrottle_offline_cfs_rqs() can just
use 1 rather than the exact quota, much like distribute_cfs_runtime()
does.
There is also an unlocked read of cfs_b->runtime_expires, but a race
there would only delay runtime expiry by a tick. Still, the comparison
should just be != anyway, which sidesteps even that problem.
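A simplified sketch of the locking change (abridged, not the full patch):

static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
{
    struct cfs_bandwidth *cfs_b =
        container_of(timer, struct cfs_bandwidth, period_timer);
    int overrun, idle = 0;

    raw_spin_lock(&cfs_b->lock);    /* now covers the period read */
    for (;;) {
        overrun = hrtimer_forward_now(timer, cfs_b->period);
        if (!overrun)
            break;
        idle = do_sched_cfs_period_timer(cfs_b, overrun);
    }
    raw_spin_unlock(&cfs_b->lock);

    return idle ? HRTIMER_NORESTART : HRTIMER_RESTART;
}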
Signed-off-by: Ben Segall <bsegall@google.com>
Tested-by: Roman Gushchin <klamm@yandex-team.ru>
[peterz: Fix compile warn]
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20140519224945.20303.93530.stgit@sword-of-the-dawn.mtv.corp.google.com
Cc: pjt@google.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
tg_set_cfs_bandwidth() sets cfs_b->timer_active to 0 to
force a period timer restart. This is not safe, because it
can lead to the deadlock described in commit 927b54fccbf0:
"__start_cfs_bandwidth calls hrtimer_cancel while holding rq->lock,
waiting for the hrtimer to finish. However, if sched_cfs_period_timer
runs for another loop iteration, the hrtimer can attempt to take
rq->lock, resulting in deadlock."
Three CPUs must be involved:

CPU0                          CPU1                          CPU2
take rq->lock                 period timer fired
...                           take cfs_b lock
...                           ...                           tg_set_cfs_bandwidth()
throttle_cfs_rq()             release cfs_b lock            take cfs_b lock
...                           distribute_cfs_runtime()      timer_active = 0
take cfs_b->lock              wait for rq->lock             ...
__start_cfs_bandwidth()
{wait for timer callback
 break if timer_active == 1}

So, CPU0 and CPU1 are deadlocked.
Instead of resetting cfs_b->timer_active, tg_set_cfs_bandwidth can
wait for period timer callbacks (ignoring cfs_b->timer_active) and
restart the timer explicitly.
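In sketch form (simplified; the actual patch threads a force flag through __start_cfs_bandwidth()):

/* tg_set_cfs_bandwidth(), abridged */
raw_spin_lock_irq(&cfs_b->lock);
cfs_b->period = ns_to_ktime(period);
cfs_b->quota = quota;
/* restart the timer unconditionally, ignoring timer_active, so we
 * never spin waiting on a callback that itself needs rq->lock */
__start_cfs_bandwidth(cfs_b, true);
raw_spin_unlock_irq(&cfs_b->lock);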
Signed-off-by: Roman Gushchin <klamm@yandex-team.ru>
Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/87wqdi9g8e.wl%klamm@yandex-team.ru
Cc: pjt@google.com
Cc: chris.j.arges@canonical.com
Cc: gregkh@linuxfoundation.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
A throttled task is still on the rq, and it may be moved to another
CPU if the user is playing with sched_setaffinity(). Therefore,
unlocked task_rq() access creates a race.
Juri Lelli reports he hit this race when dl_bandwidth_enabled()
was not set.
Another thing, pointed out by Peter Zijlstra:
"Now I suppose the problem can still actually happen when
you change the root domain and trigger a effective affinity
change that way".
To fix this we do the same as is done in __task_rq_lock(). We do not
use __task_rq_lock() itself, because it has a useful lockdep check
which is not correct in the case of dl_task_timer(): we do not need
pi_lock locked here. This case is an exception (PeterZ):
"The only reason we don't strictly need ->pi_lock now is because
we're guaranteed to have p->state == TASK_RUNNING here and are
thus free of ttwu races".
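The resulting locking loop, sketched (it mirrors __task_rq_lock() minus the pi_lock lockdep assertion; p is the deadline task):

struct rq *rq;

for (;;) {
    rq = task_rq(p);
    raw_spin_lock(&rq->lock);
    if (rq == task_rq(p))
        break;                      /* p did not migrate; rq is stable */
    raw_spin_unlock(&rq->lock);     /* raced with migration; retry */
}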
Signed-off-by: Kirill Tkhai <tkhai@yandex.ru>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: <stable@vger.kernel.org> # v3.14+
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/3056991400578422@web14g.yandex.ru
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
The rx_errors (GLV_REPC) and rx_missed (GLV_RMPC) registers were
removed from the chip design.
Change-ID: Ifdeb69c90feac64ec95c36d3d32c75e3a06de3b7
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
Pull the PF stat collection out of the VSI collection routine, and
add a unifying stats update routine to call the various stat collection
routines.
Change-ID: I224192455bb3a6e5dc0a426935e67dffc123e306
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
This change moves some common code in two places into a
small helper function, and corrects a bug in one of the
two places in the process.
Change-ID: If3bba7152b240f13a7881eb0e8a781655fa66ce7
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
Add a structure for the VEB context. Update the macro that converts
from ms to GTIME (us) time units.
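For reference, the conversion is a straight ms-to-us scaling (a sketch; the macro name is an assumption):

/* GTIME counts in microseconds, so milliseconds scale by 1000 */
#define I40E_MS_TO_GTIME(time)    ((time) * 1000)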
Change-ID: Ib3a19587b4cf355348655df8f60c6f37bb1497a3
Signed-off-by: Kamil Krawczyk <kamil.krawczyk@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
Currently, the PF driver only notifies the VFs for PF reset events.
Instead, notify the VFs for all types of resets, so they can attempt a
graceful reinit.
Change-ID: I03eb7afde25727198ef620f8b4e78bb667a11370
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
i40evf_set_ringparam was broken in several ways. First, it only changed
the size of the first ring. Second, changing the ring size would
often result in a panic, because the count was changed before
resources were deallocated, causing the driver to either free
nonexistent buffers or leak leftover ones.
Fix this by storing the descriptor count in the adapter structure, and
updating the count for each ring each time we allocate them. This
ensures that we always free the right size ring, and always end up with
the requested count when the device is (re)opened.
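A hedged sketch of the approach (field and helper names are approximations):

static int sketch_set_ringparam(struct i40evf_adapter *adapter,
                                u32 new_tx_count, u32 new_rx_count)
{
    /* remember the requested sizes in the adapter, not in the rings */
    adapter->tx_desc_count = new_tx_count;
    adapter->rx_desc_count = new_rx_count;

    /* rings keep their old count until free/alloc time, so teardown
     * frees exactly what was allocated, and every ring picks up the
     * new count on the next (re)open */
    if (netif_running(adapter->netdev))
        i40evf_reinit_locked(adapter);

    return 0;
}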
Change-ID: I298396cd3d452ba8509d9f2d33a93f25868a9a55
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
attr.sched_policy is a u32, so a comparison against < 0 is never true.
Fix this by casting sched_policy to int.
This issue was reported by coverity CID 1219934.
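A standalone demonstration of the bug class:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t sched_policy = (uint32_t)-1;    /* userspace passed -1 */

    if (sched_policy < 0)        /* unsigned: never true */
        printf("unreachable\n");

    if ((int)sched_policy < 0)   /* the fix: cast before comparing */
        printf("negative policy rejected\n");

    return 0;
}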
Fixes: dbdb22754fde ("sched: Disallow sched_attr::sched_policy < 0")
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1401741514-7045-1-git-send-email-richard@nod.at
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
As Peter Zijlstra told me, we have the following path:
do_exit()
  exit_itimers()
    itimer_delete()
      spin_lock_irqsave(&timer->it_lock, &flags);
      timer_delete_hook(timer);
        kc->timer_del(timer) := posix_cpu_timer_del()
          put_task_struct()
            __put_task_struct()
              task_numa_free()
                spin_lock(&grp->lock);
This means that task_numa_free() can be called with interrupts
disabled, so we should be using spin_lock_irqsave() rather than
spin_lock_irq(). Otherwise we end up enabling interrupts while
holding an interrupt-unsafe lock!
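The fix, in sketch form:

static void task_numa_free_sketch(struct numa_group *grp)
{
    unsigned long flags;

    /* spin_lock_irq() would unconditionally re-enable IRQs on unlock;
     * the irqsave/irqrestore pair preserves the caller's IRQ state */
    spin_lock_irqsave(&grp->lock, flags);
    /* ... detach the task from the numa group ... */
    spin_unlock_irqrestore(&grp->lock, flags);
}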
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20140527182541.GH11096@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Hardware requires descriptors to be allocated in groups of 32.
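Illustratively, rounding a requested count up to the required multiple (macro name hypothetical):

/* round n up to the next multiple of 32, as the hardware requires */
#define ALIGN_TO_32(n)    (((n) + 31) & ~31u)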
Change-ID: I752ccc96769d1bd8d3018c004b8aeff464045bf2
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
If the GPIO for the backlight is on an I2C chip, we currently
get nasty warnings like this during boot:
WARNING: CPU: 0 PID: 6 at drivers/gpio/gpiolib.c:2364 gpiod_set_raw_value+0x40/0x4c()
Modules linked in:
CPU: 0 PID: 6 Comm: kworker/u2:0 Not tainted 3.15.0-rc4-12393-gcde9f4e #400
Workqueue: deferwq deferred_probe_work_func
[<c0014cbc>] (unwind_backtrace) from [<c001191c>] (show_stack+0x10/0x14)
[<c001191c>] (show_stack) from [<c0566ae0>] (dump_stack+0x80/0x9c)
[<c0566ae0>] (dump_stack) from [<c003f61c>] (warn_slowpath_common+0x68/0x8c)
[<c003f61c>] (warn_slowpath_common) from [<c003f65c>] (warn_slowpath_null+0x1c/0x24)
[<c003f65c>] (warn_slowpath_null) from [<c02f7e10>] (gpiod_set_raw_value+0x40/0x4c)
[<c02f7e10>] (gpiod_set_raw_value) from [<c0308fbc>] (gpio_backlight_update_status+0x4c/0x74)
[<c0308fbc>] (gpio_backlight_update_status) from [<c030914c>] (gpio_backlight_probe+0x168/0x254)
[<c030914c>] (gpio_backlight_probe) from [<c0378fa8>] (platform_drv_probe+0x18/0x48)
[<c0378fa8>] (platform_drv_probe) from [<c0377c40>] (driver_probe_device+0x10c/0x238)
[<c0377c40>] (driver_probe_device) from [<c0376330>] (bus_for_each_drv+0x44/0x8c)
[<c0376330>] (bus_for_each_drv) from [<c0377afc>] (device_attach+0x74/0x8c)
[<c0377afc>] (device_attach) from [<c03771c4>] (bus_probe_device+0x88/0xb0)
[<c03771c4>] (bus_probe_device) from [<c03775c8>] (deferred_probe_work_func+0x64/0x94)
[<c03775c8>] (deferred_probe_work_func) from [<c00572e8>] (process_one_work+0x1b4/0x4bc)
[<c00572e8>] (process_one_work) from [<c00579d0>] (worker_thread+0x11c/0x398)
[<c00579d0>] (worker_thread) from [<c005dfd8>] (kthread+0xc8/0xe4)
[<c005dfd8>] (kthread) from [<c000e768>] (ret_from_fork+0x14/0x2c)
Fix this by using gpio_set_value_cansleep() as suggested in
drivers/gpio/gpiolib.c:2364. This is what the other backlight drivers
are also doing.
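The change, in essence (the gbl->gpio field name is approximate):

gpio_set_value(gbl->gpio, brightness);          /* WARNs on sleeping GPIO chips */
gpio_set_value_cansleep(gbl->gpio, brightness); /* safe from process context */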
Signed-off-by: Tony Lindgren <tony@atomide.com>
Acked-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
|
|
The driver was allowing the user to set a larger MTU than the
hardware was configured to support.
The driver already uses VLAN_HLEN when setting the hardware's
maximum receivable frame size, so just add it to the
netdev MTU set entry point as well.
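A sketch of the adjusted check (constant names approximate):

static int sketch_change_mtu(struct net_device *netdev, int new_mtu)
{
    /* account for the VLAN tag, as the hardware frame-size setup does */
    int max_frame = new_mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;

    if (new_mtu < 68 || max_frame > I40E_MAX_RXBUFFER)
        return -EINVAL;

    netdev->mtu = new_mtu;
    return 0;
}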
Change-ID: Ie20e2a35d04f8c411253e255bea79ca69aaeaea3
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
Remove unused defines and macros for RX_LRO.
Change-ID: I8ca6715edfa62b56837417a1c4ff68c2345dab6e
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Jim Young <jamesx.m.young@intel.com>
Tested-by: Sibai Li <sibai.li@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf into perf/core
Pull perf/core improvements and fixes from Jiri Olsa:
* Warn the user when trace command is not available (Arnaldo Carvalho de Melo)
* Add warning when disabling perl scripting support due to missing devel files (Arnaldo Carvalho de Melo)
* Consider header files outside perf directory in tags target (Sebastian Andrzej Siewior)
* Allow overriding sysfs and proc finding with env var (Cody P Schafer)
* Fix "==" into "=" in ui_browser__warning assignment (zhangdianfang)
* Factor elide bool handling in sort code (Jiri Olsa)
* Fix poll return value propagation (Jiri Olsa)
* Fix 'make help' message error (Jianyu Zhan)
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
WARNING: line over 80 characters
#205: FILE: kernel/locking/rwsem-xadd.c:275:
+ old = cmpxchg(&sem->count, count, count + RWSEM_ACTIVE_WRITE_BIAS);
WARNING: line over 80 characters
#376: FILE: kernel/locking/rwsem-xadd.c:434:
+ * If there were already threads queued before us and there are no
WARNING: line over 80 characters
#377: FILE: kernel/locking/rwsem-xadd.c:435:
+ * active writers, the lock must be read owned; so we try to wake
total: 0 errors, 3 warnings, 417 lines checked
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Davidlohr Bueso <davidlohr@hp.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/n/tip-pn6pslaplw031lykweojsn8c@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Optimistic spinning is only used by the xadd variant
of rw-semaphores. Make sure that we use the old version
of the __RWSEM_INITIALIZER macro for systems that rely
on the spinlock one, otherwise warnings can be triggered,
such as the following reported on an arm box:
ipc/ipcns_notifier.c:22:8: warning: excess elements in struct initializer [enabled by default]
ipc/ipcns_notifier.c:22:8: warning: (near initialization for 'ipcns_chain.rwsem') [enabled by default]
ipc/ipcns_notifier.c:22:8: warning: excess elements in struct initializer [enabled by default]
ipc/ipcns_notifier.c:22:8: warning: (near initialization for 'ipcns_chain.rwsem') [enabled by default]
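A hedged sketch of the guard (the inner macro names are stand-ins, not the kernel's):

/* only the xadd rwsem carries the optimistic-spinning fields, so the
 * initializer must match the implementation that is compiled in */
#ifdef CONFIG_RWSEM_XCHGADD_ALGORITHM
# define __RWSEM_INITIALIZER(name)    __RWSEM_INITIALIZER_XADD(name)
#else
# define __RWSEM_INITIALIZER(name)    __RWSEM_INITIALIZER_SPINLOCK(name)
#endif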
Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Alex Shi <alex.shi@linaro.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Jason Low <jason.low2@hp.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Chris Mason <clm@fb.com>
Cc: Josef Bacik <jbacik@fusionio.com>
Link: http://lkml.kernel.org/r/1400545677.6399.10.camel@buesod1.americas.hpqcorp.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
We have reached the point where our mutexes are quite fine-tuned
for a number of situations. This includes the use of heuristics
and optimistic spinning, based on MCS locking techniques.
Exclusive ownership of a read-write semaphore is, conceptually,
just about the same as a mutex, making them close cousins. To
this end we need to make them both perform similarly, and
right now rwsems are simply not up to it. This was discovered
by both reverting commit 4fc3f1d6 (mm/rmap, migration: Make
rmap_walk_anon() and try_to_unmap_anon() more scalable) and,
similarly, converting some other mutexes (ie: i_mmap_mutex) to
rwsems. This creates a situation where users have to choose
between a rwsem and a mutex taking into account this important
performance difference. Specifically, the biggest difference
between the two locks is that when we fail to acquire a mutex
in the fastpath, optimistic spinning comes into play and we
can avoid a large amount of unnecessary sleeping and the overhead
of moving tasks in and out of the wait queue. Rwsems have no such logic.
This patch, based on the work from Tim Chen and myself, adds support
for write-side optimistic spinning when the lock is contended.
It also includes support for the recently added cancelable MCS
locking for adaptive spinning. Note that this is only applicable
to the xadd method; the spinlock rwsem variant remains intact.
Allowing optimistic spinning before putting the writer on the wait
queue reduces wait queue contention and provides a greater chance
for the rwsem to get acquired. With these changes, rwsem is on par
with mutex. The performance benefits can be seen on a number of
workloads. For instance, on an 8-socket, 80-core, 64-bit Westmere box,
aim7 shows the following improvements in throughput:
+--------------+---------------------+-----------------+
| Workload | throughput-increase | number of users |
+--------------+---------------------+-----------------+
| alltests | 20% | >1000 |
| custom | 27%, 60% | 10-100, >1000 |
| high_systime | 36%, 30% | >100, >1000 |
| shared | 58%, 29% | 10-100, >1000 |
+--------------+---------------------+-----------------+
There was also improvement on smaller systems, such as a quad-core
x86-64 laptop running a 30Gb PostgreSQL (pgbench) workload, of up
to +60% in throughput for over 50 clients. Additionally, benefits
were also noticed in exim (mail server) workloads. Furthermore, no
performance regressions have been seen at all.
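The core of the spinning logic, as a hedged sketch (owner_running() is an illustrative stand-in):

static bool rwsem_optimistic_spin_sketch(struct rw_semaphore *sem)
{
    struct task_struct *owner;

    while (true) {
        owner = ACCESS_ONCE(sem->owner);
        if (owner && !owner_running(owner))
            return false;    /* owner went to sleep: stop spinning */
        if (rwsem_try_write_lock_unqueued(sem))
            return true;     /* acquired without touching the wait queue */
        if (need_resched())
            return false;    /* don't spin against the scheduler */
        cpu_relax();
    }
}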
Based-on-work-from: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
[peterz: rej fixup due to comment patches, sched/rt.h header]
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Alex Shi <alex.shi@linaro.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Michel Lespinasse <walken@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: "Paul E.McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Jason Low <jason.low2@hp.com>
Cc: Aswin Chandramouleeswaran <aswin@hp.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: "Scott J Norton" <scott.norton@hp.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Chris Mason <clm@fb.com>
Cc: Josef Bacik <jbacik@fusionio.com>
Link: http://lkml.kernel.org/r/1399055055.6275.15.camel@buesod1.americas.hpqcorp.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
We introduced this check in case this structure changed in the future;
the AQ definition is now mature enough that the check is no longer necessary.
Change-ID: Ic66321d0a08557dc9d8cb84029185352cb534330
Signed-off-by: Kamil Krawczyk <kamil.krawczyk@intel.com>
Tested-by: Jim Young <jamesx.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
The register range subject to the register diagnostic can vary among
different NVMs. We will try to identify the full range and use it for
the register test. This is needed to avoid false test results. If we
fail to determine the proper register range, we will test only the
first register from that group.
Change-ID: Ieee7173c719733b61d3733177a94dc557eb7b3fd
Signed-off-by: Kamil Krawczyk <kamil.krawczyk@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
A couple of FD checks ended up using bitwise OR to check
a value, which always evaluates to true.
This should fix the issue. Thanks to DaveJ for noticing
and reporting the issue!
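A standalone demonstration of the bug class (the flag name is hypothetical):

#include <stdio.h>

#define FLAG_FD_SB 0x4    /* hypothetical feature bit */

int main(void)
{
    unsigned int flags = 0;    /* feature disabled */

    if (flags | FLAG_FD_SB)    /* bug: OR with a non-zero mask is always true */
        printf("buggy check fires even when the flag is clear\n");

    if (flags & FLAG_FD_SB)    /* fix: AND actually tests the bit */
        printf("never printed while the flag is clear\n");

    return 0;
}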
CC: Dave Jones <davej@redhat.com>
Reported-by: Dave Jones <davej@redhat.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf into perf/urgent
Pull perf/urgent fixes from Jiri Olsa:
* Fix perf probe to find correct variable DIE (Masami Hiramatsu)
* Fix a segfault in perf probe if asked for a variable it doesn't find (Masami Hiramatsu)
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
It is available since v3.15-rc5.
Cc: Pablo Neira Ayuso <pablo@netfilter.org>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
A chunk-aligned read increments the counter active_aligned_reads and
decrements it after the sub-device handles it successfully. But when
a read error occurs, the read is redispatched by raid5d, and
active_aligned_reads will not be decremented until we can grab a
stripe head in retry_aligned_read. Now suppose a barrier io comes,
sets conf->quiesce to 2, and waits until both active_stripes and
active_aligned_reads are zero. The retried chunk-aligned read gets
stuck at get_active_stripe, waiting until conf->quiesce becomes 0.
retry_aligned_read and the barrier io are now waiting on each other.
One possible solution is to ignore conf->quiesce and let the retried
aligned read finish. I reproduced this deadlock and tested this patch
on CentOS 6.0.
Signed-off-by: NeilBrown <neilb@suse.de>
|
|
According to RFC1035 "[...] the total length of a domain name (i.e.,
label octets and label length octets) is restricted to 255 octets or
less."
Signed-off-by: Manuel Schölling <manuel.schoelling@gmx.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Daniel requested in the bug that I use a 3GB fallback size. Since that
is not in the spec as a valid size, I decided against it. We could
potentially add a patch on top of this one to bump it to 3GB.
This probably should be Cc: stable, but I'll let the powers that be
decide that one.
Regression from a revert of the revert:
commit 7907f45bf9f67a1c5e5d4ae05bab428d7c2f43b2
Author: Ben Widawsky <benjamin.widawsky@intel.com>
Date: Wed Feb 19 22:05:46 2014 -0800
Revert "drm/i915/bdw: Limit GTT to 2GB"
v2: Change ifdef to 32b, instead of ifndef
update comment
v3. Update comment to not wrap (Daniel).
Update commit message
v4: s/CONFIG_32/CONFIG_X86_32 (Jani).
v5: s/CONFIG_x86_32BIT/CONFIG_x86_32, as meant in v4
s/32B/32b (chris)
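The resulting clamp looks roughly like this (per the CONFIG_X86_32 notes above; treat the details as approximate):

#ifdef CONFIG_X86_32
    /* limit 32b platforms to a 2GB GGTT: clamping the PTE table to
     * 4MB yields 4MB / 8B per PTE * 4KB pages = 2GB of aperture */
    if (bdw_gmch_ctl > 4)
        bdw_gmch_ctl = 4;
#endif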
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=76619
Cc: stable@vger.kernel.org
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Tested-by: "Yang, Guang A" <guang.a.yang@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
Apparently it does more harm than good. Thomas Richter reports that
it helps his machine (Thinkpad X31) and there's another report from a
Fujitsu S6010. Also, we've nuked it on i845G already to make Chris'
machine happy.
Cc: Thomas Richter <richter@rus.uni-stuttgart.de>
References: http://mid.mail-archive.com/538C54E0.8090507@rus.uni-stuttgart.de
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
Atm, we refcount both power domains and power wells, and
intel_display_power_enabled_sw() returns the power domain refcount. What
the callers are really interested in, though, is the sw state of the
underlying power wells. Due to this we can incorrectly report that a
given power domain is off even though its power wells were enabled via
another power domain, for example POWER_DOMAIN_INIT, which enables all
power wells.
As a fix, return the state based on the refcount of all power
wells included in the passed-in power domain.
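In sketch form (using i915's for_each_power_well() iterator, simplified):

static bool sketch_power_enabled_sw(struct i915_power_domains *power_domains,
                                    enum intel_display_power_domain domain)
{
    struct i915_power_well *power_well;
    int i;

    /* the domain is only "on" if every backing well is refcounted on */
    for_each_power_well(i, power_well, BIT(domain), power_domains)
        if (!power_well->count)
            return false;

    return true;
}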
References: https://bugs.freedesktop.org/show_bug.cgi?id=79505
References: https://bugs.freedesktop.org/show_bug.cgi?id=79038
Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
It is possible for userspace to create a big object large enough for a
256x256 cursor, and then switch over to using it as a 64x64 cursor. This
requires the cursor update routines to check for a change in width on
every update, rather than just when the cursor is originally enabled.
This also fixes an issue with 845g/865g which cannot change the base
address of the cursor whilst it is active.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[Antti:rebased, adjusted macro names and moved some lines, no functional
changes]
Reviewed-by: Antti Koskipaa <antti.koskipaa@linux.intel.com>
Tested-by: Antti Koskipaa <antti.koskipaa@linux.intel.com>
Cc: stable@vger.kernel.org
Testcase: igt/kms_cursor_crc/cursor-size-change
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
If KMS is disabled (by the i915.modeset=0 or nomodeset parameters) and
UMS is also disabled (by CONFIG_DRM_I915_UMS=n, the default), the user
might not be aware that his setup is not supported. Inform the users (and, by
extension, the poor i915 developers having to read their dmesgs in bug
reports) why their graphics experience might be lacking.
A similar message was added on the UMS path in
commit e147accbd19f55489dabdcc4dc3551cc3e3f2553
Author: Jani Nikula <jani.nikula@intel.com>
Date: Thu Oct 10 15:25:37 2013 +0300
drm/i915: tell the user KMS is required for gen6+
but it won't be reached if CONFIG_DRM_I915_UMS=n since
commit b30324adaf8d2e5950a602bde63030d15a61826f
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Wed Nov 13 22:11:25 2013 +0100
drm/i915: Deprecated UMS support
v2: Use DRM_DEBUG_DRIVER.
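The added hint amounts to something like this (exact wording assumed):

#if !defined(CONFIG_DRM_I915_UMS)
    if (!(driver.driver_features & DRIVER_MODESET)) {
        /* let dmesg readers know why i915 refused to load */
        DRM_DEBUG_DRIVER("KMS and UMS disabled.\n");
        return 0;
    }
#endif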
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
Pull the parameter checking from drm_primary_helper_update() out into
its own function; drivers that provide their own setplane()
implementations rather than using the helper may still want to share
this parameter checking logic.
A few of the checks here were also updated based on suggestions by
Ville Syrjälä.
v3:
- s/primary_helper/plane_helper/ --- this checking logic may be useful
for other types of planes as well.
- Fix visibility check (need to dereference visibility pointer)
v2:
- Pass src/dest/clip rects and min/max scaling down to helper to avoid
duplication of effort between helper and drivers (suggested by
Ville).
- Allow caller to specify whether the primary plane should be
updatable while the crtc is disabled.
Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Chon Ming Lee <chon.ming.lee@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Acked-by: Dave Airlie <airlied@gmail.com>
[danvet: Include header properly and fixup declaration mismatch to
make this compile.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
The DRM core setplane code should check that the plane is usable on the
specified CRTC before calling into the driver.
Prior to this patch, a plane's possible_crtcs field was purely
informational for userspace and was never actually verified at the
kernel level (aside from the primary plane helper).
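The check itself is small; sketched against the drm core of that era:

/* reject planes that cannot be placed on the requested CRTC */
if (!(plane->possible_crtcs & (1 << drm_crtc_index(crtc)))) {
    DRM_DEBUG_KMS("Invalid crtc for plane\n");
    return -EINVAL;
}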
Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Chon Ming Lee <chon.ming.lee@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Acked-by: Dave Airlie <airlied@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
Some platforms may not have it, and enumerating it is both confusing and
time consuming due to the hotplug and DDC probing.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
Gen2 doesn't have the ring idle/stop bits in the SCPD/MI_MODE register,
so don't go spewing warnings about the state of those bits.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
If the user tries to mmap through the GTT an object that is marked as
snooped, we report an error rather than allow the GPU to hang the
machine. The choice of EINVAL, however, was unfortunate as we turn that
into a WARN rather than a quiet SIGBUS.
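The change is essentially a one-liner in the GTT fault handler (sketched):

/* access through the GTT is incoherent for snooped objects */
if (obj->cache_level != I915_CACHE_NONE && !HAS_LLC(dev)) {
    ret = -EFAULT;    /* was -EINVAL, which the fault path WARNs on */
    goto unlock;
}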
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
Move the MI_ARB_STATE MI_ARB_C3_LP_WRITE_ENABLE setup to
gen3_init_clock_gating() from i915_gem_load() when KMS is enabled. Leave
it in i915_gem_load() for the UMS case, but add an explicit check, just
to make it easier to spot when we eventually rip out UMS support.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
85x also has an AGPBUSY# bit similar to gen3. Enable it to make
sure vblank interrupts don't get delayed during the C3 state.
There's also another bit which controls whether AGPBUSY# is asserted
based on pending cacheable cycles and interrupts, or just based on
pending commands in the ring and interrupts. Select the cacheable
cycles mode since that seems to be the new way of doing things in
85x, and it does give slightly better C3 residency numbers with
glxgears running.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
My Gen3 Bspec lists the AGPBUSY# bit in INSTPM as an enable bit rather
than a disable bit. Our code has the opposite idea. Make the code match
the spec.
Might fix some gen3 C3 related interrupt delivery problems. Untested
due to lack of hardware.
v2: call it AGPBUSY_INT_EN to make it clearer it has to do with interrupts
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
I don't see why we wouldn't want interrupts to wake up the CPU from C3
always, so just set the AGPBUSY# bit in gen3_init_clock_gating().
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
When doing this, all PLLs should be disabled.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
We need to do this anytime we power gate the DPIO common well.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
New codec support for ALC891.
Signed-off-by: Kailang Yang <kailang@realtek.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
There may be a dependency here.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
This needs to be done before we power back on the CMN_BC well so the PHY
can calibrate properly.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
We now do this at runtime and later in the init sequence.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|