|
Given libbpf is a generic library and not restricted to x86-64 only,
the compiler barrier in bpf_perf_event_read_simple() after fetching
the head needs to be replaced with smp_rmb() at minimum. Also, when
writing out the tail, we should use WRITE_ONCE() to avoid store tearing.
Now that we have the logic in place in the ring_buffer_read_head() and
ring_buffer_write_tail() helpers, which are also used by the perf tool
and select the correct and best variant for a given architecture (e.g.
x86-64 can avoid CPU barriers entirely), make use of them in order
to fix bpf_perf_event_read_simple().
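As a rough sketch of how the fixed read loop can use these helpers (the
function and variable names below are illustrative, not the literal patch):
/* Sketch only; the real bpf_perf_event_read_simple() has more logic. */
static void consume_ring(struct perf_event_mmap_page *header,
			 void *base, u64 mmap_size)
{
	u64 head = ring_buffer_read_head(header);	/* acquire semantics */
	u64 tail = header->data_tail;

	while (tail != head) {
		struct perf_event_header *ehdr;

		ehdr = base + (tail & (mmap_size - 1));
		/* ... hand ehdr to the event callback ... */
		tail += ehdr->size;
	}

	ring_buffer_write_tail(header, tail);		/* release semantics */
}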
Fixes: d0cabbb021be ("tools: bpf: move the event reading loop to libbpf")
Fixes: 39111695b1b8 ("samples: bpf: add bpf_perf_event_output example")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Currently, on x86-64, perf uses LFENCE and MFENCE (rmb() and mb(),
respectively) when processing events from the perf ring buffer, which
is unnecessarily expensive, as we can do something more lightweight, in
particular given that this is a critical fast path in perf.
According to Peter, rmb()/mb() were added back then via a94d342b9cb0
("tools/perf: Add required memory barriers") at a time when the kernel
still supported chips that needed them, but nowadays support for these
has been dropped completely, therefore we can fix them up as well.
While for x86-64, replacing rmb() and mb() with smp_*() variants would
result in just a compiler barrier for the former and LOCK + ADD for
the latter (__sync_synchronize() uses the slower MFENCE, by the way),
Peter suggested we can use smp_{load_acquire,store_release}() instead
for architectures where their implementation doesn't resolve into a
slower smp_mb(). Thus, e.g. on x86-64 we are able to avoid CPU barriers
entirely due to TSO. For architectures where the latter needs to use
smp_mb(), e.g. on arm, we stick to the cheaper smp_rmb() variant for
fetching the head.
This work adds the helpers ring_buffer_read_head() and
ring_buffer_write_tail() to the tools infrastructure. To fetch data_head
from the perf control page, they either switch to smp_load_acquire() on
architectures where it is cheaper, or use a READ_ONCE() + smp_rmb()
barrier on those where it is not; data_tail is written with
smp_store_release(). The latter is an smp_mb() + WRITE_ONCE()
combination, or a cheaper variant if the architecture allows for it.
Architectures that rely on smp_rmb() and smp_mb() can further improve
performance in a follow-up step by implementing the two under
tools/arch/*/include/asm/barrier.h such that they don't have to fall
back to rmb() and mb() in tools/include/asm/barrier.h.
Switch perf to use ring_buffer_read_head() and ring_buffer_write_tail()
so it can make use of the optimizations. Later, we convert libbpf as
well to use the same helpers.
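For illustration, a condensed sketch of what the two helpers boil down to
(the real per-architecture selection lives in tools/include/linux/ring_buffer.h;
the x86-64 guard below stands in for "architectures with a cheap load-acquire"):
static inline u64 ring_buffer_read_head(struct perf_event_mmap_page *base)
{
#if defined(__x86_64__)			/* cheap smp_load_acquire() */
	return smp_load_acquire(&base->data_head);
#else					/* e.g. arm: cheaper smp_rmb() */
	u64 head = READ_ONCE(base->data_head);

	smp_rmb();
	return head;
#endif
}

static inline void ring_buffer_write_tail(struct perf_event_mmap_page *base,
					  u64 tail)
{
	/* smp_mb() + WRITE_ONCE(), or a cheaper variant if available */
	smp_store_release(&base->data_tail, tail);
}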
Side note [0]: the topic has been raised of whether one could simply use
the C11 gcc builtins [1] for the smp_load_acquire() and smp_store_release()
instead:
__atomic_load_n(ptr, __ATOMIC_ACQUIRE);
__atomic_store_n(ptr, val, __ATOMIC_RELEASE);
The kernel and (presumably) tooling shipped along with the kernel have a
minimum requirement of being able to build with gcc-4.6, and the latter
does not have C11 builtins. While generally the C11 memory model doesn't
align with the kernel's, the C11 load-acquire and store-release alone
/could/ suffice, however. The issue is that it is implementation-dependent
how the load-acquire and store-release are done by the compiler, and the
mapping of supported compilers must align to be compatible with the
kernel's implementation; thus it needs to be verified/tracked on a
case-by-case basis whether they match (unless an architecture also uses
them from the kernel side). The implementations for smp_load_acquire()
and smp_store_release() in this patch have been adapted from the
kernel-side ones to have a concrete and compatible mapping in place.
[0] http://patchwork.ozlabs.org/patch/985422/
[1] https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Fixes: 371e4fcc9d96 ("selftests/bpf: cgroup local storage-based network counters")
Fixes: 370920c47b26 ("selftests/bpf: Test libbpf_{prog,attach}_type_by_name")
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Mauricio Vasquez says:
====================
In some applications it is necessary to have a pool of free elements,
for example the list of free L4 ports in a SNAT. None of the current
maps allow this, as it is not possible to get an element without having
the key it is associated with; and even if it were possible, the lack of
locking mechanisms in eBPF would make it almost impossible to implement
without data races.
This patchset implements two new kinds of eBPF maps: queue and stack.
Those maps provide eBPF programs with the peek, push and pop operations,
and userspace applications with a new bpf_map_lookup_and_delete_elem()
syscall.
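As an illustrative sketch of that motivating use case (the map definition
style, section names and program skeleton below are assumptions for this
sketch, not part of the series):
/* A SNAT-style pool of free L4 ports kept in a queue map. */
struct bpf_map_def SEC("maps") free_ports = {
	.type        = BPF_MAP_TYPE_QUEUE,
	.key_size    = 0,		/* queue/stack maps have no key */
	.value_size  = sizeof(__u16),
	.max_entries = 1024,
};

SEC("classifier")
int snat(struct __sk_buff *skb)
{
	__u16 port;

	/* Pop a free port; fails when the pool is empty. */
	if (bpf_map_pop_elem(&free_ports, &port))
		return TC_ACT_SHOT;

	/* ... translate using 'port'; return it on flow teardown via
	 * bpf_map_push_elem(&free_ports, &port, BPF_ANY) ... */
	return TC_ACT_OK;
}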
Signed-off-by: Mauricio Vasquez B <mauricio.vasquez@polito.it>
v2 -> v3:
- Remove "almost dead code" in syscall.c
- Remove unnecessary copy_from_user in bpf_map_lookup_and_delete_elem
- Rebase
v1 -> v2:
- Put ARG_PTR_TO_UNINIT_MAP_VALUE logic into a separated patch
- Fix missing __this_cpu_dec & preempt_enable calls in kernel/bpf/syscall.c
RFC v4 -> v1:
- Remove roundup to power of 2 in memory allocation
- Remove count and use a free slot to check if queue/stack is empty
- Use if + assignment for wrapping indexes
- Fix some minor style issues
- Squash two patches together
RFC v3 -> RFC v4:
- Revert renaming of kernel/bpf/stackmap.c
- Remove restriction on value size
- Remove len arguments from peek/pop helpers
- Add new ARG_PTR_TO_UNINIT_MAP_VALUE
RFC v2 -> RFC v3:
- Return elements by value instead that by reference
- Implement queue/stack base on array and head + tail indexes
- Rename stack trace related files to avoid confusion and conflicts
RFC v1 -> RFC v2:
- Create two separate maps instead of single one + flags
- Implement bpf_map_lookup_and_delete syscall
- Support peek operation
- Define replacement policy through flags in the update() method
- Add eBPF side tests
====================
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
test_maps:
Tests that queue/stack maps behave correctly even in corner cases.
test_progs:
Tests the new eBPF helpers.
Signed-off-by: Mauricio Vasquez B <mauricio.vasquez@polito.it>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Sync both files.
Signed-off-by: Mauricio Vasquez B <mauricio.vasquez@polito.it>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
The previous patch implemented bpf queue/stack maps that provide the
peek/pop/push functions. There is no direct relationship between those
functions and the existing map syscalls, hence a new
BPF_MAP_LOOKUP_AND_DELETE_ELEM syscall is added. It is mapped to the pop
operation in the queue/stack maps and is still to be implemented for
other kinds of maps.
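A minimal userspace sketch, assuming the libbpf wrapper name introduced
along with this series:
/* Pop the front of a queue map from userspace. */
__u32 value;

/* key must be NULL: queue/stack maps have key_size == 0 */
if (bpf_map_lookup_and_delete_elem(map_fd, NULL, &value) == 0)
	printf("popped %u\n", value);
else if (errno == ENOENT)
	printf("queue is empty\n");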
Signed-off-by: Mauricio Vasquez B <mauricio.vasquez@polito.it>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Queue/stack maps implement a FIFO/LIFO data storage for eBPF programs.
These maps support peek, pop and push operations that are exposed to eBPF
programs through the new bpf_map_{peek,pop,push}_elem() helpers. Those
operations are exposed to userspace applications through the already
existing syscalls in the following way:
BPF_MAP_LOOKUP_ELEM -> peek
BPF_MAP_LOOKUP_AND_DELETE_ELEM -> pop
BPF_MAP_UPDATE_ELEM -> push
Queue/stack maps are implemented using a buffer plus head and tail
indexes, hence BPF_F_NO_PREALLOC is not supported.
In contrast to other maps, queue and stack maps do not use RCU for
protecting map values; the bpf_map_{peek,pop}_elem() helpers take an
ARG_PTR_TO_UNINIT_MAP_VALUE argument, a pointer to a memory zone where
the map value is saved. It is basically the same as
ARG_PTR_TO_UNINIT_MEM, except that the size does not have to be passed
as an extra argument.
Our main motivation for implementing queue/stack maps was to keep track
of a pool of elements, like network ports in a SNAT; however, we foresee
other use cases, for example saving the last N kernel events in a map
and then analysing them from userspace.
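For illustration, the syscall-to-operation mapping above could be driven
from userspace roughly like this (libbpf wrapper names; error handling
omitted, so treat it as a sketch):
__u32 val = 42;

bpf_map_update_elem(fd, NULL, &val, BPF_ANY);		/* push */
bpf_map_lookup_elem(fd, NULL, &val);			/* peek */
bpf_map_lookup_and_delete_elem(fd, NULL, &val);		/* pop  */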
Signed-off-by: Mauricio Vasquez B <mauricio.vasquez@polito.it>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
The ARG_PTR_TO_UNINIT_MAP_VALUE argument is a pointer to a memory zone
used to save the value of a map. It is basically the same as
ARG_PTR_TO_UNINIT_MEM, except that the size does not have to be passed
as an extra argument.
This will be used in the following patch that implements some new
helpers that receive a pointer to be filled with a map value.
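As a sketch, a helper proto using the new argument type could look like
this (modeled on the kernel's bpf_func_proto definitions; details are
illustrative), letting the verifier derive the size from map->value_size:
const struct bpf_func_proto bpf_map_pop_elem_proto = {
	.func      = bpf_map_pop_elem,
	.gpl_only  = false,
	.ret_type  = RET_INTEGER,
	.arg1_type = ARG_CONST_MAP_PTR,
	.arg2_type = ARG_PTR_TO_UNINIT_MAP_VALUE,	/* no size argument */
};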
Signed-off-by: Mauricio Vasquez B <mauricio.vasquez@polito.it>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
This commit adds the required logic to allow the key to be NULL in case
the key_size of the map is 0.
A new __bpf_copy_key() helper copies the key from userspace only when
key_size != 0; otherwise it enforces that the key must be NULL.
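A sketch of the helper as described:
static void *__bpf_copy_key(void __user *ukey, u64 key_size)
{
	if (key_size)
		return memdup_user(ukey, key_size);

	if (ukey)
		return ERR_PTR(-EINVAL);	/* key must be NULL */

	return NULL;
}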
Signed-off-by: Mauricio Vasquez B <mauricio.vasquez@polito.it>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
In the following patches, queue and stack maps (FIFO and LIFO data
structures) will be implemented. In order to avoid confusion and a
possible name clash, rename stack_map_ops to stack_trace_map_ops.
Signed-off-by: Mauricio Vasquez B <mauricio.vasquez@polito.it>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Add wrappers for the PDC_PAT_CELL_GET_INFO and
PDC_PAT_PD_GET_PDC_INTERF_REV PAT PDC subfunctions.
Both provide access to the PAT capability bitfield, which can tell us
whether simultaneous PTLBs are allowed on the bus and whether firmware
will rendezvous all processors within PDCE_Check in case of an HPMC.
Signed-off-by: Helge Deller <deller@gmx.de>
|
|
Remove two instructions from the hot path. The temporary move to %r9 is
unnecessary, and the zero-initialization of pte happens twice.
Signed-off-by: Helge Deller <deller@gmx.de>
|
|
The zdep and depw,z mnemonics generate the same code. However, the
assembler will not accept the depw,z mnemonic when generating PA 1.x
code, while the zdep mnemonic is also okay when generating PA 2.0 code.
This patch therefore changes depw,z to zdep in the current shlw macro;
the generated binary code stays the same.
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: John David Anglin <dave.anglin@bell.net>
|
|
Register the MDT segments, custom dumpfn and private data with the
remoteproc core dump functionality.
Signed-off-by: Sibi Sankar <sibis@codeaurora.org>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
|
|
The per segment dump function is responsible for loading the mba
before device memory segments associated with coredump can be populated
and for cleaning up the resources post coredump.
Signed-off-by: Sibi Sankar <sibis@codeaurora.org>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
|
|
Refactor the reusable parts of the mba load/unload sequence into
mba_load and mba_reclaim respectively. This is done in order to prevent
code duplication for the modem coredump, which requires the mba to be
loaded before dumping the segments. The following changes in
functionality are intended:
* Add software bypass to avoid high MX current in mpss error path.
* Remove the proxy votes of clk/regs only after the active/reset clks/regs.
* Reclaim MBA memory after mpss_load failure in mba_reclaim func.
* Set/Unset the dump_mba_loaded flag on mba_load/mba_reclaim respectively.
Signed-off-by: Sibi Sankar <sibis@codeaurora.org>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
|
|
This patch adds a mechanism for assigning each rproc dump segment with
a custom dump function and private data. The dump function is to be
called for each rproc segment during coredump if assigned.
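The assumed shape of the new API, per this series (declaration sketch
only; see the note below about the reordered arguments):
int rproc_coredump_add_custom_segment(struct rproc *rproc,
				      dma_addr_t da, size_t size,
				      void (*dumpfn)(struct rproc *rproc,
						     struct rproc_dump_segment *segment,
						     void *dest),
				      void *priv);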
Signed-off-by: Sibi Sankar <sibis@codeaurora.org>
[bjorn: reordered arguments to rproc_coredump_add_custom_segment()]
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
|
|
This simply adds the field to 'struct perf_evsel' and allows setting
it via the event parser. To test it, let's trace 'trace' itself:
First, look at where in a function that receives an evsel we can put a
probe to read how evsel->max_events was set up:
# perf probe -x ~/bin/perf -L trace__event_handler
<trace__event_handler@/home/acme/git/perf/tools/perf/builtin-trace.c:0>
0 static int trace__event_handler(struct trace *trace, struct perf_evsel *evsel,
union perf_event *event __maybe_unused,
struct perf_sample *sample)
3 {
4 struct thread *thread = machine__findnew_thread(trace->host, sample->pid, sample->tid);
5 int callchain_ret = 0;
7 if (sample->callchain) {
8 callchain_ret = trace__resolve_callchain(trace, evsel, sample, &callchain_cursor);
9 if (callchain_ret == 0) {
10 if (callchain_cursor.nr < trace->min_stack)
11 goto out;
12 callchain_ret = 1;
}
}
See what variables we can probe at line 7:
# perf probe -x ~/bin/perf -V trace__event_handler:7
Available variables at trace__event_handler:7
@<trace__event_handler+89>
int callchain_ret
struct perf_evsel* evsel
struct perf_sample* sample
struct thread* thread
struct trace* trace
union perf_event* event
Add a probe at that line asking for evsel->max_events to be collected and named
as "max_events":
# perf probe -x ~/bin/perf trace__event_handler:7 'max_events=evsel->max_events'
Added new event:
probe_perf:trace__event_handler (on trace__event_handler:7 in /home/acme/bin/perf with max_events=evsel->max_events)
You can now use it in all perf tools, such as:
perf record -e probe_perf:trace__event_handler -aR sleep 1
Now use 'perf trace', here aliased to just 'trace', and trace 'trace'
itself: the first 'trace' traces just that 'probe_perf:trace__event_handler'
event, while the traced 'trace' traces all scheduler tracepoints, stops
after two events (--max-events 2) and sets evsel->max_events for all the
sched tracepoints to 9. We will see the output of both traces intermixed:
# trace -e *perf:*event_handler trace --max-events 2 -e sched:*/nr=9/
0.000 :0/0 sched:sched_waking:comm=rcu_sched pid=10 prio=120 target_cpu=000
0.009 :0/0 sched:sched_wakeup:comm=rcu_sched pid=10 prio=120 target_cpu=000
0.000 trace/23949 probe_perf:trace__event_handler:(48c34a) max_events=0x9
0.046 trace/23949 probe_perf:trace__event_handler:(48c34a) max_events=0x9
#
Now, if the traced trace sends its output to /dev/null, we'll see just
what the first level trace outputs: that evsel->max_events is indeed
being set to 9:
# trace -e *perf:*event_handler trace -o /dev/null --max-events 2 -e sched:*/nr=9/
0.000 trace/23961 probe_perf:trace__event_handler:(48c34a) max_events=0x9
0.030 trace/23961 probe_perf:trace__event_handler:(48c34a) max_events=0x9
#
Now that we can set evsel->max_events, we can go to the next step: honour
that per-event property in 'perf trace'.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-og00yasj276joem6e14l1eas@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Remove the coherent buffer __iomem cookie because the buffer is
allocated from dma_alloc_coherent().
warning: incorrect type in assignment (different address spaces)
expected unsigned char [noderef] [usertype] <asn:2>*virt_base
got void *[assigned] mem
warning: incorrect type in argument 3 (different address spaces)
expected void *cpu_addr
got unsigned char [noderef] [usertype] <asn:2>*virt_base
Signed-off-by: Laurence Rochfort <laurence.rochfort@gmail.com>
Reviewed-by: Todd Poynor <toddpoynor@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Debug logs for device-specific callback invocation aren't very useful,
remove.
Signed-off-by: Todd Poynor <toddpoynor@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Remove very noisy debug logs that also contain typos and incorrect
output formats.
Signed-off-by: Todd Poynor <toddpoynor@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Using sizeof(*ptr) instead of sizeof(ptr_type) makes memory
allocation robust in case the type of the pointer changes.
Fix all checkpatch reported issues for "CHECK: Prefer
kzalloc(sizeof(*<p>)...) over kzalloc(sizeof(struct <P>)...)".
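For instance:
/* The preferred form stays correct if the pointee type changes: */
struct foo *p;

p = kzalloc(sizeof(*p), GFP_KERNEL);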
Signed-off-by: Mamta Shukla <mamtashukla555@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Remove extra blank line. Issue found by checkpatch.pl.
Signed-off-by: Maya Nakamura <m.maya.nakamura@gmail.com>
Acked-by: Julia Lawall <julia.lawall@lip6.fr>
Reviewed-by: Vaishali Thakkar <vthakkar@vaishalithakkar.in>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Remove extra blank line. Issue found by checkpatch.pl.
Signed-off-by: Maya Nakamura <m.maya.nakamura@gmail.com>
Reviewed-by: Vaishali Thakkar <vthakkar@vaishalithakkar.in>
Acked-by: Todd Poynor <toddpoynor@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Fix the spelling mistake in enumerator identifier
RESIZER_MODE_CONTINIOUS. 'CONTINIOUS' should be 'CONTINUOUS'. Issue
found by checkpatch.
Signed-off-by: Kimberly Brown <kimbrownkd@gmail.com>
Reviewed-by: Vaishali Thakkar <vthakkar@vaishalithakkar.in>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Add a pair of braces to make all arms of the if statement consistent.
Issue found by checkpatch.pl.
Signed-off-by: Maya Nakamura <m.maya.nakamura@gmail.com>
Acked-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Replace 'long int' with 'long', as the 'int' is unnecessary according to
the checkpatch.pl warning. K&R write, 'The word int can be omitted... and
typically is.'
Signed-off-by: Maya Nakamura <m.maya.nakamura@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Introduce a custom dump function and private data per remoteproc dump
segment. The dump function is responsible for filling the device memory
segment associated with the coredump.
Signed-off-by: Sibi Sankar <sibis@codeaurora.org>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
|
|
net/sched/cls_api.c has overlapping changes to a call to
nlmsg_parse(), one (from 'net') added rtm_tca_policy instead of NULL
to the 5th argument, and another (from 'net-next') added cb->extack
instead of NULL to the 6th argument.
net/ipv4/ipmr_base.c is a case of a bug fix in 'net' being done to
code which moved (to mr_table_dump) in 'net-next'. Thanks to David
Ahern for the heads up.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
If PARPORT_PC_FIFO is not enabled, do not provide the dma lock
macros and lock definition. Otherwise:
./arch/sparc/include/asm/parport.h:24:24: warning: ‘dma_spin_lock’ defined but not used [-Wunused-variable]
static DEFINE_SPINLOCK(dma_spin_lock);
^~~~~~~~~~~~~
./include/linux/spinlock_types.h:81:39: note: in definition of macro ‘DEFINE_SPINLOCK’
#define DEFINE_SPINLOCK(x) spinlock_t x = __SPIN_LOCK_UNLOCKED(x)
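A sketch of the guard, assuming the existing macros stay as-is and only
move under the ifdef:
#ifdef CONFIG_PARPORT_PC_FIFO
static DEFINE_SPINLOCK(dma_spin_lock);

#define claim_dma_lock() \
({	unsigned long flags; \
	spin_lock_irqsave(&dma_spin_lock, flags); \
	flags; \
})

#define release_dma_lock(__flags) \
	spin_unlock_irqrestore(&dma_spin_lock, __flags)
#endif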
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This reverts commit 6fe9487892b32cb1c8b8b0d552ed7222a527fe30.
It is causing more serious regressions than the RCU warning
it is fixing.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
There is no need to have the 'struct rocker_desc_info *desc_info'
variable static, since a new value is always assigned to it before use.
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
synaptics_detect() does not check whether sending commands to the
device succeeds and instead relies on getting unique data back from the
device. Let's make sure we seed the entire buffer with zeroes so that we
will not use garbage on the stack that just happens to be 0x47.
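Illustrative sketch of the idea (buffer name and check simplified from
the driver; surrounding probe sequence omitted):
u8 param[4] = { 0 };	/* no stale stack byte can fake the magic value */

ps2_command(ps2dev, param, PSMOUSE_CMD_GETINFO);
if (param[1] != 0x47)	/* Synaptics signature byte */
	return -ENODEV;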
Reported-by: syzbot+13cb3b01d0784e4ffc3f@syzkaller.appspotmail.com
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb
I wrote:
"USB fixes for 4.19-final
Here are a small number of last-minute USB driver fixes
Included here are:
- spectre fix for usb storage gadgets
- xhci fixes
- cdc-acm fixes
- usbip fixes for reported problems
All of these have been in linux-next with no reported issues."
* tag 'usb-4.19-final' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb:
usb: gadget: storage: Fix Spectre v1 vulnerability
USB: fix the usbfs flag sanitization for control transfers
usb: xhci: pci: Enable Intel USB role mux on Apollo Lake platforms
usb: roles: intel_xhci: Fix Unbalanced pm_runtime_enable
cdc-acm: correct counting of UART states in serial state notification
cdc-acm: do not reset notification buffer index upon urb unlinking
cdc-acm: fix race between reset and control messaging
usb: usbip: Fix BUG: KASAN: slab-out-of-bounds in vhci_hub_control()
selftests: usbip: add wait after attach and before checking port status
|
|
Jens writes:
"Block fixes for 4.19-final
Two small fixes that should go into this release."
* tag 'for-linus-20181019' of git://git.kernel.dk/linux-block:
block: don't deal with discard limit in blkdev_issue_discard()
nvme: remove ns sibling before clearing path
|
|
It seems we have some leftovers from the times when 'unrestricted guest'
wasn't exposed to L1. Stop shadowing GUEST_CS_{BASE,LIMIT,AR_SELECTOR}
and GUEST_ES_BASE; start shadowing GUEST_SS_AR_BYTES, as it was found
that some hypervisors (e.g. Hyper-V without Enlightened VMCS) access it
pretty often.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
|
|
Add touchscreen platform data for the Onda V80 Plus v3 tablet.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
|
|
Add touchscreen info for the Trekstor Primetab T13B tablet.
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Marian Cepok <marian.cepok@gmail.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
|
|
Replace the custom-grown macro with the generic INTEL_CPU_FAM6() one.
No functional change intended.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
|
|
On some Goldmont based systems such as the ASRock J3455M, the BIOS may
not enable the IPC1 device that provides access to the PMC and PUNIT. In
such scenarios, the IOSS and PSS resources from the platform device
cannot be obtained, resulting in an invalid telemetry_plt_config, the
internal data structure that holds the platform config and is maintained
by the telemetry platform driver.
This is also applicable to platforms where the BIOS supports the IPC1
device under debug configurations but IPC1 is disabled by the user or by
policy.
This change allows the user to know the reason for not seeing entries
under /sys/kernel/debug/telemetry/* when there is no apparent failure at
boot.
Cc: Matt Turner <matt.turner@intel.com>
Cc: Len Brown <len.brown@intel.com>
Cc: Souvik Kumar Chakravarty <souvik.k.chakravarty@intel.com>
Cc: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@intel.com>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=198779
Acked-by: Matt Turner <matt.turner@intel.com>
Signed-off-by: Rajneesh Bhardwaj <rajneesh.bhardwaj@linux.intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
|
|
Remove Souvik who has left this role. Add Rajneesh and David who work
jointly on telemetry updates for new platforms.
Signed-off-by: David E. Box <david.e.box@intel.com>
Signed-off-by: Rajneesh Bhardwaj <rajneesh.bhardwaj@linux.intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
|
|
A driver for LG Gram laptop supporting features not available through the
standard interfaces:
- Support for the 5 Fn keys that generate ACPI or WMI events.
- Two software controlled LEDs: keyboard backlight (also controlled by
hardware) and touchpad LED.
- Extra features: reader mode, Fn lock, cooling mode, USB charge mode, and
maximal battery charging level.
Signed-off-by: Matan Ziv-Av <matan@svgalib.org>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
|
|
Fixes: 18178ff86217 ("KVM: selftests: add Enlightened VMCS test")
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
The MCLK clock is optional for the cs42l51 codec. However, the ASoC DAPM
clock supply widget expects the clock to be defined unconditionally.
Register the MCLK DAPM conditionally in the codec driver, depending on
the clock's presence in the DT.
Fixes: 5e8d63a726f8 ("ASoC: cs42l51: add mclk support")
Signed-off-by: Olivier Moysan <olivier.moysan@st.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
Pull NVMe updates from Christoph:
"The second batch of updates for Linux 4.20:
- lot of fixes for issues found by static type checkers from Bart
- two small fixes from Keith
- fabrics cleanups in preparation of the TCP transport from Sagi
- more cleanups from Chaitanya"
* 'nvme-4.20' of git://git.infradead.org/nvme:
nvme-fabrics: move controller options matching to fabrics
nvme-rdma: always have a valid trsvcid
nvme-pci: remove duplicate check
nvme-pci: fix hot removal during error handling
nvmet-fcloop: suppress a compiler warning
nvme-core: make implicit seed truncation explicit
nvmet-fc: fix kernel-doc headers
nvme-fc: rework the request initialization code
nvme-fc: introduce struct nvme_fcp_op_w_sgl
nvme-fc: fix kernel-doc headers
nvmet: avoid integer overflow in the discard code
nvmet-rdma: declare local symbols static
nvmet: use strlcpy() instead of strcpy()
nvme-pci: fix nvme_suspend_queue() kernel-doc header
nvme-core: rework a NQN copying operation
nvme-core: declare local symbols static
nvmet-rdma: check for timeout in nvme_rdma_wait_for_cm()
nvmet: use strcmp() instead of strncmp() for subsystem lookup
nvmet: remove unreachable code
nvme: update node paths after adding new path
|
|
Commit 1e77d0a1ed74 ("genirq: Sanitize spurious interrupt detection of
threaded irqs") made detection of spurious interrupts work for threaded
handlers by:
a) incrementing a counter every time the thread returns IRQ_HANDLED, and
b) checking whether that counter has increased every time the thread is
woken.
However for oneshot interrupts, the commit unmasks the interrupt before
incrementing the counter. If another interrupt occurs right after
unmasking but before the counter is incremented, that interrupt is
incorrectly considered spurious:
time
| irq_thread()
| irq_thread_fn()
| action->thread_fn()
| irq_finalize_oneshot()
| unmask_threaded_irq() /* interrupt is unmasked */
|
| /* interrupt fires, incorrectly deemed spurious */
|
| atomic_inc(&desc->threads_handled); /* counter is incremented */
v
This is observed with a hi3110 CAN controller receiving data at high volume
(from a separate machine sending with "cangen -g 0 -i -x"): The controller
signals a huge number of interrupts (hundreds of millions per day) and
every second there are about a dozen which are deemed spurious.
In theory with high CPU load and the presence of higher priority tasks, the
number of incorrectly detected spurious interrupts might increase beyond
the 99,900 threshold and cause disablement of the interrupt.
In practice it just increments the spurious interrupt count. But that can
cause people to waste time investigating it over and over.
Fix it by moving the accounting before the invocation of
irq_finalize_oneshot().
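A simplified sketch of the reordering (the actual fix touches both the
forced and the normal threaded handler paths):
static irqreturn_t irq_thread_fn(struct irq_desc *desc,
				 struct irqaction *action)
{
	irqreturn_t ret = action->thread_fn(action->irq, action->dev_id);

	if (ret == IRQ_HANDLED)
		atomic_inc(&desc->threads_handled);	/* account first... */

	irq_finalize_oneshot(desc, action);		/* ...then unmask */
	return ret;
}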
[ tglx: Folded change log update ]
Fixes: 1e77d0a1ed74 ("genirq: Sanitize spurious interrupt detection of threaded irqs")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Mathias Duckeck <m.duckeck@kunbus.de>
Cc: Akshay Bhat <akshay.bhat@timesys.com>
Cc: Casey Fitzpatrick <casey.fitzpatrick@timesys.com>
Cc: stable@vger.kernel.org # v3.16+
Link: https://lkml.kernel.org/r/1dfd8bbd16163940648045495e3e9698e63b50ad.1539867047.git.lukas@wunner.de
|
|
Allow stopping tracing after a given number of events take place,
counting strace-like syscall formatting as one event per enter/exit
pair, or as a single event when, in a multi-process tracing session, a
syscall is interrupted and printed ending with '...'.
Examples included in the documentation:
Trace the first 4 open, openat or open_by_handle_at syscalls (in the future more syscalls may match here):
$ perf trace -e open* --max-events 4
[root@jouet perf]# trace -e open* --max-events 4
2272.992 ( 0.037 ms): gnome-shell/1370 openat(dfd: CWD, filename: /proc/self/stat) = 31
2277.481 ( 0.139 ms): gnome-shell/3039 openat(dfd: CWD, filename: /proc/self/stat) = 65
3026.398 ( 0.076 ms): gnome-shell/3039 openat(dfd: CWD, filename: /proc/self/stat) = 65
4294.665 ( 0.015 ms): sed/15879 openat(dfd: CWD, filename: /etc/ld.so.cache, flags: CLOEXEC) = 3
$
Trace the first minor page fault when running a workload:
# perf trace -F min --max-stack=7 --max-events 1 sleep 1
0.000 ( 0.000 ms): sleep/18006 minfault [__clear_user+0x1a] => 0x5626efa56080 (?k)
__clear_user ([kernel.kallsyms])
load_elf_binary ([kernel.kallsyms])
search_binary_handler ([kernel.kallsyms])
__do_execve_file.isra.33 ([kernel.kallsyms])
__x64_sys_execve ([kernel.kallsyms])
do_syscall_64 ([kernel.kallsyms])
entry_SYSCALL_64 ([kernel.kallsyms])
#
Trace the next minor page fault to take place on the first CPU:
# perf trace -F min --call-graph=dwarf --max-events 1 --cpu 0
0.000 ( 0.000 ms): Web Content/17136 minfault [js::gc::Chunk::fetchNextDecommittedArena+0x4b] => 0x7fbe6181b000 (?.)
js::gc::FreeSpan::initAsEmpty (inlined)
js::gc::Arena::setAsNotAllocated (inlined)
js::gc::Chunk::fetchNextDecommittedArena (/usr/lib64/firefox/libxul.so)
js::gc::Chunk::allocateArena (/usr/lib64/firefox/libxul.so)
js::gc::GCRuntime::allocateArena (/usr/lib64/firefox/libxul.so)
js::gc::ArenaLists::allocateFromArena (/usr/lib64/firefox/libxul.so)
js::gc::GCRuntime::tryNewTenuredThing<JSString, (js::AllowGC)1> (inlined)
js::AllocateString<JSString, (js::AllowGC)1> (/usr/lib64/firefox/libxul.so)
js::Allocate<JSThinInlineString, (js::AllowGC)1> (inlined)
JSThinInlineString::new_<(js::AllowGC)1> (inlined)
AllocateInlineString<(js::AllowGC)1, unsigned char> (inlined)
js::ConcatStrings<(js::AllowGC)1> (/usr/lib64/firefox/libxul.so)
[0x18b26e6bc2bd] (/tmp/perf-17136.map)
Tracing the next four ext4 operations on a specific CPU:
# perf trace -e ext4:*/call-graph=fp/ --max-events 4 --cpu 3
0.000 mutt/3849 ext4:ext4_es_lookup_extent_enter:dev 253,2 ino 57277 lblk 0
ext4_es_lookup_extent ([kernel.kallsyms])
read (/usr/lib64/libc-2.26.so)
0.097 mutt/3849 ext4:ext4_es_lookup_extent_exit:dev 253,2 ino 57277 found 0 [0/0) 0
ext4_es_lookup_extent ([kernel.kallsyms])
read (/usr/lib64/libc-2.26.so)
0.141 mutt/3849 ext4:ext4_ext_map_blocks_enter:dev 253,2 ino 57277 lblk 0 len 1 flags
ext4_ext_map_blocks ([kernel.kallsyms])
read (/usr/lib64/libc-2.26.so)
0.184 mutt/3849 ext4:ext4_ext_load_extent:dev 253,2 ino 57277 lblk 1516511 pblk 18446744071750013657
__read_extent_tree_block ([kernel.kallsyms])
__read_extent_tree_block ([kernel.kallsyms])
ext4_find_extent ([kernel.kallsyms])
ext4_ext_map_blocks ([kernel.kallsyms])
ext4_map_blocks ([kernel.kallsyms])
ext4_mpage_readpages ([kernel.kallsyms])
read_pages ([kernel.kallsyms])
__do_page_cache_readahead ([kernel.kallsyms])
ondemand_readahead ([kernel.kallsyms])
generic_file_read_iter ([kernel.kallsyms])
__vfs_read ([kernel.kallsyms])
vfs_read ([kernel.kallsyms])
ksys_read ([kernel.kallsyms])
do_syscall_64 ([kernel.kallsyms])
entry_SYSCALL_64 ([kernel.kallsyms])
read (/usr/lib64/libc-2.26.so)
#
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Rudá Moura <ruda.moura@gmail.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-sweh107bs7ol5bzls0m4tqdz@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
For completeness; this will be used in 'perf trace --max-events'.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Kim Phillips <kim.phillips@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: https://lkml.kernel.org/n/tip-glaj3pwespxfj2fdjs9a20b6@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|