2018-03-28mm/kmemleak.c: wait for scan completion before disabling freeVinayak Menon
A crash is observed when kmemleak_scan accesses the object->pointer, likely due to the following race:

    TASK A                   TASK B                       TASK C
    kmemleak_write
     (with "scan" and
      NOT "scan=on")
    kmemleak_scan()
                             create_object
                             kmem_cache_alloc fails
                             kmemleak_disable
                             kmemleak_do_cleanup
                             kmemleak_free_enabled = 0
                                                          kfree
                                                          kmemleak_free
                                                           bails out
                                                           (kmemleak_free_enabled
                                                            is 0)
                                                          slub frees
                                                           object->pointer
    update_checksum
     crash - object->pointer
     freed (DEBUG_PAGEALLOC)

kmemleak_do_cleanup waits for the scan thread to complete, but not for a direct call to kmemleak_scan via kmemleak_write. So add a wait for kmemleak_scan completion before disabling kmemleak_free, and while at it fix the comment on stop_scan_thread. [vinmenon@codeaurora.org: fix stop_scan_thread comment] Link: http://lkml.kernel.org/r/1522219972-22809-1-git-send-email-vinmenon@codeaurora.org Link: http://lkml.kernel.org/r/1522063429-18992-1-git-send-email-vinmenon@codeaurora.org Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
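The shape of the fix is small; a simplified sketch, assuming the existing scan_mutex is what serializes scanners, and omitting the rest of the cleanup work:

    /* mm/kmemleak.c -- sketch, not the verbatim patch */
    static void kmemleak_do_cleanup(struct work_struct *work)
    {
    	stop_scan_thread();

    	/*
    	 * Once the scan thread has stopped, taking scan_mutex also waits
    	 * out any direct kmemleak_scan() call issued via kmemleak_write,
    	 * so only then is it safe to stop tracking object freeing.
    	 */
    	mutex_lock(&scan_mutex);
    	kmemleak_free_enabled = 0;
    	mutex_unlock(&scan_mutex);
    	/* ... remaining cleanup elided ... */
    }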
2018-03-28mm/memcontrol.c: fix parameter description mismatchHonglei Wang
There are a couple of places where the parameter description and function name do not match the actual code. Fix them. Link: http://lkml.kernel.org/r/1520843448-17347-1-git-send-email-honglei.wang@oracle.com Signed-off-by: Honglei Wang <honglei.wang@oracle.com> Acked-by: Tejun Heo <tj@kernel.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@kernel.org> Cc: Vladimir Davydov <vdavydov.dev@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-03-28mm/vmstat.c: fix vmstat_update() preemption BUGSteven J. Hill
Attempting to hotplug CPUs with CONFIG_VM_EVENT_COUNTERS enabled can cause vmstat_update() to report a BUG due to preemption not being disabled around smp_processor_id(). Discovered on a Ubiquiti EdgeRouter Pro with a Cavium Octeon II processor.

    BUG: using smp_processor_id() in preemptible [00000000] code: kworker/1:1/269
    caller is vmstat_update+0x50/0xa0
    CPU: 0 PID: 269 Comm: kworker/1:1 Not tainted 4.16.0-rc4-Cavium-Octeon-00009-gf83bbd5-dirty #1
    Workqueue: mm_percpu_wq vmstat_update
    Call Trace:
      show_stack+0x94/0x128
      dump_stack+0xa4/0xe0
      check_preemption_disabled+0x118/0x120
      vmstat_update+0x50/0xa0
      process_one_work+0x144/0x348
      worker_thread+0x150/0x4b8
      kthread+0x110/0x140
      ret_from_kernel_thread+0x14/0x1c

Link: http://lkml.kernel.org/r/1520881552-25659-1-git-send-email-steven.hill@cavium.com Signed-off-by: Steven J. Hill <steven.hill@cavium.com> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Cc: Tejun Heo <htejun@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
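The fix amounts to pinning the task to a CPU across the smp_processor_id() call; a sketch of the affected function, assuming this is the reported call site:

    /* mm/vmstat.c -- sketch of the fix */
    static void vmstat_update(struct work_struct *w)
    {
    	if (refresh_cpu_vm_stats(true)) {
    		/*
    		 * Disable preemption so the CPU returned by
    		 * smp_processor_id() cannot change before the per-cpu
    		 * work is queued on that same CPU.
    		 */
    		preempt_disable();
    		queue_delayed_work_on(smp_processor_id(), mm_percpu_wq,
    			this_cpu_ptr(&vmstat_work),
    			round_jiffies_relative(sysctl_stat_interval));
    		preempt_enable();
    	}
    }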
2018-03-28mm/page_owner: fix recursion bug after changing skip entriesManinder Singh
This patch fixes commit 5f48f0bd4e36 ("mm, page_owner: skip unnecessary stack_trace entries"). If we skip the first two entries, then the logic of checking the count value as 2 for recursion is broken and the code will go into one-depth recursion. So we need to check for only one call of _RET_IP (__set_page_owner) while checking for recursion.

Current backtrace while checking for recursion:

    (save_stack) from (__set_page_owner)      // (but recursion returns true here)
    (__set_page_owner) from (get_page_from_freelist)
    (get_page_from_freelist) from (__alloc_pages_nodemask)
    (__alloc_pages_nodemask) from (depot_save_stack)
    (depot_save_stack) from (save_stack)      // recursion should return true here
    (save_stack) from (__set_page_owner)
    (__set_page_owner) from (get_page_from_freelist)
    (get_page_from_freelist) from (__alloc_pages_nodemask)
    (__alloc_pages_nodemask) from (depot_save_stack)
    (depot_save_stack) from (save_stack)
    (save_stack) from (__set_page_owner)
    (__set_page_owner) from (get_page_from_freelist)

Correct backtrace with the fix:

    (save_stack) from (__set_page_owner)      // recursion returned true here
    (__set_page_owner) from (get_page_from_freelist)
    (get_page_from_freelist) from (__alloc_pages_nodemask)
    (__alloc_pages_nodemask) from (depot_save_stack)
    (depot_save_stack) from (save_stack)
    (save_stack) from (__set_page_owner)
    (__set_page_owner) from (get_page_from_freelist)

Link: http://lkml.kernel.org/r/1521607043-34670-1-git-send-email-maninder1.s@samsung.com Fixes: 5f48f0bd4e36 ("mm, page_owner: skip unnecessary stack_trace entries") Signed-off-by: Maninder Singh <maninder1.s@samsung.com> Signed-off-by: Vaneet Narang <v.narang@samsung.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: Michal Hocko <mhocko@suse.com> Cc: Oscar Salvador <osalvador@techadventures.net> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Ayush Mittal <ayush.m@samsung.com> Cc: Prakash Gupta <guptap@codeaurora.org> Cc: Vinayak Menon <vinmenon@codeaurora.org> Cc: Vasyl Gomonovych <gomonovych@gmail.com> Cc: Amit Sahrawat <a.sahrawat@samsung.com> Cc: <pankaj.m@samsung.com> Cc: Vaneet Narang <v.narang@samsung.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-03-28ipc/shm.c: add split function to shm_vm_opsMike Kravetz
If System V shmget/shmat operations are used to create a hugetlbfs backed mapping, it is possible to munmap part of the mapping and split the underlying vma such that it is not huge page aligned. This will ultimately result in the following BUG:

    kernel BUG at /build/linux-jWa1Fv/linux-4.15.0/mm/hugetlb.c:3310!
    Oops: Exception in kernel mode, sig: 5 [#1]
    LE SMP NR_CPUS=2048 NUMA PowerNV
    Modules linked in: kcm nfc af_alg caif_socket caif phonet fcrypt
    CPU: 18 PID: 43243 Comm: trinity-subchil Tainted: G C E 4.15.0-10-generic #11-Ubuntu
    NIP: c00000000036e764 LR: c00000000036ee48 CTR: 0000000000000009
    REGS: c000003fbcdcf810 TRAP: 0700 Tainted: G C E (4.15.0-10-generic)
    MSR: 9000000000029033 <SF,HV,EE,ME,IR,DR,RI,LE> CR: 24002222 XER: 20040000
    CFAR: c00000000036ee44 SOFTE: 1
    NIP __unmap_hugepage_range+0xa4/0x760
    LR __unmap_hugepage_range_final+0x28/0x50
    Call Trace:
     0x7115e4e00000 (unreliable)
     __unmap_hugepage_range_final+0x28/0x50
     unmap_single_vma+0x11c/0x190
     unmap_vmas+0x94/0x140
     exit_mmap+0x9c/0x1d0
     mmput+0xa8/0x1d0
     do_exit+0x360/0xc80
     do_group_exit+0x60/0x100
     SyS_exit_group+0x24/0x30
     system_call+0x58/0x6c
    ---[ end trace ee88f958a1c62605 ]---

This bug was introduced by commit 31383c6865a5 ("mm, hugetlbfs: introduce ->split() to vm_operations_struct"). A split function was added to vm_operations_struct to determine if a mapping can be split. This was mostly for device-dax and hugetlbfs mappings which have specific alignment constraints.

Mappings initiated via shmget/shmat have their original vm_ops overwritten with shm_vm_ops. shm_vm_ops functions will call back to the original vm_ops if needed. Add such a split function to shm_vm_ops.

Link: http://lkml.kernel.org/r/20180321161314.7711-1-mike.kravetz@oracle.com Fixes: 31383c6865a5 ("mm, hugetlbfs: introduce ->split() to vm_operations_struct") Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com> Reported-by: Laurent Dufour <ldufour@linux.vnet.ibm.com> Reviewed-by: Laurent Dufour <ldufour@linux.vnet.ibm.com> Tested-by: Laurent Dufour <ldufour@linux.vnet.ibm.com> Reviewed-by: Dan Williams <dan.j.williams@intel.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Manfred Spraul <manfred@colorfullife.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
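The added callback is a thin wrapper in the style of the other shm_vm_ops hooks; a sketch, assuming the existing shm_file_data() accessor, wired up as .split = shm_split in shm_vm_ops:

    /* ipc/shm.c -- sketch of the new callback */
    static int shm_split(struct vm_area_struct *vma, unsigned long addr)
    {
    	struct file *file = vma->vm_file;
    	struct shm_file_data *sfd = shm_file_data(file);

    	/* defer to the underlying (e.g. hugetlbfs) vm_ops, if it cares */
    	if (sfd->vm_ops->split)
    		return sfd->vm_ops->split(vma, addr);

    	return 0;
    }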
2018-03-28mm, slab: memcg_link the SLAB's kmem_cacheShakeel Butt
All the root caches are linked into slab_root_caches, which was introduced by commit 510ded33e075 ("slab: implement slab_root_caches list"), but it missed adding the SLAB's kmem_cache. While experimenting with opt-in/opt-out kmem accounting, I noticed system crashes due to a NULL dereference inside cache_from_memcg_idx() while dereferencing kmem_cache.memcg_params.memcg_caches. A clean upstream kernel will not see these crashes, but SLAB should be consistent with SLUB, which does link its boot caches (kmem_cache_node and kmem_cache) into slab_root_caches. Link: http://lkml.kernel.org/r/20180319210020.60289-1-shakeelb@google.com Fixes: 510ded33e075c ("slab: implement slab_root_caches list") Signed-off-by: Shakeel Butt <shakeelb@google.com> Cc: Tejun Heo <tj@kernel.org> Cc: Vladimir Davydov <vdavydov.dev@gmail.com> Cc: Greg Thelen <gthelen@google.com> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-03-29Merge branch 'drm-misc-next-fixes' of git://anongit.freedesktop.org/drm/drm-misc into drm-nextDave Airlie
- Mask mode type garbage from userspace (Ville)

Something went wrong on the misc tree side, but I'll pull the patch directly.

* 'drm-misc-next-fixes' of git://anongit.freedesktop.org/drm/drm-misc:
  drm: Fix uabi regression by allowing garbage mode->type from userspace
2018-03-28RDMA/CMA: remove RDMA_PS_SDPSteve Wise
This is no longer supported, so remove it. Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2018-03-28RDMA/ucma: Introduce safer rdma_addr_size() variantsRoland Dreier
There are several places in the ucma ABI where userspace can pass in a sockaddr but set the address family to AF_IB. When that happens, rdma_addr_size() will return a size bigger than sizeof(struct sockaddr_in6), and the ucma kernel code might end up copying past the end of a buffer not sized for a struct sockaddr_ib.

Fix this by introducing new variants

    int rdma_addr_size_in6(struct sockaddr_in6 *addr);
    int rdma_addr_size_kss(struct __kernel_sockaddr_storage *addr);

that are type-safe for the types used in the ucma ABI and return 0 if the size computed is bigger than the size of the type passed in. We can use these new variants to check what size userspace has passed in before copying any addresses.

Reported-by: <syzbot+6800425d54ed3ed8135d@syzkaller.appspotmail.com> Signed-off-by: Roland Dreier <roland@purestorage.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
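A sketch of how such a variant can be built on top of the existing rdma_addr_size() (illustrative, not necessarily the exact patch):

    /* drivers/infiniband/core/addr.c -- sketch */
    int rdma_addr_size_in6(struct sockaddr_in6 *addr)
    {
    	int ret = rdma_addr_size((struct sockaddr *)addr);

    	/* refuse sizes that would overrun the caller's sockaddr_in6 */
    	return ret <= sizeof(*addr) ? ret : 0;
    }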
2018-03-28ipc/shm: Fix pid freeing.Eric W. Biederman
The 0day kernel test build report reported an oops:

> IP: put_pid+0x22/0x5c
> PGD 19efa067 P4D 19efa067 PUD 0
> Oops: 0000 [#1]
> CPU: 0 PID: 727 Comm: trinity Not tainted 4.16.0-rc2-00010-g98f929b #1
> RIP: 0010:put_pid+0x22/0x5c
> RSP: 0018:ffff986719f73e48 EFLAGS: 00010202
> RAX: 00000006d765f710 RBX: ffff98671a4fa4d0 RCX: ffff986719f73d40
> RDX: 000000006f6e6125 RSI: 0000000000000000 RDI: ffffffffa01e6d21
> RBP: ffffffffa0955fe0 R08: 0000000000000020 R09: 0000000000000000
> R10: 0000000000000078 R11: ffff986719f73e76 R12: 0000000000001000
> R13: 00000000ffffffea R14: 0000000054000fb0 R15: 0000000000000000
> FS: 00000000028c2880(0000) GS:ffffffffa06ad000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 0000000677846439 CR3: 0000000019fc1005 CR4: 00000000000606b0
> Call Trace:
> ? ipc_update_pid+0x36/0x3e
> ? newseg+0x34c/0x3a6
> ? ipcget+0x5d/0x528
> ? entry_SYSCALL_64_after_hwframe+0x52/0xb7
> ? SyS_shmget+0x5a/0x84
> ? do_syscall_64+0x194/0x1b3
> ? entry_SYSCALL_64_after_hwframe+0x42/0xb7
> Code: ff 05 e7 20 9b 03 58 c9 c3 48 ff 05 85 21 9b 03 48 85 ff 74 4f 8b 47 04 8b 17 48 ff 05 7c 21 9b 03 48 83 c0 03 48 c1 e0 04 ff ca <48> 8b 44 07 08 74 1f 48 ff 05 6c 21 9b 03 ff 0f 0f 94 c2 48 ff
> RIP: put_pid+0x22/0x5c RSP: ffff986719f73e48
> CR2: 0000000677846439
> ---[ end trace ab8c5cb4389d37c5 ]---
> Kernel panic - not syncing: Fatal exception

In newseg, when changing shm_cprid and shm_lprid from pid_t to struct pid*, I misread the kvmalloc as kvzalloc and thought shp was initialized to 0. As that is not the case, it is not safe for the error handling to address shm_cprid and shm_lprid before they are initialized.

Therefore move the cleanup of shm_cprid and shm_lprid from the no_file error cleanup path to the no_id error cleanup path, ensuring that an early error exit won't cause the oops above.

Reported-by: kernel test robot <fengguang.wu@intel.com> Reviewed-by: Nagarathnam Muthusamy <nagarathnam.muthusamy@oracle.com> Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
2018-03-28locking/rtmutex: Handle non enqueued waiters gracefully in remove_waiter()Peter Zijlstra
In -RT, task_blocks_on_rt_mutex() may return with -EAGAIN due to (->pi_blocked_on == PI_WAKEUP_INPROGRESS) before it added itself as a waiter. In such a case remove_waiter() must not be called, because without a waiter it will trigger the BUG_ON() statement. This was initially reported by Yimin Deng. Thomas Gleixner then fixed it with an explicit check for waiters before calling remove_waiter(). Instead of an explicit NULL check before calling rt_mutex_top_waiter(), make the function return NULL if there are no waiters. With that fixed, the now pointless NULL check is removed from rt_mutex_slowlock(). Reported-and-debugged-by: Yimin Deng <yimin11.deng@gmail.com> Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/CAAh1qt=DCL9aUXNxanP5BKtiPp3m+qj4yB+gDohhXPVFCxWwzg@mail.gmail.com Link: https://lkml.kernel.org/r/20180327121438.sss7hxg3crqy4ecd@linutronix.de
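A sketch of the reworked helper, assuming the waiters tree is an rb_root_cached as in current kernels (field names illustrative):

    /* kernel/locking/rtmutex_common.h -- sketch */
    static inline struct rt_mutex_waiter *rt_mutex_top_waiter(struct rt_mutex *lock)
    {
    	struct rb_node *leftmost = rb_first_cached(&lock->waiters);
    	struct rt_mutex_waiter *w = NULL;

    	if (leftmost) {
    		w = rb_entry(leftmost, struct rt_mutex_waiter, tree_entry);
    		BUG_ON(w->lock != lock);
    	}
    	return w;	/* NULL when the lock has no waiters */
    }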
2018-03-28Merge branch 'bpf-raw-tracepoints'Daniel Borkmann
Alexei Starovoitov says:

====================
v7->v8:
- moved 'u32 num_args' from 'struct tracepoint' into 'struct bpf_raw_event_map'. That increases memory overhead, but it can be optimized/compressed later. Now it's zero changes in tracepoint.[ch]

v6->v7:
- adopted Steven's bpf_raw_tp_map section approach to find a tracepoint and the corresponding bpf probe function instead of the kallsyms approach. Dropped the kernel_tracepoint_find_by_name() patch

v5->v6:
- avoid changing semantics of the for_each_kernel_tracepoint() function, instead introduce a kernel_tracepoint_find_by_name() helper

v4->v5:
- adopted Daniel's fancy REPEAT macro in bpf_trace.c in patch 6

v3->v4:
- adopted Linus's CAST_TO_U64 macro to cast any integer, pointer, or small struct to u64. That nicely reduced the size of patch 1

v2->v3:
- with Linus's suggestion introduced generic COUNT_ARGS and CONCATENATE macros (or rather moved them from apparmor) that cleaned up patch 6
- added patch 4 to refactor trace_iwlwifi_dev_ucode_error() from 17 args to 4. Now any tracepoint with >12 args will produce a build error

v1->v2:
- simplified the api by combining bpf_raw_tp_open(name) + bpf_attach(prog_fd) into bpf_raw_tp_open(name, prog_fd) as suggested by Daniel. That simplifies bpf_detach as well, which is now a simple close() of the fd.
- fixed a memory leak in the error path which was spotted by Daniel
- fixed bpf_get_stackid() and bpf_perf_event_output() called from raw tracepoints
- added more tests
- fixed allyesconfig build caught by buildbot

v1:
This patch set is a different way to address the pressing need to access task_struct pointers in sched tracepoints from bpf programs.

The first approach simply added these pointers to sched tracepoints: https://lkml.org/lkml/2017/12/14/753 which Peter nacked. A few options were discussed and eventually the discussion converged on doing bpf-specific tracepoint_probe_register() probe functions. Details here: https://lkml.org/lkml/2017/12/20/929

Patch 1 is a kernel-wide cleanup of pass-struct-by-value into pass-struct-by-reference in tracepoints.
Patches 2 and 3 are minor cleanups to address the allyesconfig build.
Patch 4 refactors trace_iwlwifi_dev_ucode_error from 17 to 4 args.
Patch 5 introduces the COUNT_ARGS macro.
Patch 6 introduces the BPF_RAW_TRACEPOINT api. Auto-cleanup and multiple concurrent users are must-have features of a tracing api. For bpf raw tracepoints it looks like:

    // load bpf prog with BPF_PROG_TYPE_RAW_TRACEPOINT type
    prog_fd = bpf_prog_load(...);

    // receive anon_inode fd for given bpf_raw_tracepoint
    // and attach bpf program to it
    raw_tp_fd = bpf_raw_tracepoint_open("xdp_exception", prog_fd);

Ctrl-C of the tracing daemon or cmdline tool will automatically detach the bpf program, unload it and unregister the tracepoint probe. More details in patch 6.
Patch 7 - trivial support in libbpf
Patches 8, 9 - user space tests

samples/bpf/test_overhead performance on 1 cpu:

    tracepoint     base   kprobe+bpf  tracepoint+bpf  raw_tracepoint+bpf
    task_rename    1.1M   769K        947K            1.0M
    urandom_read   789K   697K        750K            755K
====================

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-03-28selftests/bpf: test for bpf_get_stackid() from raw tracepointsAlexei Starovoitov
Similar to the traditional tracepoint test, add a bpf_get_stackid() test from raw tracepoints, and reduce the verbosity of the existing stackmap test. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-03-28samples/bpf: raw tracepoint testAlexei Starovoitov
Add an empty raw_tracepoint bpf program to test overhead, similar to the kprobe and traditional tracepoint tests. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-03-28libbpf: add bpf_raw_tracepoint_open helperAlexei Starovoitov
Add a bpf_raw_tracepoint_open(const char *name, int prog_fd) API to libbpf. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-03-28bpf: introduce BPF_RAW_TRACEPOINTAlexei Starovoitov
Introduce the BPF_PROG_TYPE_RAW_TRACEPOINT bpf program type to access kernel internal arguments of the tracepoints in their raw form.

From the bpf program point of view, the access to the arguments looks like:

    struct bpf_raw_tracepoint_args {
           __u64 args[0];
    };

    int bpf_prog(struct bpf_raw_tracepoint_args *ctx)
    {
        // program can read args[N] where N depends on tracepoint
        // and is statically verified at program load+attach time
    }

The kprobe+bpf infrastructure allows programs to access function arguments. This feature allows programs to access raw tracepoint arguments. Similar to the proposed 'dynamic ftrace events', there are no abi guarantees about what the tracepoint arguments are and what their meaning is. The program needs to type-cast args properly and use the bpf_probe_read() helper to access struct fields when an argument is a pointer.

For every tracepoint a __bpf_trace_##call function is prepared. In assembler it looks like:

    (gdb) disassemble __bpf_trace_xdp_exception
    Dump of assembler code for function __bpf_trace_xdp_exception:
       0xffffffff81132080 <+0>:  mov    %ecx,%ecx
       0xffffffff81132082 <+2>:  jmpq   0xffffffff811231f0 <bpf_trace_run3>

where

    TRACE_EVENT(xdp_exception,
            TP_PROTO(const struct net_device *dev,
                     const struct bpf_prog *xdp, u32 act),

The above assembler snippet is casting the 32-bit 'act' field into 'u64' to pass into bpf_trace_run3(), while the 'dev' and 'xdp' args are passed as-is. All ~500 of the __bpf_trace_*() functions are only 5-10 bytes long and in total this approach adds 7k bytes to .text.

This approach gives the lowest possible overhead while calling trace_xdp_exception() from kernel C code and transitioning into bpf land. Since tracepoint+bpf are used at speeds of 1M+ events per second this is a valuable optimization.

The new BPF_RAW_TRACEPOINT_OPEN sys_bpf command is introduced that returns an anon_inode FD of a 'bpf-raw-tracepoint' object. The user space looks like:

    // load bpf prog with BPF_PROG_TYPE_RAW_TRACEPOINT type
    prog_fd = bpf_prog_load(...);
    // receive anon_inode fd for given bpf_raw_tracepoint with prog attached
    raw_tp_fd = bpf_raw_tracepoint_open("xdp_exception", prog_fd);

Ctrl-C of the tracing daemon or cmdline tool that uses this feature will automatically detach the bpf program, unload it and unregister the tracepoint probe.

On the kernel side the __bpf_raw_tp_map section of pointers to the tracepoint definition and to the __bpf_trace_*() probe function is used to find a tracepoint with the "xdp_exception" name and the corresponding __bpf_trace_xdp_exception() probe function, which are passed to tracepoint_probe_register() to connect the probe with the tracepoint.

Addition of bpf_raw_tracepoint doesn't interfere with the ftrace and perf tracepoint mechanisms. perf_event_open() can be used in parallel on the same tracepoint. Multiple bpf_raw_tracepoint_open("xdp_exception", prog_fd) are permitted, each with its own bpf program. The kernel will execute all tracepoint probes and all attached bpf programs.

In the future bpf_raw_tracepoints can be extended with query/introspection logic.

The __bpf_raw_tp_map section logic was contributed by Steven Rostedt. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
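For illustration, a minimal program of this type could look like the following sketch (libbpf/samples-style SEC() annotation assumed; as noted above, args must be cast, and pointer arguments read via bpf_probe_read()):

    // sketch: raw tracepoint program for xdp_exception, loaded as
    // BPF_PROG_TYPE_RAW_TRACEPOINT and attached via
    // bpf_raw_tracepoint_open("xdp_exception", prog_fd)
    #include <uapi/linux/bpf.h>
    #include "bpf_helpers.h"

    struct bpf_raw_tracepoint_args {
    	__u64 args[0];
    };

    SEC("raw_tracepoint/xdp_exception")
    int bpf_prog(struct bpf_raw_tracepoint_args *ctx)
    {
    	/* TP_PROTO(dev, xdp, act): args[0]=dev, args[1]=xdp, args[2]=act */
    	__u32 act = (__u32)ctx->args[2];
    	char fmt[] = "xdp_exception: act %u\n";

    	bpf_trace_printk(fmt, sizeof(fmt), act);
    	return 0;
    }

    char _license[] SEC("license") = "GPL";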
2018-03-28macro: introduce COUNT_ARGS() macroAlexei Starovoitov
Move the COUNT_ARGS() macro from apparmor to a generic header and extend it to count up to twelve. COUNT() was an alternative name for this logic, but it's used for a different purpose in many other places. Similarly for the CONCATENATE() macro. Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
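The technique is the usual reverse-numbered argument list; a sketch of what such macros look like (details may differ from the final header):

    /* pick the 14th argument; prepending the caller's arguments shifts
     * the correct count into that position */
    #define __COUNT_ARGS(_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11, _12, _n, X...) _n
    #define COUNT_ARGS(X...) __COUNT_ARGS(, ##X, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0)

    /* indirection so macro arguments get expanded before token pasting */
    #define __CONCAT(a, b) a ## b
    #define CONCATENATE(a, b) __CONCAT(a, b)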
2018-03-28net/wireless/iwlwifi: fix iwlwifi_dev_ucode_error tracepointAlexei Starovoitov
Fix the iwlwifi_dev_ucode_error tracepoint to pass a pointer to a table instead of all 17 arguments by value. dvm/main.c and mvm/utils.c have 'struct iwl_error_event_table' defined with very similar yet subtly different fields and offsets. The tracepoint is still common and uses the definition of 'struct iwl_error_event_table' from dvm/commands.h while copying fields. Long term this tracepoint probably should be split into two. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-03-28net/mac802154: disambiguate mac80211 vs mac802154 trace eventsAlexei Starovoitov
Two trace events are defined with the same name and both are unused. They conflict in an allyesconfig build. Rename one of them. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-03-28net/mediatek: disambiguate mt76 vs mt7601u trace eventsAlexei Starovoitov
Two trace events are defined with the same name and both are unused. They conflict in an allyesconfig build. Rename one of them. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-03-28treewide: remove large struct-pass-by-value from tracepoint argumentsAlexei Starovoitov
- fix trace_hfi1_ctxt_info() to pass a large struct by reference instead of by value
- convert 'type array[]' tracepoint arguments into 'type *array', since the compiler will warn that sizeof('type array[]') == sizeof('type *array') and the latter should be used instead

The CAST_TO_U64 macro in a later patch will enforce that tracepoint arguments can only be integers, pointers, or structures smaller than 8 bytes. Larger structures should be passed by reference.

Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-03-28KVM: nVMX: Optimization: Don't set KVM_REQ_EVENT when VMExit with nested_run_pendingLiran Alon
When a vCPU runs L2 and there is a pending event that requires an exit from L2 to L1 and nested_run_pending=1, vcpu_enter_guest() will request an immediate-exit from L2 (see req_immediate_exit). Since the handling of req_immediate_exit now also makes sure to set KVM_REQ_EVENT, there is no need to also set it in vmx_vcpu_run() when nested_run_pending=1. This optimizes the cases where VMRESUME was executed by L1 to enter L2 and there are no pending events that require exit from L2 to L1. Previously, this would have set KVM_REQ_EVENT unnecessarily. Signed-off-by: Liran Alon <liran.alon@oracle.com> Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com> Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2018-03-28KVM: nVMX: Require immediate-exit when event reinjected to L2 and L1 event pendingLiran Alon
In case L2 VMExits to L0 during event-delivery, VMCS02 is filled with IDT-vectoring-info which vmx_complete_interrupts() makes sure to reinject before the next resume of L2. While handling the VMExit in L0, an IPI could be sent by another L1 vCPU to the L1 vCPU which currently runs L2 and exited to L0.

When L0 reaches vcpu_enter_guest() and calls inject_pending_event(), it will note that a previous event was re-injected to L2 (by IDT-vectoring-info) and therefore won't check if there are pending L1 events which require exit from L2 to L1. Thus, L0 enters L2 without an immediate VMExit even though there are pending L1 events!

This commit fixes the issue by making sure to check for L1 pending events even if a previous event was reinjected to L2, and by bailing out from inject_pending_event() before evaluating a new pending event in case an event was already reinjected.

The bug was observed with the following setup:
* L0 is a 64CPU machine which runs KVM.
* L1 is a 16CPU machine which runs KVM.
* L0 & L1 run with APICv disabled. (Also reproduced with APICv enabled, but the info below is easier to analyze with APICv disabled.)
* L1 runs a 16CPU L2 Windows Server 2012 R2 guest.

During L2 boot, L1 hangs completely. Analyzing the hang reveals that one L1 vCPU is holding KVM's mmu_lock and is waiting forever on an IPI that it has sent to another L1 vCPU, and all other L1 vCPUs are currently attempting to grab mmu_lock. Therefore, all L1 vCPUs are stuck forever (as L1 runs with kernel-preemption disabled).

Observing /sys/kernel/debug/tracing/trace_pipe reveals the following series of events:

    (1) qemu-system-x86-19066 [030] kvm_nested_vmexit: rip: 0xfffff802c5dca82f
        reason: EPT_VIOLATION ext_inf1: 0x0000000000000182
        ext_inf2: 0x00000000800000d2 ext_int: 0x00000000 ext_int_err: 0x00000000
    (2) qemu-system-x86-19054 [028] kvm_apic_accept_irq: apicid f vec 252 (Fixed|edge)
    (3) qemu-system-x86-19066 [030] kvm_inj_virq: irq 210
    (4) qemu-system-x86-19066 [030] kvm_entry: vcpu 15
    (5) qemu-system-x86-19066 [030] kvm_exit: reason EPT_VIOLATION rip 0xffffe00069202690 info 83 0
    (6) qemu-system-x86-19066 [030] kvm_nested_vmexit: rip: 0xffffe00069202690
        reason: EPT_VIOLATION ext_inf1: 0x0000000000000083
        ext_inf2: 0x0000000000000000 ext_int: 0x00000000 ext_int_err: 0x00000000
    (7) qemu-system-x86-19066 [030] kvm_nested_vmexit_inject: reason: EPT_VIOLATION
        ext_inf1: 0x0000000000000083 ext_inf2: 0x0000000000000000
        ext_int: 0x00000000 ext_int_err: 0x00000000
    (8) qemu-system-x86-19066 [030] kvm_entry: vcpu 15

Which can be analyzed as follows:
(1) L2 VMExits to L0 on EPT_VIOLATION during delivery of vector 0xd2. Therefore, vmx_complete_interrupts() will set KVM_REQ_EVENT and reinject a pending-interrupt of 0xd2.
(2) L1 sends an IPI of vector 0xfc (CALL_FUNCTION_VECTOR) to destination vCPU 15. This sets the relevant bit in the LAPIC's IRR and sets KVM_REQ_EVENT.
(3) L0 reaches vcpu_enter_guest(), which calls inject_pending_event(), which notes that interrupt 0xd2 was reinjected and therefore calls vmx_inject_irq() and returns, without checking for pending L1 events! Note that at this point, KVM_REQ_EVENT was cleared by vcpu_enter_guest() before calling inject_pending_event().
(4) L0 resumes L2 without an immediate-exit even though there is a pending L1 event (the IPI pending in the LAPIC's IRR).

We have already reached the buggy scenario, but the events can be analyzed further:
(5+6) L2 VMExits to L0 on EPT_VIOLATION, this time not during event-delivery.
(7) L0 decides to forward the VMExit to L1 for further handling.
(8) L0 resumes into L1.

Note that because KVM_REQ_EVENT is cleared, the LAPIC's IRR is not examined and therefore the IPI is still not delivered into L1! Signed-off-by: Liran Alon <liran.alon@oracle.com> Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com> Reviewed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
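A heavily simplified sketch of the control flow this establishes in inject_pending_event() (the reinject_previous_event() helper is hypothetical; the real code open-codes the per-event reinjection):

    /* arch/x86/kvm/x86.c -- sketch, not the verbatim patch */
    static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
    {
    	int r;

    	/* Always evaluate pending L1 events first; this may request
    	 * an (immediate) exit from L2 to L1. */
    	if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events) {
    		r = kvm_x86_ops->check_nested_events(vcpu, req_int_win);
    		if (r != 0)
    			return r;
    	}

    	if (vcpu->arch.exception.injected || vcpu->arch.nmi_injected ||
    	    vcpu->arch.interrupt.injected) {
    		reinject_previous_event(vcpu);	/* hypothetical helper */
    		return 0;	/* bail out: no new pending event on top */
    	}

    	/* ... evaluate a new pending exception/NMI/interrupt here ... */
    	return 0;
    }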
2018-03-28KVM: x86: Fix misleading comments on handling pending exceptionsLiran Alon
The reason that exception.pending should block re-injection of an NMI/interrupt is not described correctly in the comment in the code. Instead, it describes why a pending exception should be injected before a pending NMI/interrupt. Therefore, move the currently present comment to the code block evaluating a new pending event, which explains why exception.pending is evaluated first. In addition, create a new comment describing that exception.pending blocks re-injection of an NMI/interrupt because the exception was queued by handling a vmexit which was due to NMI/interrupt delivery. Signed-off-by: Liran Alon <liran.alon@oracle.com> Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com> Signed-off-by: Krish Sadhukhan <krish.sadhukhan@orcle.com> [Used a comment from Sean J <sean.j.christopherson@intel.com>. - Radim] Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2018-03-28KVM: x86: Rename interrupt.pending to interrupt.injectedLiran Alon
For exception & NMI events, KVM code uses the following coding convention:
*) "pending" represents an event that should be injected to the guest at some point but its side-effects have not yet occurred.
*) "injected" represents an event whose side-effects have already occurred.

However, interrupts don't conform to this coding convention. All current code flows mark interrupt.pending when its side-effects have already taken place (for example, the bit moved from LAPIC IRR to ISR). Therefore, it makes sense to just rename interrupt.pending to interrupt.injected.

This change follows the logic of previous commit 664f8e26b00c ("KVM: X86: Fix loss of exception which has not yet been injected") which changed exceptions to follow this coding convention as well.

It is important to note that in case !lapic_in_kernel(vcpu), interrupt.pending usage was and still is incorrect. In this case, interrupt.pending can only be set using one of the following ioctls: KVM_INTERRUPT, KVM_SET_VCPU_EVENTS and KVM_SET_SREGS. Looking at how QEMU uses these ioctls, one can see that QEMU uses them either to re-set an "interrupt.pending" state it has received from KVM (via KVM_GET_VCPU_EVENTS interrupt.pending or via KVM_GET_SREGS interrupt_bitmap) or by dispatching a new interrupt from QEMU's emulated LAPIC, which resets the bit in IRR and sets the bit in ISR before sending the ioctl to KVM. So it seems that indeed "interrupt.pending" in this case is also supposed to represent "interrupt.injected".

However, kvm_cpu_has_interrupt() & kvm_cpu_has_injectable_intr() misuse the (now named) interrupt.injected in order to return whether there is a pending interrupt. This leads to nVMX/nSVM not being able to distinguish if it should exit from L2 to L1 on EXTERNAL_INTERRUPT on a pending interrupt or should re-inject an injected interrupt. Therefore, add a FIXME at these functions for handling this issue.

This patch introduces no semantic change. Signed-off-by: Liran Alon <liran.alon@oracle.com> Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com> Reviewed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2018-03-28KVM: VMX: No need to clear pending NMI/interrupt on inject realmode interruptLiran Alon
kvm_inject_realmode_interrupt() is called from one of the injection functions which write event-injection to the VMCS: vmx_queue_exception(), vmx_inject_irq() and vmx_inject_nmi(). All these functions are called just to cause an event-injection to the guest. They are not responsible for manipulating the event-pending flag. The only purpose of kvm_inject_realmode_interrupt() should be to emulate real-mode interrupt-injection. This was also incorrect when called from vmx_queue_exception(). Signed-off-by: Liran Alon <liran.alon@oracle.com> Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com> Reviewed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2018-03-28x86/kvm: use Enlightened VMCS when running on Hyper-VVitaly Kuznetsov
Enlightened VMCS is just a structure in memory; the main benefit besides avoiding somewhat slower VMREAD/VMWRITE is using the clean field mask: we tell the underlying hypervisor which fields were modified since VMEXIT so there's no need to inspect them all. A tight CPUID loop test shows significant speedup:

    Before: 18890 cycles
    After:   8304 cycles

A static key is used to avoid the performance penalty for non-Hyper-V deployments. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2018-03-28x86/hyper-v: detect nested featuresVitaly Kuznetsov
TLFS 5.0 says: "Support for an enlightened VMCS interface is reported with CPUID leaf 0x40000004. If an enlightened VMCS interface is supported, additional nested enlightenments may be discovered by reading the CPUID leaf 0x4000000A (see 2.4.11)." Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Michael Kelley <mikelley@microsoft.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2018-03-28x86/hyper-v: define struct hv_enlightened_vmcs and clean field bitsVitaly Kuznetsov
The definitions are according to the Hyper-V TLFS v5.0. KVM on Hyper-V will use these. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Michael Kelley <mikelley@microsoft.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2018-03-28x86/hyper-v: allocate and use Virtual Processor Assist PagesVitaly Kuznetsov
Virtual Processor Assist Pages usage allows us to do optimized EOI processing for the APIC, enables Enlightened VMCS support in KVM, and more. struct hv_vp_assist_page is defined according to the Hyper-V TLFS v5.0b. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2018-03-28x86/kvm: rename HV_X64_MSR_APIC_ASSIST_PAGE to HV_X64_MSR_VP_ASSIST_PAGELadi Prosek
The assist page has been used only for the paravirtual EOI so far, hence the "APIC" in the MSR name. Renaming to match the Hyper-V TLFS where it's called "Virtual VP Assist MSR". Signed-off-by: Ladi Prosek <lprosek@redhat.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2018-03-28x86/hyper-v: move definitions from TLFS to hyperv-tlfs.hVitaly Kuznetsov
mshyperv.h now only contains functions/variables we define in the kernel; all definitions from the TLFS should go to hyperv-tlfs.h. 'enum hv_cpuid_function' is removed as we already have this info in hyperv-tlfs.h; code in mshyperv.c is adjusted accordingly. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Michael Kelley <mikelley@microsoft.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2018-03-28x86/hyper-v: move hyperv.h out of uapiVitaly Kuznetsov
hyperv.h is not part of uapi; there are no (known) users outside of the kernel. We are making changes to this file to match the current Hyper-V Hypervisor Top-Level Functional Specification (TLFS, see: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/reference/tlfs) and we don't want to maintain backwards compatibility. Move the file, renaming it to hyperv-tlfs.h to avoid confusing it with mshyperv.h. In the future, all definitions from the TLFS should go to it and all kernel objects should go to mshyperv.h or include/linux/hyperv.h. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2018-03-28KVM: trivial documentation cleanupsAndrew Jones
Add missing entries to the index and ensure the entries are in alphabetical order. Also amd-memory-encryption.rst is an .rst not a .txt. Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2018-03-28KVM: X86: Fix setup the virt_spin_lock_key before static key get initializedWanpeng Li
    static_key_disable_cpuslocked(): static key 'virt_spin_lock_key+0x0/0x20' used before call to jump_label_init()
    WARNING: CPU: 0 PID: 0 at kernel/jump_label.c:161 static_key_disable_cpuslocked+0x61/0x80
    RIP: 0010:static_key_disable_cpuslocked+0x61/0x80
    Call Trace:
     static_key_disable+0x16/0x20
     start_kernel+0x192/0x4b3
     secondary_startup_64+0xa5/0xb0

Qspinlock will be chosen when dedicated pCPUs are available; however, the static virt_spin_lock_key is set in kvm_spinlock_init() before jump_label_init() has been called, which will result in a WARN(). This patch fixes it by delaying the virt_spin_lock_key setup to .smp_prepare_cpus(). Reported-by: Davidlohr Bueso <dbueso@suse.de> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Cc: Davidlohr Bueso <dbueso@suse.de> Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Fixes: b2798ba0b876 ("KVM: X86: Choose qspinlock when dedicated physical CPUs are available") Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2018-03-28tools/kvm_stat: Remove unused functionCole Robinson
Unused since it was added in 18e8f4100. Signed-off-by: Cole Robinson <crobinso@redhat.com> Reviewed-and-tested-by: Stefan Raspl <stefan.raspl@linux.vnet.ibm.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2018-03-28tools/kvm_stat: Don't use deprecated file()Cole Robinson
    $ python3 tools/kvm/kvm_stat/kvm_stat
    Traceback (most recent call last):
      File "tools/kvm/kvm_stat/kvm_stat", line 1668, in <module>
        main()
      File "tools/kvm/kvm_stat/kvm_stat", line 1639, in main
        assign_globals()
      File "tools/kvm/kvm_stat/kvm_stat", line 1618, in assign_globals
        for line in file('/proc/mounts'):
    NameError: name 'file' is not defined

open() is the python3 way, and works on python2.6+. Signed-off-by: Cole Robinson <crobinso@redhat.com> Reviewed-and-tested-by: Stefan Raspl <stefan.raspl@linux.vnet.ibm.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2018-03-28tools/kvm_stat: Fix python3 syntaxCole Robinson
    $ python3 tools/kvm/kvm_stat/kvm_stat
      File "tools/kvm/kvm_stat/kvm_stat", line 1137
        def sortkey((_k, v)):
                    ^
    SyntaxError: invalid syntax

Fix it in a way that's compatible with python2 and python3. Signed-off-by: Cole Robinson <crobinso@redhat.com> Tested-by: Stefan Raspl <stefan.raspl@linux.vnet.ibm.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2018-03-28KVM: SVM: Implement pause loop exit logic in SVMBabu Moger
Bring the PLE (pause loop exit) logic to the AMD SVM driver. While testing, we found this helping in situations where numerous pauses are generated. Without these patches we could see continuous VMEXITS due to pause interceptions. Tested on an AMD EPYC server with the boot parameter idle=poll on a VM with 32 vcpus to simulate extensive pause behaviour. Here are the VMEXITS in a 10 second interval:

    Pauses   810199   504
    Total    882184   325415

Signed-off-by: Babu Moger <babu.moger@amd.com> [Prevented the window from dropping below the initial value. - Radim] Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2018-03-28KVM: SVM: Add pause filter thresholdBabu Moger
This patch adds the support for the pause filtering threshold. This feature support is indicated by CPUID Fn8000_000A_EDX. See AMD APM Vol 2, Section 15.14.4 "Pause Intercept Filtering" for more details.

In this mode, a 16-bit pause filter threshold field is added in the VMCB. The threshold value is a cycle count that is used to reset the pause counter. As with simple pause filtering, VMRUN loads the pause count value from the VMCB into an internal counter. Then, on each pause instruction the hardware checks the elapsed number of cycles since the most recent pause instruction against the pause filter threshold. If the elapsed cycle count is greater than the pause filter threshold, then the internal pause count is reloaded from the VMCB and execution continues. If the elapsed cycle count is less than the pause filter threshold, then the internal pause count is decremented. If the count value is less than zero and pause intercept is enabled, a #VMEXIT is triggered.

If advanced pause filtering is supported and the pause filter threshold field is set to zero, the filter will operate in the simpler, count-only mode.

Signed-off-by: Babu Moger <babu.moger@amd.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
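As a sketch, the hardware behavior described above amounts to (pseudocode following the APM text, not kernel code; names illustrative):

    /* on each PAUSE executed by the guest, with threshold support */
    if (cycles_since_last_pause > vmcb->pause_filter_threshold)
    	pause_count = vmcb->pause_filter_count;	/* pauses are sparse: reload */
    else if (--pause_count < 0 && pause_intercept_enabled)
    	trigger_vmexit();	/* #VMEXIT: tight pause loop detected */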
2018-03-28KVM: VMX: Bring the common code to header fileBabu Moger
This patch brings some of the code from vmx to the x86.h header file. Now we can share this code between vmx and svm. Modified a couple of functions to make them common. Signed-off-by: Babu Moger <babu.moger@amd.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2018-03-28KVM: VMX: Remove ple_window_actual_maxBabu Moger
Get rid of ple_window_actual_max, because its benefits are really minuscule and the logic is complicated. The overflows (and underflows) are controlled in __ple_window_grow() and __ple_window_shrink() respectively. Suggested-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Babu Moger <babu.moger@amd.com> [Fixed potential wraparound and changed the max to UINT_MAX. - Radim] Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2018-03-28KVM: VMX: Fix the module parameters for vmxBabu Moger
The vmx module parameters are supposed to be unsigned variants. Also fixed checkpatch errors like the one below:

    WARNING: Symbolic permissions 'S_IRUGO' are not preferred. Consider using octal permissions '0444'.
    +module_param(ple_gap, uint, S_IRUGO);

Signed-off-by: Babu Moger <babu.moger@amd.com> [Expanded uint to unsigned int in code. - Radim] Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
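After such a change the declarations would look along these lines (a sketch; the default-value macro names are illustrative, not confirmed by the commit):

    /* arch/x86/kvm/vmx.c -- sketch */
    static unsigned int ple_gap = KVM_DEFAULT_PLE_GAP;	/* macro name assumed */
    module_param(ple_gap, uint, 0444);

    static unsigned int ple_window = KVM_DEFAULT_PLE_WINDOW;	/* macro name assumed */
    module_param(ple_window, uint, 0444);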
2018-03-28ARM: 8751/1: Add support for Cortex-R8 processorLuca Scalabrino
Cortex-R8 has identical initialisation requirements to Cortex-R7, so hook it up in proc-v7.S in the same way. Signed-off-by: Luca Scalabrino <luca.scalabrino@arm.com> Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-03-28ARM: 8749/1: Kconfig: Add ARCH_HAS_FORTIFY_SOURCEJinbum Park
CONFIG_FORTIFY_SOURCE detects various overflows at compile-time. (See 6974f0c4555e ("include/linux/string.h: add the option of fortified string.h functions").) ARCH_HAS_FORTIFY_SOURCE means that the architecture can be built and run with CONFIG_FORTIFY_SOURCE. Since ARM can be built and run with that flag like other architectures, select ARCH_HAS_FORTIFY_SOURCE by default. Acked-by: Kees Cook <keescook@chromium.org> Signed-off-by: Jinbum Park <jinb.park7@gmail.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-03-28f2fs: reserve bits for fs-verityEric Biggers
Reserve an F2FS feature flag and inode flag for fs-verity. This is an in-development feature that is planned to be discussed at LSF/MM 2018 [1]. It will provide file-based integrity and authenticity for read-only files. Most code will be in a filesystem-independent module, with smaller changes needed in individual filesystems that opt in to supporting the feature. An early prototype supporting F2FS is available [2]. Reserving the F2FS on-disk bits for fs-verity will prevent users of the prototype from conflicting with other new F2FS features.

Note that we're reserving the inode flag in f2fs_inode.i_advise, which isn't really appropriate since it's not a hint or advice. But ->i_advise is already being used to hold the 'encrypt' flag; and F2FS's ->i_flags uses the generic FS_* values, so it seems ->i_flags can't be used for an F2FS-specific flag without additional work to remove the assumption that ->i_flags uses the generic flags namespace.

[1] https://marc.info/?l=linux-fsdevel&m=151690752225644
[2] https://git.kernel.org/pub/scm/linux/kernel/git/mhalcrow/linux.git/log/?h=fs-verity-dev

Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
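The reservation itself is just two defines; a sketch in which both bit values are assumptions (check fs/f2fs/f2fs.h for the real ones):

    /* fs/f2fs/f2fs.h -- sketch; both bit values are assumptions */
    #define F2FS_FEATURE_VERITY	0x0400	/* reserved for fs-verity */

    /* reserved bit in f2fs_inode.i_advise, next to the 'encrypt' bit */
    #define FADVISE_VERITY_BIT	0x40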
2018-03-28iwlwifi: wrt: add fw force restart via triggersShahar S Matityahu
We can set triggers that cause a debug data collection when something of interest happens (e.g. when too many probes are lost consecutively). Normally, these triggers don't cause the FW to be restarted, but in some cases that may be desired, so we can recover from the problem. To support this, add a flag that indicates that the FW should be restarted when the trigger fires. Signed-off-by: Shahar S Matityahu <shahar.s.matityahu@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2018-03-28iwlwifi: mvm: save low latency causes in an enumSara Sharon
Currently we have a boolean variable for each cause. This costs space, and requires checking each one separately when determining low latency. Since we have another cause incoming, convert it to an enum. While at it, move the retrieval of the previous value and the assignment of the new value inside iwl_mvm_update_low_latency, saving each caller the need to do it separately. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2018-03-28iwlwifi: bump the max API version for 9000 and 22000 devicesEmmanuel Grumbach
We are now ready to load 38.ucode Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2018-03-28iwlwifi: api: Add geographic profile information to MCC_UPDATE_CMDHaim Dreyfuss
Some geographic profiles require specific handling. For example, the ETSI profile requires special channel access handling. Add geographic profile information to the MCC_UPDATE response to allow this. Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>