path: root/include
2017-12-18  media: media-entity.h: add kernel-doc markups for nested structs  [Mauro Carvalho Chehab]
Now that nested structs are parsed by kernel-doc, add markups to them. Acked-by: Sakari Ailus <sakari.ailus@linux.intel.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
2017-12-18  media: videobuf2-core: improve kernel-doc markups  [Mauro Carvalho Chehab]
Now that nested structs are supported, change the documentation to use them. While here, add cross-references and use monospaced fonts where pertinent, using the right markup tags. Acked-by: Sakari Ailus <sakari.ailus@linux.intel.com> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
2017-12-18  media: v4l2-ctrls: document nested members of structs  [Mauro Carvalho Chehab]
There are a few nested members in v4l2-ctrls.h. Now that kernel-doc supports nested structs, document them. Acked-by: Sakari Ailus <sakari.ailus@linux.intel.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
2017-12-18  media: v4l2-fwnode.h: better describe bus union at fwnode endpoint struct  [Mauro Carvalho Chehab]
Better document the bus union at struct v4l2_fwnode_endpoint. Acked-by: Sakari Ailus <sakari.ailus@linux.intel.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
2017-12-18  media: rc-core.h: minor adjustments at rc_driver_type doc  [Mauro Carvalho Chehab]
The description of this enum doesn't match what it actually represents. Adjust it. Acked-by: Sakari Ailus <sakari.ailus@linux.intel.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
2017-12-18  virt: Add vboxguest VMMDEV communication code  [Hans de Goede]
This commit adds a header describing the hardware interface for the Virtual Box Guest PCI device used in Virtual Box virtual machines, plus utility functions for talking to the Virtual Box hypervisor over this interface. These utility functions will be used both by the vboxguest driver for the PCI device, which offers the /dev/vboxguest ioctl API, and by the vboxfs driver, which offers folder sharing support. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Reviewed-by: Larry Finger <Larry.Finger@lwfinger.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-12-18  virt: Add vboxguest driver for Virtual Box Guest integration UAPI  [Hans de Goede]
This commit adds the headers describing the ioctl API for the /dev/vboxguest device used by the Virtual Box Guest Additions in Virtual Box virtual machines. The driver providing the /dev/vboxguest device will allow Virtual Box Guest Additions features such as copy-and-paste, seamless mode and OpenGL pass-through. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Reviewed-by: Larry Finger <Larry.Finger@lwfinger.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-12-18  media: v4l2-dv-timings.h: convert comment into kernel-doc markup  [Mauro Carvalho Chehab]
The can_reduce_fps() is already documented, but it is not using the kernel-doc markup. Convert it, in order to generate documentation from it. Acked-by: Sakari Ailus <sakari.ailus@linux.intel.com> Acked-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
2017-12-18  media: v4l2-common.h: document helper functions  [Mauro Carvalho Chehab]
There are several helper functions that aren't documented. Document them. Acked-by: Sakari Ailus <sakari.ailus@linux.intel.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
2017-12-18  media: v4l2-common: get rid of struct v4l2_discrete_probe  [Mauro Carvalho Chehab]
This struct is there just to store two arguments of v4l2_find_nearest_format(). The other two arguments are passed as parameters. IMHO, there isn't much sense in doing that, and it would just add one more struct to document ;) So, let's get rid of the struct, passing the parameters directly. Acked-by: Sakari Ailus <sakari.ailus@linux.intel.com> Acked-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
2017-12-18  media: v4l2-common: get rid of v4l2_routing dead struct  [Mauro Carvalho Chehab]
This struct is not used anymore. Get rid of it and update the documentation about what should still be converted. Acked-by: Sakari Ailus <sakari.ailus@linux.intel.com> Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Acked-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
2017-12-18  media: tuner-types: add kernel-doc markups for struct tunertype  [Mauro Carvalho Chehab]
This struct is lacking documentation. Add it. Acked-by: Sakari Ailus <sakari.ailus@linux.intel.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
2017-12-18  media: frontend: describe nested structs  [Mauro Carvalho Chehab]
There are some nested structs in this header which aren't properly documented. This should solve some warnings after the addition of a kernel-doc patch adding support for nested structs/unions. Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
2017-12-18  Merge tag 'kvm-arm-fixes-for-v4.15-2' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD  [Paolo Bonzini]
KVM/ARM Fixes for v4.15, Round 2. Fixes: - A bug in our handling of SPE state for non-vhe systems - A bug that causes hyp unmapping to go off limits and crash the system on shutdown - Three timer fixes that were introduced as part of the timer optimizations for v4.15
2017-12-18  KVM: Fix stack-out-of-bounds read in write_mmio  [Wanpeng Li]
Reported by syzkaller: BUG: KASAN: stack-out-of-bounds in write_mmio+0x11e/0x270 [kvm] Read of size 8 at addr ffff8803259df7f8 by task syz-executor/32298 CPU: 6 PID: 32298 Comm: syz-executor Tainted: G OE 4.15.0-rc2+ #18 Hardware name: LENOVO ThinkCentre M8500t-N000/SHARKBAY, BIOS FBKTC1AUS 02/16/2016 Call Trace: dump_stack+0xab/0xe1 print_address_description+0x6b/0x290 kasan_report+0x28a/0x370 write_mmio+0x11e/0x270 [kvm] emulator_read_write_onepage+0x311/0x600 [kvm] emulator_read_write+0xef/0x240 [kvm] emulator_fix_hypercall+0x105/0x150 [kvm] em_hypercall+0x2b/0x80 [kvm] x86_emulate_insn+0x2b1/0x1640 [kvm] x86_emulate_instruction+0x39a/0xb90 [kvm] handle_exception+0x1b4/0x4d0 [kvm_intel] vcpu_enter_guest+0x15a0/0x2640 [kvm] kvm_arch_vcpu_ioctl_run+0x549/0x7d0 [kvm] kvm_vcpu_ioctl+0x479/0x880 [kvm] do_vfs_ioctl+0x142/0x9a0 SyS_ioctl+0x74/0x80 entry_SYSCALL_64_fastpath+0x23/0x9a The path of patched vmmcall will patch 3 bytes opcode 0F 01 C1(vmcall) to the guest memory, however, write_mmio tracepoint always prints 8 bytes through *(u64 *)val since kvm splits the mmio access into 8 bytes. This leaks 5 bytes from the kernel stack (CVE-2017-17741). This patch fixes it by just accessing the bytes which we operate on. Before patch: syz-executor-5567 [007] .... 51370.561696: kvm_mmio: mmio write len 3 gpa 0x10 val 0x1ffff10077c1010f After patch: syz-executor-13416 [002] .... 51302.299573: kvm_mmio: mmio write len 3 gpa 0x10 val 0xc1010f Reported-by: Dmitry Vyukov <dvyukov@google.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Tested-by: Marc Zyngier <marc.zyngier@arm.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
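The fix amounts to never dereferencing more than the MMIO access length when filling in the trace record; a rough, illustrative sketch of the idea (mmio_trace_value() is a made-up helper name here, not the literal patch):

    /* Copy at most 'len' bytes into a zeroed 64-bit value before tracing,
     * so the tracepoint never reads past the data the emulator filled in. */
    static inline u64 mmio_trace_value(const void *val, unsigned int len)
    {
            u64 data = 0;

            memcpy(&data, val, min_t(unsigned int, len, sizeof(data)));
            return data;
    }

    /* before: trace_kvm_mmio(KVM_TRACE_MMIO_WRITE, len, gpa, *(u64 *)val); */
    /* after:  */
    trace_kvm_mmio(KVM_TRACE_MMIO_WRITE, len, gpa, mmio_trace_value(val, len));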
2017-12-18  KVM: arm/arm64: timer: Don't set irq as forwarded if no usable GIC  [Marc Zyngier]
If we don't have a usable GIC, do not try to set the vcpu affinity as this is guaranteed to fail. Reported-by: Andre Przywara <andre.przywara@arm.com> Reviewed-by: Andre Przywara <andre.przywara@arm.com> Tested-by: Andre Przywara <andre.przywara@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-12-18  mtd: cfi: convert inline functions to macros  [Arnd Bergmann]
The map_word_() functions, dating back to linux-2.6.8, try to perform bitwise operations on a 'map_word' structure. This may have worked with compilers that were current then (gcc-3.4 or earlier), but they end up being rather inefficient on any version I could try now (gcc-4.4 or higher). Specifically we hit a problem analyzed in gcc PR81715 where we fail to reuse the stack space for local variables. This can be seen immediately in the stack consumption for cfi_staa_erase_varsize() and other functions that (with CONFIG_KASAN) can be up to 2200 bytes. Changing the inline functions into macros brings this down to 1280 bytes. Without KASAN, the same problem exists, but the stack consumption is lower to start with; my patch shrinks it from 920 to 496 bytes with arm-linux-gnueabi-gcc-5.4, and saves around 1KB in .text size for cfi_cmdset_0020.c, as it avoids copying map_word structures for each call to one of these helpers. With the latest gcc-8 snapshot, the problem is fixed in upstream gcc, but nobody uses that yet, so we should still work around it in mainline kernels and probably backport the workaround to stable kernels as well. We had a couple of other functions that suffered from the same gcc bug, and all of those had a simpler workaround involving dummy variables in the inline function. Unfortunately that did not work here; the macro hack was the best I could come up with. It would also be helpful to have someone do a little performance testing on the patch, to see how much it helps in terms of CPU utilization. Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 Cc: stable@vger.kernel.org Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Richard Weinberger <richard@nod.at> Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
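For illustration, the pattern being applied looks roughly like this (simplified sketch; the real helpers in include/linux/mtd/map.h handle more cases):

    /* Before: an inline function, which pre-gcc-8 compilers translate with
     * a separate stack slot for every local map_word temporary. */
    static inline map_word map_word_and(struct map_info *map,
                                        map_word val1, map_word val2)
    {
            map_word r;
            int i;

            for (i = 0; i < map_words(map); i++)
                    r.x[i] = val1.x[i] & val2.x[i];
            return r;
    }

    /* After: the same operation as a statement-expression macro, letting
     * the compiler reuse the stack space of the temporaries. */
    #define map_word_and(map, val1, val2)                           \
    ({                                                              \
            map_word __r;                                           \
            int __i;                                                \
                                                                    \
            for (__i = 0; __i < map_words(map); __i++)              \
                    __r.x[__i] = (val1).x[__i] & (val2).x[__i];     \
            __r;                                                    \
    })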
2017-12-18  Merge 4.15-rc4 into staging-next  [Greg Kroah-Hartman]
We want the staging fixes in here as well. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-12-18  Merge 4.15-rc4 into usb-next  [Greg Kroah-Hartman]
We want the USB fixes in here as well. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-12-18  Merge tag 'drm-misc-next-2017-12-14' of git://anongit.freedesktop.org/drm/drm-misc into drm-next  [Dave Airlie]
drm-misc-next for 4.16: Cross-subsystem Changes: - Documentation for amlogic dt dt-bindings Core Changes: - Update edid-derived drm_display_info fields at edid property set Driver Changes: - A bunch of clean up from Noralf, including the last patches to reduce fbdev emulation footprint. * tag 'drm-misc-next-2017-12-14' of git://anongit.freedesktop.org/drm/drm-misc: (30 commits) drm/atomic-helper: Make zpos property kerneldoc less misleading drm: Update edid-derived drm_display_info fields at edid property set [v2] MAINTAINERS: Remove Jani as drm-misc co-maintainer drm/tinydrm: Use drm_fb_cma_fbdev_init_with_funcs/fini() drm/arm/mali: Use drm_fb_cma_fbdev_init/fini() drm/zte: Use drm_fb_cma_fbdev_init/fini() drm/vc4: Use drm_fb_cma_fbdev_init/fini() drm/tve200: Use drm_fb_cma_fbdev_init/fini() drm/tilcdc: Use drm_fb_cma_fbdev_init/fini() drm/sun4i: Use drm_fb_cma_fbdev_init/fini() drm/stm: Use drm_fb_cma_fbdev_init/fini() drm/sti: Use drm_fb_cma_fbdev_init/fini() drm/pl111: Use drm_fb_cma_fbdev_init/fini() drm/imx: Use drm_fb_cma_fbdev_init/fini() drm/atmel-hlcdc: Use drm_fb_cma_fbdev_init/fini() drm/cma-helper: Add drm_fb_cma_fbdev_init/fini() drm/gem-fb-helper: drm_gem_fbdev_fb_create() make funcs optional drm/tegra: Use drm_fb_helper_lastclose() and _poll_changed() drm/rockchip: Use drm_fb_helper_lastclose() and _poll_changed() drm/omap: Use drm_fb_helper_lastclose() and _poll_changed() ...
2017-12-17  ARM: 8726/1: B15: Add CPU hotplug awareness  [Florian Fainelli]
The Broadcom Brahma-B15 readahead cache needs to be disabled and re-enabled across a CPU hotplug (on unplug and replug, respectively). If we did not do this, CPU hotplug would occasionally fail with random crashes when a given CPU exits the coherency domain while the RAC is still enabled, as it would get stale data from the RAC. In order to avoid adding any specific B15 readahead-cache awareness to arch/arm/mach-bcm/hotplug-brcmstb.c we use a CPU hotplug state machine, which allows us to catch CPU hotplug events and disable/flush and re-enable the RAC accordingly. Signed-off-by: Alamy Liu <alamyliu@broadcom.com> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
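A minimal sketch of the hotplug hook registration described above (the state name and callback names are illustrative, not the driver's actual identifiers):

    #include <linux/cpuhotplug.h>

    static int b15_rac_cpu_online(unsigned int cpu)
    {
            /* re-enable the readahead cache once the CPU rejoins the
             * coherency domain */
            return 0;
    }

    static int b15_rac_cpu_offline(unsigned int cpu)
    {
            /* flush and disable the readahead cache before the CPU leaves
             * the coherency domain */
            return 0;
    }

    static int __init b15_rac_hotplug_init(void)
    {
            int ret;

            ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
                                    "arm/cache-b15-rac:online",
                                    b15_rac_cpu_online, b15_rac_cpu_offline);
            return ret < 0 ? ret : 0;
    }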
2017-12-17  Merge branch 'WIP.x86-pti.base.prep-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  [Linus Torvalds]
Pull Page Table Isolation (PTI) preparatory tree from Ingo Molnar: "This does a rename to free up linux/pti.h to be used by the upcoming page table isolation feature" * 'WIP.x86-pti.base.prep-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: drivers/misc/intel/pti: Rename the header file to free up the namespace
2017-12-17  bpf: x64: add JIT support for multi-function programs  [Alexei Starovoitov]
A typical JIT does several passes over bpf instructions to compute total size and relative offsets of jumps and calls. With multiple bpf functions calling each other, all relative calls will have invalid offsets initially, therefore we need an additional last pass over the program to emit calls with correct offsets. For example, in the case of three bpf functions:
  main:
    call foo
    call bpf_map_lookup
    exit
  foo:
    call bar
    exit
  bar:
    exit
We will call bpf_int_jit_compile() independently for main(), foo() and bar(). The x64 JIT typically does 4-5 passes to converge. After these initial passes the image for these 3 functions will be good except for call targets, since the start addresses of foo() and bar() are unknown when we were JITing main() (note that the call to bpf_map_lookup will be resolved properly during the initial passes). Once the start addresses of the 3 functions are known, we patch call_insn->imm to point to the right functions and call bpf_int_jit_compile() again, which needs only one pass. Additional safety checks are done to make sure this last pass doesn't produce an image that is larger or smaller than the previous pass. When constant blinding is on, it's applied to all functions at the first pass, since doing it once again at the last pass can change the size of the JITed code. Tested on x64 and arm64 hw with JIT on/off, blinding on/off. x64 jits bpf-to-bpf calls correctly while arm64 falls back to the interpreter. All other JITs that support normal BPF_CALL will behave the same way, since a bpf-to-bpf call is equivalent to a bpf-to-kernel call from the JIT's point of view. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-17  bpf: fix net.core.bpf_jit_enable race  [Alexei Starovoitov]
The global bpf_jit_enable variable is tested multiple times in JITs, blinding and the verifier core. A malicious root user can try to toggle it while loading programs. This race condition was accounted for and there should be no issues, but it's safer to avoid it altogether. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-17  bpf: add support for bpf_call to interpreter  [Alexei Starovoitov]
Though bpf_call is still the same call instruction, and the calling convention for 'bpf to bpf' and 'bpf to helper' calls is the same, the interpreter has to operate on 'struct bpf_insn *'. To distinguish these two cases, add a kernel-internal opcode and mark call insns with it. This opcode is seen by the interpreter only; JITs will never see it. Also add a tiny bit of debug code to aid interpreter debugging. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-17  bpf: teach verifier to recognize zero initialized stack  [Alexei Starovoitov]
Programs with function calls often pass various pointers via the stack. When all calls are inlined, llvm flattens stack accesses and optimizes away extra branches. When functions are not inlined, it becomes the job of the verifier to recognize a zero-initialized stack to avoid exploring paths that the program will not take. The following program would fail otherwise:
  ptr = &buffer_on_stack;
  *ptr = 0;
  ...
  func_call(.., ptr, ...) {
      if (..)
          *ptr = bpf_map_lookup();
  }
  ...
  if (*ptr != 0) {
      // Access (*ptr)->field is valid.
      // Without stack_zero tracking such (*ptr)->field access
      // will be rejected
  }
Since stack slots are no longer uniformly invalid | spill | misc, add liveness marking to all slots, but do it in 8-byte chunks. So if nothing was read or written in the [fp-16, fp-9] range, it will be marked as LIVE_NONE. If any byte in that range was read, it will be marked LIVE_READ and the stacksafe() check will perform byte-by-byte verification. If all bytes in the range were written, the slot will be marked as LIVE_WRITTEN. This significantly speeds up state equality comparison and reduces the total number of states processed.
                         before    after
  bpf_lb-DLB_L3.o          2051     2003
  bpf_lb-DLB_L4.o          3287     3164
  bpf_lb-DUNKNOWN.o        1080     1080
  bpf_lxc-DDROP_ALL.o     24980    12361
  bpf_lxc-DUNKNOWN.o      34308    16605
  bpf_netdev.o            15404    10962
  bpf_overlay.o            7191     6679
Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2017-12-17  bpf: introduce function calls (verification)  [Alexei Starovoitov]
Allow arbitrary function calls from one bpf function to another bpf function. To recognize such a set of bpf functions the verifier does the following:
1. runs control flow analysis to detect function boundaries
2. proceeds with verification of all functions starting from the main (root) function. It recognizes that the stack of the caller can be accessed by the callee (if the caller passed a pointer to its stack to the callee) and the callee can store map_value and other pointers into the stack of the caller
3. keeps track of the stack_depth of each function to make sure that the total stack depth is still less than 512 bytes
4. disallows pointers to the callee stack to be stored into the caller stack, since they will be invalid as soon as the callee returns
5. to reuse all of the existing state-pruning logic, each function call is considered an independent call from the verifier's point of view. The verifier effectively pretends to inline every function call it sees. It stores the callsite instruction index as part of the state to make sure that two calls to the same callee from two different places in the caller will be different from the state-pruning point of view
6. more safety checks are added to liveness analysis
Implementation details:
. struct bpf_verifier_state now consists of all stack frames that led to this function
. struct bpf_func_state represents one stack frame. It consists of the registers in the given frame and its stack
. propagate_liveness() logic had a premature optimization where mark_reg_read() and mark_stack_slot_read() were manually inlined with a loop iterating over parents for each register or stack slot. Undo this optimization to reuse the more complex mark_*_read() logic
. skip_callee() logic is not necessary from a safety point of view, but without it mark_*_read() markings become too conservative, since after returning from the function call a read of r6-r9 would incorrectly propagate the read marks into the callee, causing inefficient pruning later
. mark_*_read() logic is now aware of control flow, which makes it more complex. In the future the plan is to rewrite liveness to be hierarchical, so that liveness can be done within a basic block only and control flow will be responsible for propagation of liveness information along the cfg and between calls
. tail_calls and ld_abs insns are not allowed in programs with bpf-to-bpf calls
. returning stack pointers to the caller or storing them into the stack frame of the caller is not allowed
Testing:
. no difference in cilium processed_insn numbers
. a large number of tests follows in the next patches
Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
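A condensed sketch of the state layout the implementation details describe (simplified; the real definitions in bpf_verifier.h carry more fields):

    /* one stack frame: the registers and stack of a single bpf function */
    struct bpf_func_state {
            struct bpf_reg_state regs[MAX_BPF_REG];
            int callsite;   /* insn index of the call that entered this frame */
            u32 frameno;    /* position of this frame in the call chain */
            /* ... per-slot stack state, spilled registers, ... */
    };

    /* verifier state: every frame that led to the current instruction */
    struct bpf_verifier_state {
            struct bpf_func_state *frame[MAX_CALL_FRAMES];
            u32 curframe;   /* index of the innermost (active) frame */
    };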
2017-12-17  bpf: introduce function calls (function boundaries)  [Alexei Starovoitov]
Allow arbitrary function calls from one bpf function to another bpf function. Since the beginning of bpf, all bpf programs were represented as a single function and program authors were forced to use always_inline for all functions in their C code. That was causing llvm to unnecessarily inflate the code size and forcing developers to move code to header files with little code reuse. With a bit of additional complexity, teach the verifier to recognize arbitrary function calls from one bpf function to another, as long as all of the functions are presented to the verifier as a single bpf program. New program layout:
  r6 = r1     // some code
  ..
  r1 = ..     // arg1
  r2 = ..     // arg2
  call pc+1   // function call pc-relative
  exit
  .. = r1     // access arg1
  .. = r2     // access arg2
  ..
  call pc+20  // second level of function call
  ...
It allows for better optimized code and finally allows the introduction of core bpf libraries that can be reused in different projects, since programs are no longer limited to a single elf file. With function calls, bpf can be compiled into multiple .o files. This patch is the first step. It detects programs that contain multiple functions and checks that calls between them are valid. It splits the sequence of bpf instructions (one program) into a set of bpf functions that call each other. Only calls to known functions are allowed. In the future the verifier may allow calls to unresolved functions and will do dynamic linking. This logic supports statically linked bpf functions only. Such function boundary detection could have been done as part of control flow graph building in check_cfg(), but it's cleaner to separate function boundary detection from control flow checks within a subprogram (function) into logically independent steps. Follow-up patches may split check_cfg() further, but not check_subprogs(). Only allow bpf-to-bpf calls for root and for non-hw-offloaded programs. These restrictions can be relaxed in the future. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
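In C terms, this lets a program such as the following hypothetical example (map, section and function names are made up) be compiled as two real functions connected by a pc-relative call instead of forcing __always_inline:

    #include <linux/bpf.h>
    #include "bpf_helpers.h"  /* SEC(), bpf_map_lookup_elem(): libbpf/samples helper header */

    struct bpf_map_def SEC("maps") counters = {
            .type        = BPF_MAP_TYPE_ARRAY,
            .key_size    = sizeof(__u32),
            .value_size  = sizeof(__u64),
            .max_entries = 1,
    };

    /* compiled as its own bpf function and reached via "call pc+N" */
    static __attribute__((noinline)) int count_packet(void)
    {
            __u32 key = 0;
            __u64 *val = bpf_map_lookup_elem(&counters, &key);

            if (!val)
                    return 0;
            __sync_fetch_and_add(val, 1);
            return 1;
    }

    SEC("xdp")
    int xdp_count(struct xdp_md *ctx)
    {
            return count_packet() ? XDP_PASS : XDP_DROP;
    }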
2017-12-17  Add primitives for manipulating bitfields both in host- and fixed-endian.  [Al Viro]
The following primitives are defined in linux/bitfield.h: * u32 le32_get_bits(__le32 val, u32 field) extracts the contents of the bitfield specified by @field in little-endian 32bit object @val and converts it to host-endian. * void le32p_replace_bits(__le32 *p, u32 v, u32 field) replaces the contents of the bitfield specified by @field in little-endian 32bit object pointed to by @p with the value of @v. New value is given in host-endian and stored as little-endian. * __le32 le32_replace_bits(__le32 old, u32 v, u32 field) is equivalent to ({__le32 tmp = old; le32p_replace_bits(&tmp, v, field); tmp;}) In other words, instead of modifying an object in memory, it takes the initial value and returns the modified one. * __le32 le32_encode_bits(u32 v, u32 field) is equivalent to le32_replace_bits(0, v, field). In other words, it returns a little-endian 32bit object with the bitfield specified by @field containing the value of @v and all bits outside that bitfield being zero. Such set of helpers is defined for each of little-, big- and host-endian types; e.g. u64_get_bits(val, field) will return the contents of the bitfield specified by @field in host-endian 64bit object @val, etc. Of course, for host-endian no conversion is involved. Fields to access are specified as GENMASK() values - an N-bit field starting at bit #M is encoded as GENMASK(M + N - 1, M). Note that bit numbers refer to endianness of the object we are working with - e.g. GENMASK(11, 0) in __be16 refers to the second byte and the lower 4 bits of the first byte. In __le16 it would refer to the first byte and the lower 4 bits of the second byte, etc. Field specification must be a constant; __builtin_constant_p() doesn't have to be true for it, but compiler must be able to evaluate it at build time. If it cannot or if the value does not encode any bitfield, the build will fail. If the value being stored in a bitfield is a constant that does not fit into that bitfield, a warning will be generated at compile time. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
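A short usage sketch of these helpers (the DESC_* field layout is invented for illustration; the functions are the ones described above):

    #include <linux/bitfield.h>
    #include <linux/types.h>

    #define DESC_LEN    GENMASK(11, 0)   /* 12-bit length field, bits 0..11 */
    #define DESC_TYPE   GENMASK(15, 12)  /*  4-bit type field,  bits 12..15 */

    static void bitfield_demo(__le32 *descp, u32 new_len)
    {
            u32 len  = le32_get_bits(*descp, DESC_LEN);
            u32 type = le32_get_bits(*descp, DESC_TYPE);

            /* rewrite only the length field in place, preserving other bits */
            le32p_replace_bits(descp, new_len, DESC_LEN);

            /* build a fresh little-endian word with only the type field set */
            __le32 fresh = le32_encode_bits(type, DESC_TYPE);

            (void)len;
            (void)fresh;
    }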
2017-12-17  locking/barriers: Convert users of lockless_dereference() to READ_ONCE()  [Will Deacon]
[ Note, this is a Git cherry-pick of the following commit: 506458efaf15 ("locking/barriers: Convert users of lockless_dereference() to READ_ONCE()") ... for easier x86 PTI code testing and back-porting. ] READ_ONCE() now has an implicit smp_read_barrier_depends() call, so it can be used instead of lockless_dereference() without any change in semantics. Signed-off-by: Will Deacon <will.deacon@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1508840570-22169-4-git-send-email-will.deacon@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
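A typical conversion site then looks like this (illustrative diff; 'gp' and 'p' are placeholder names for an RCU-protected pointer and its local copy):

    -       struct foo *p = lockless_dereference(gp);
    +       struct foo *p = READ_ONCE(gp);

            /* READ_ONCE() now includes smp_read_barrier_depends(), so later
             * dependent loads such as p->data stay correctly ordered on Alpha
             * and the two forms are semantically equivalent. */
            val = p->data;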
2017-12-17  locking/barriers: Add implicit smp_read_barrier_depends() to READ_ONCE()  [Will Deacon]
[ Note, this is a Git cherry-pick of the following commit: 76ebbe78f739 ("locking/barriers: Add implicit smp_read_barrier_depends() to READ_ONCE()") ... for easier x86 PTI code testing and back-porting. ] In preparation for the removal of lockless_dereference(), which is the same as READ_ONCE() on all architectures other than Alpha, add an implicit smp_read_barrier_depends() to READ_ONCE() so that it can be used to head dependency chains on all architectures. Signed-off-by: Will Deacon <will.deacon@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1508840570-22169-3-git-send-email-will.deacon@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-12-17  Merge commit 'upstream-x86-virt' into WIP.x86/mm  [Ingo Molnar]
Merge a minimal set of virt cleanups, for a base for the MM isolation patches. Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-12-17  Merge branch 'upstream-x86-selftests' into WIP.x86/pti.base  [Ingo Molnar]
Conflicts: arch/x86/kernel/cpu/Makefile Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-12-17  Merge commit 'upstream-x86-entry' into WIP.x86/mm  [Ingo Molnar]
Pull in a minimal set of v4.15 entry code changes, for a base for the MM isolation patches. Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-12-17  drivers/misc/intel/pti: Rename the header file to free up the namespace  [Ingo Molnar]
We'd like to use the 'PTI' acronym for 'Page Table Isolation' - free up the namespace by renaming the <linux/pti.h> driver header to <linux/intel-pti.h>. (Also standardize the header guard name while at it.) Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: J Freyensee <james_p_freyensee@linux.intel.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-12-16  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  [David S. Miller]
Three sets of overlapping changes, two in the packet scheduler and one in the meson-gxl PHY driver. Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-16  dm: improve performance by moving dm_io structure to per-bio-data  [Mike Snitzer]
Eliminates the need for a separate mempool to allocate 'struct dm_io' objects from. As such, it saves an extra mempool allocation for each original bio that DM core is issued. This complicates the per-bio-data accessor functions by needing to conditionally add extra padding to get to a target's per-bio-data. But in the end this provides a decent performance improvement for all bio-based DM devices. On an NVMe-loop based testbed to a ramdisk (~3100 MB/s): bio-based DM linear performance improved by 2% (went from 2665 to 2777 MB/s). Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-12-16  mtd: nand: add ->exec_op() implementation  [Miquel Raynal]
Introduce a new interface to instruct NAND controllers to send specific NAND operations. The new interface takes the form of a single method called ->exec_op(). This method is designed to replace the ->cmd_ctrl(), ->cmdfunc() and ->read/write_byte/word/buf() hooks. ->exec_op() is passed a set of instructions describing the operation to execute. Each instruction has a type (ADDR, CMD, DATA, WAITRDY) and a delay. The delay is here to help simple controllers wait enough time between each instruction; advanced controllers with integrated timings control can ignore these delays. Controllers that natively support complex operations (operations formed of several instructions) can use the NAND op parser infrastructure. This infrastructure allows controller drivers to describe the sequences of instructions they support (called nand_op_pattern) and a hook for each of these supported sequences. The core then tries to find the best match for a given NAND operation and calls the associated hook. Various other helpers are also added to ease the writing of NAND controller drivers. This new interface should ease support of vendor-specific operations, in that NAND manufacturer drivers now have a way to check whether the controller they are connected to supports a specific operation, and complain or refuse to probe the NAND chip when that's not the case. Suggested-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Miquel Raynal <miquel.raynal@free-electrons.com> Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
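A hedged sketch of how a simple operation might be expressed with the new interface; the macro and member names follow the description above but should be treated as approximations of the actual rawnand.h API rather than verified signatures:

    /* Hypothetical READ ID sequence: one command cycle, one address cycle,
     * then a data-in phase. The trailing 0s are per-instruction delays (ns)
     * that simple controllers can honour and advanced ones may ignore. */
    u8 addr = 0, id[8];
    const struct nand_op_instr instrs[] = {
            NAND_OP_CMD(NAND_CMD_READID, 0),
            NAND_OP_ADDR(1, &addr, 0),
            NAND_OP_8BIT_DATA_IN(sizeof(id), id, 0),
    };
    struct nand_operation op = NAND_OPERATION(instrs);

    /* false: actually execute rather than only checking support */
    ret = chip->exec_op(chip, &op, false);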
2017-12-16  Merge back PM core material for v4.16.  [Rafael J. Wysocki]
2017-12-15  drm/ttm: use an ttm operation ctx for ttm_bo_move_xxx  [Roger He]
This includes ttm_bo_move_memcpy and ttm_bo_move_ttm. Signed-off-by: Roger He <Hongbo.He@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2017-12-15  drm/ttm: add allow_reserved_eviction and resv into ttm_operation_ctx  [Roger He]
allow_reserved_eviction: Allow eviction of reserved BOs
resv: Reservation object to allow reserved evictions with
Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Roger He <Hongbo.He@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2017-12-15  x86/gpu: add CFL to early quirks  [Lucas De Marchi]
CFL was missing from intel_early_ids[]. The PCI ID needs to be there to allow the memory region to be stolen, otherwise we could have RAM being arbitrarily overwritten if for example we keep using the UEFI framebuffer, depending on how BIOS has set up the e820 map. Fixes: b056f8f3d6b9 ("drm/i915/cfl: Add Coffee Lake PCI IDs for S Skus.") Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Anusha Srivatsa <anusha.srivatsa@intel.com> Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: David Airlie <airlied@linux.ie> Cc: intel-gfx@lists.freedesktop.org Cc: dri-devel@lists.freedesktop.org Cc: Ingo Molnar <mingo@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: x86@kernel.org Cc: <stable@vger.kernel.org> # v4.13+ 0890540e21cf drm/i915: add GT number to intel_device_info Cc: <stable@vger.kernel.org> # v4.13+ 41693fd52373 drm/i915/kbl: Change a KBL pci id to GT2 from GT1.5 Cc: <stable@vger.kernel.org> # v4.13+ Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Acked-by: Jani Nikula <jani.nikula@intel.com> Acked-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20171213200425.2954-1-lucas.demarchi@intel.com
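Conceptually, the fix is one more row in the early-quirks PCI ID table, along these lines (illustrative excerpt; the ID-list macro comes from include/drm/i915_pciids.h):

    /* arch/x86/kernel/early-quirks.c (sketch) */
    static const struct pci_device_id intel_early_ids[] __initconst = {
            /* ... existing entries up to Kaby Lake ... */
            INTEL_KBL_IDS(&gen9_early_ops),
            INTEL_CFL_IDS(&gen9_early_ops),  /* the Coffee Lake IDs that were missing */
            /* ... */
    };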
2017-12-15  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  [Linus Torvalds]
Pull networking fixes from David Miller: 1) Clamp timeouts to INT_MAX in conntrack, from Jay Elliot. 2) Fix broken UAPI for BPF_PROG_TYPE_PERF_EVENT, from Hendrik Brueckner. 3) Fix locking in ieee80211_sta_tear_down_BA_sessions, from Johannes Berg. 4) Add missing barriers to ptr_ring, from Michael S. Tsirkin. 5) Don't advertise gigabit in sh_eth when not available, from Thomas Petazzoni. 6) Check network namespace when delivering to netlink taps, from Kevin Cernekee. 7) Kill a race in raw_sendmsg(), from Mohamed Ghannam. 8) Use correct address in TCP md5 lookups when replying to an incoming segment, from Christoph Paasch. 9) Add schedule points to BPF map alloc/free, from Eric Dumazet. 10) Don't allow silly mtu values to be used in ipv4/ipv6 multicast, also from Eric Dumazet. 11) Fix SKB leak in tipc, from Jon Maloy. 12) Disable MAC learning on OVS ports of mlxsw, from Yuval Mintz. 13) SKB leak fix in skB_complete_tx_timestamp(), from Willem de Bruijn. 14) Add some new qmi_wwan device IDs, from Daniele Palmas. 15) Fix static key imbalance in ingress qdisc, from Jiri Pirko. * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (76 commits) net: qcom/emac: Reduce timeout for mdio read/write net: sched: fix static key imbalance in case of ingress/clsact_init error net: sched: fix clsact init error path ip_gre: fix wrong return value of erspan_rcv net: usb: qmi_wwan: add Telit ME910 PID 0x1101 support pkt_sched: Remove TC_RED_OFFLOADED from uapi net: sched: Move to new offload indication in RED net: sched: Add TCA_HW_OFFLOAD net: aquantia: Increment driver version net: aquantia: Fix typo in ethtool statistics names net: aquantia: Update hw counters on hw init net: aquantia: Improve link state and statistics check interval callback net: aquantia: Fill in multicast counter in ndev stats from hardware net: aquantia: Fill ndev stat couters from hardware net: aquantia: Extend stat counters to 64bit values net: aquantia: Fix hardware DMA stream overload on large MRRS net: aquantia: Fix actual speed capabilities reporting sock: free skb in skb_complete_tx_timestamp on error s390/qeth: update takeover IPs after configuration change s390/qeth: lock IP table while applying takeover changes ...
2017-12-15  net: phy: broadcom: Add entry for 5395 switch PHYs  [Florian Fainelli]
Add an entry for the builtin PHYs present in the Broadcom BCM5395 switch. This allows us to retrieve the PHY statistics among other things. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Tested-by: Chris Healy <cphealy@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
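The change boils down to one more struct phy_driver entry in drivers/net/phy/broadcom.c, roughly of this shape (the exact feature flags and callback names used for the BCM5395 are assumptions here):

    {
            .phy_id         = PHY_ID_BCM5395,       /* OUI/model of the builtin PHYs */
            .phy_id_mask    = 0xfffffff0,
            .name           = "Broadcom BCM5395",
            .features       = PHY_GBIT_FEATURES,
            /* the statistics hooks are what expose the per-PHY counters
             * through `ethtool --phy-statistics` */
            .get_sset_count = bcm_phy_get_sset_count,
            .get_strings    = bcm_phy_get_strings,
            .get_stats      = bcm53xx_phy_get_stats,
    },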
2017-12-15  usb: core: add support for USB_REQ_SET_ISOCH_DELAY  [Felipe Balbi]
USB SS and SSP hubs provide wHubDelay values in their hub descriptor which we should inform the USB device about. The USB 3.0 specification explains, in section 9.4.11, how to calculate the value and how to issue the request. Note that USB_REQ_SET_ISOCH_DELAY is valid in all device states (Default, Address, Configured); we just *chose* to issue it from the Address state right after successfully fetching the USB device descriptor. Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
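The request itself is a standard device-level control transfer; a minimal sketch (the helper name and the 'delay_ns' bookkeeping are placeholders, and the surrounding enumeration code is omitted):

    /* wValue carries the total isochronous path delay accumulated from the
     * hubs' wHubDelay values; wIndex and the data stage are zero/empty. */
    static int set_isoch_delay(struct usb_device *udev, u16 delay_ns)
    {
            return usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
                                   USB_REQ_SET_ISOCH_DELAY,
                                   USB_DIR_OUT | USB_TYPE_STANDARD | USB_RECIP_DEVICE,
                                   delay_ns, 0, NULL, 0,
                                   USB_CTRL_SET_TIMEOUT);
    }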
2017-12-15  Merge branch 'locking-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  [Linus Torvalds]
Pull locking fixes from Ingo Molnar: "Misc fixes: - Fix a S390 boot hang that was caused by the lock-break logic. Remove lock-break to begin with, as review suggested it was unreasonably fragile and our confidence in its continued good health is lower than our confidence in its removal. - Remove the lockdep cross-release checking code for now, because of unresolved false positive warnings. This should make lockdep work well everywhere again. - Get rid of the final (and single) ACCESS_ONCE() straggler and remove the API from v4.15. - Fix a liblockdep build warning" * 'locking-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: tools/lib/lockdep: Add missing declaration of 'pr_cont()' checkpatch: Remove ACCESS_ONCE() warning compiler.h: Remove ACCESS_ONCE() tools/include: Remove ACCESS_ONCE() tools/perf: Convert ACCESS_ONCE() to READ_ONCE() locking/lockdep: Remove the cross-release locking checks locking/core: Remove break_lock field when CONFIG_GENERIC_LOCKBREAK=y locking/core: Fix deadlock during boot on systems with GENERIC_LOCKBREAK
2017-12-15  sctp: implement handle_ftsn for sctp_stream_interleave  [Xin Long]
handle_ftsn is added as a member of sctp_stream_interleave, used to skip ssn for data or mid for idata, called for the SCTP_CMD_PROCESS_FWDTSN cmd. sctp_handle_iftsn works for ifwdtsn, and sctp_handle_fwdtsn works for fwdtsn. Note that, unlike sctp_handle_fwdtsn, sctp_handle_iftsn can do stream abort pd. Signed-off-by: Xin Long <lucien.xin@gmail.com> Acked-by: Marcelo R. Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-15  sctp: implement report_ftsn for sctp_stream_interleave  [Xin Long]
report_ftsn is added as a member of sctp_stream_interleave, used to skip tsn from tsnmap, remove old events from reasm or lobby queue, and abort pd for data or idata, called for SCTP_CMD_REPORT_FWDTSN cmd and asoc reset. sctp_report_iftsn works for ifwdtsn, and sctp_report_fwdtsn works for fwdtsn. Note that sctp_report_iftsn doesn't do asoc abort_pd, as stream abort_pd will be done when handling ifwdtsn. But when ftsn is equal with ftsn, which means asoc reset, asoc abort_pd has to be done. Signed-off-by: Xin Long <lucien.xin@gmail.com> Acked-by: Marcelo R. Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-15  sctp: implement validate_ftsn for sctp_stream_interleave  [Xin Long]
validate_ftsn is added as a member of sctp_stream_interleave, used to validate ssn/chunk type for fwdtsn or mid (message id)/chunk type for ifwdtsn, called in sctp_sf_eat_fwd_tsn, just as validate_data. If this check fails, an abort packet will be sent, as said in section 2.3.1 of RFC8260. As ifwdtsn and fwdtsn chunks have different length, it also defines ftsn_chunk_len for sctp_stream_interleave to describe the chunk size. Then it replaces all sizeof(struct sctp_fwdtsn_chunk) with sctp_ftsnchk_len. It also adds the process for ifwdtsn in rx path. As Marcelo pointed out, there's no need to add event table for ifwdtsn, but just share prsctp_chunk_event_table with fwdtsn's. It would drop fwdtsn chunk for ifwdtsn and drop ifwdtsn chunk for fwdtsn by calling validate_ftsn in sctp_sf_eat_fwd_tsn. After this patch, the ifwdtsn can be accepted. Note that this patch also removes the sctp.intl_enable check for idata chunks in sctp_chunk_event_lookup, as it will do this check in validate_data later. Signed-off-by: Xin Long <lucien.xin@gmail.com> Acked-by: Marcelo R. Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-12-15  sctp: implement generate_ftsn for sctp_stream_interleave  [Xin Long]
generate_ftsn is added as a member of sctp_stream_interleave, used to create fwdtsn or ifwdtsn chunk according to abandoned chunks, called in sctp_retransmit and sctp_outq_sack. sctp_generate_iftsn works for ifwdtsn, and sctp_generate_fwdtsn is still used for making fwdtsn. Signed-off-by: Xin Long <lucien.xin@gmail.com> Acked-by: Marcelo R. Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
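Taken together, the four sctp entries above each add one ftsn-related callback to the sctp_stream_interleave operations table; a rough sketch of the resulting shape (signatures reconstructed from the commit messages, not copied from the header):

    struct sctp_stream_interleave {
            __u16   data_chunk_len;
            __u16   ftsn_chunk_len;         /* fwdtsn and ifwdtsn chunks differ in size */
            /* ... data/idata callbacks added by earlier patches ... */
            void    (*generate_ftsn)(struct sctp_outq *q, __u32 ctsn);
            bool    (*validate_ftsn)(struct sctp_chunk *chunk);
            void    (*report_ftsn)(struct sctp_ulpq *ulpq, __u32 ftsn);
            void    (*handle_ftsn)(struct sctp_ulpq *ulpq,
                                   struct sctp_chunk *chunk);
    };

    /* the fwdtsn flavour in effect is then picked per association, e.g.: */
    asoc->stream.si->generate_ftsn(q, ctsn);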