2013-10-24  s390/cio: fix error-prone defines  (Peter Oberparleiter)
Missing parentheses may cause problems when using the defines together with operators of higher precedence. Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
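For illustration, the precedence pitfall looks like this (the define name and values are made up, not the actual cio flags):

    /* unparenthesized: */
    #define EXAMPLE_FLAGS 1 << 4 | 1 << 2
    /* "status & EXAMPLE_FLAGS" expands to "status & 1 << 4 | 1 << 2"; since
     * '<<' binds tighter than '&', and '&' tighter than '|', this evaluates
     * as (status & (1 << 4)) | (1 << 2) instead of testing both bits. */

    /* parenthesized, the define behaves the same in every expression: */
    #define EXAMPLE_FLAGS_FIXED ((1 << 4) | (1 << 2))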
2013-10-24  s390: Remove zfcpdump NR_CPUS dependency  (Michael Holzheu)
Currently zfcpdump can only collect registers for up to CONFIG_NR_CPUS CPUs. This dependency is not necessary. So remove it by dynamically allocating the save area array. Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/ftrace: prepare_ftrace_return() function call order  (Heiko Carstens)
Steven Rostedt noted that s390 is the only architecture which calls ftrace_push_return_trace() before ftrace_graph_entry() and therefore has the small advantage that trace.depth gets initialized automatically. However, this small advantage isn't worth the difference and possible subtle breakage that may result from this. So change s390 to have the same function call order as all other architectures: first ftrace_graph_entry(), then ftrace_push_return_trace(). Reported-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/crashdump: remove unused variable  (Heiko Carstens)
Get rid of this compile warning:

    arch/s390/kernel/crash_dump.c: In function 'copy_from_realmem':
    arch/s390/kernel/crash_dump.c:48:6: warning: unused variable 'rc' [-Wunused-variable]
      int rc;
          ^

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/atomic: use 'unsigned int' instead of 'unsigned long' for atomic_*_mask()  (Chen Gang)
The type of 'v->counter' is always 'int', and the related inline assembly code also processes 'int', so use 'unsigned int' instead of 'unsigned long' for the 'mask'. Signed-off-by: Chen Gang <gang.chen@asianux.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/gup: handle zero nr_pages case correctly  (Heiko Carstens)
If [__]get_user_pages_fast() gets called with nr_pages == 0, the current code would walk the page tables and pin pages until the first invalid pte is reached (or the kernel would crash while writing struct page pointers to the pages array). So let's handle at least the nr_pages == 0 case correctly and exit early. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
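A minimal sketch of the early exit described above (illustrative only, not the exact s390 patch; the helper that does the remaining walk is hypothetical):

    int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
                              struct page **pages)
    {
            if (nr_pages <= 0)      /* nothing requested: exit before */
                    return 0;       /* touching any page table        */

            /* hypothetical helper standing in for the unchanged walk/pin code */
            return gup_walk_and_pin(start, nr_pages, write, pages);
    }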
2013-10-24  s390/gup: reduce code duplication between [__]get_user_pages_fast functions  (Heiko Carstens)
Just call __get_user_pages_fast() from get_user_pages_fast() like powerpc. This saves a lot of duplicated code. Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/mm: do not initialize storage keys  (Martin Schwidefsky)
With dirty and referenced bits implemented in software it is unnecessary to initialize the storage key for every page. With this patch not a single storage key operation is done for a system that does not use KVM. For KVM set_pte_at/pgste_set_key will do the initialization for the guest view of the storage key when the mapping for the page is established in the host. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/bpf,jit: fix prolog oddity  (Martin Schwidefsky)
The prolog of functions generated by the bpf jit compiler uses an instruction sequence with an "ahi" instruction to create stack space instead of using an "aghi" instruction. Using the 32-bit "ahi" is not wrong as the stack we are operating on is an order-4 allocation which is always aligned to 16KB. But it is more consistent to use an "aghi" as the stack pointer is a 64-bit value. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390: cleanup and add sanity checks to control register macros  (Heiko Carstens)
- turn some macros into functions
- merge two almost identical versions for 32/64 bit
- add BUILD_BUG_ON() check to make sure the passed in array is large enough

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
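The array-size check can be pictured like this (simplified sketch with a made-up macro name; the real macros also contain the control-register access itself):

    /* Fail the build if 'array' cannot hold control registers low..high. */
    #define ctl_array_size_check(array, low, high) \
            BUILD_BUG_ON(sizeof(array) != ((high) - (low) + 1) * sizeof(unsigned long))

    /* e.g. "unsigned long cr[2]; ctl_array_size_check(cr, 2, 3);" compiles,
     * while passing a one-element array triggers a build error. */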
2013-10-24  s390/eadm_sch: improve quiesce handling  (Sebastian Ott)
When quiescing an eadm subchannel make sure that outstanding IO is cleared and potential timeout handlers are canceled. Reviewed-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com> Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/pci: implement hibernation hooks  (Sebastian Ott)
Implement architecture-specific functionality when a PCI device is doing a hibernate transition. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/uaccess: always run the kernel in home space  (Martin Schwidefsky)
Simplify the uaccess code by removing the user_mode=home option. The kernel will now always run in the home space mode. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/bitops: rename find_first_bit_left() to find_first_bit_inv()  (Heiko Carstens)
find_first_bit_left() and friends have nothing to do with the normal LSB0 bit numbering for big endian machines used in Linux (least significant bit has bit number 0). Instead they use MSB0 bit numbering, where the most significant bit has bit number 0. So rename find_first_bit_left() and friends to find_first_bit_inv(), to avoid any confusion. Also provide inv versions of set_bit, clear_bit and test_bit. This also removes the confusing use of e.g. set_bit() in airq.c which uses a "be_to_le" bit number conversion, which could imply that set_bit_le() could be used instead. But that is entirely wrong since the _le bitops variant uses yet another bit numbering scheme. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
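To make the two numbering schemes concrete (64-bit example values, shown as a sketch):

    unsigned long lsb_word = 0x0000000000000001UL;
    unsigned long msb_word = 0x8000000000000000UL;

    /* LSB0 (normal Linux bitops): bit 0 is the least significant bit.  */
    find_first_bit(&lsb_word, BITS_PER_LONG);      /* -> 0 */
    /* MSB0 (the new _inv variants): bit 0 is the most significant bit. */
    find_first_bit_inv(&msb_word, BITS_PER_LONG);  /* -> 0 */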
2013-10-24  s390/bitops: use flogr instruction to implement __ffs, ffs, __fls, fls and fls64  (Heiko Carstens)
Since z9 109 we have the flogr instruction which can be used to implement optimized versions of __ffs, ffs, __fls, fls and fls64. So implement and use them, instead of the generic variants. This reduces the size of the kernel image (defconfig, -march=z9-109) by 19,648 bytes. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/bitops: use generic find bit functions / reimplement _left variant  (Heiko Carstens)
Just like all other architectures we should use out-of-line find bit operations, since the inline variants bloat the size of the kernel image. And also like all other architectures we should only supply optimized variants of the __ffs, ffs, etc. primitives. Therefore this patch removes the inlined s390 find bit functions and uses the generic out-of-line variants instead. The optimization of the primitives follows with the next patch. With this patch the functions find_first_bit_left() and find_next_bit_left() have also been reimplemented, since logically they are nothing but a find_first_bit()/find_next_bit() implementation that uses an inverted __fls() instead of __ffs(). Also the restriction that these functions only work on machines which support the "flogr" instruction is gone now. This reduces the size of the kernel image (defconfig, -march=z9-109) by 144,482 bytes. The size of the function build_sched_domains() alone shrinks from 7 KB to 3.5 KB. We also get rid of unused functions like find_first_bit_le()... Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/s390dbf: use debug_level_enabled() where applicable  (Hendrik Brueckner)
Refactor direct debug level comparisons with the (internal) s390db->level member. Use the debug_level_enabled() function instead. Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/s390dbf: add debug_level_enabled() function  (Hendrik Brueckner)
Add the debug_level_enabled() function to check if debug events for a particular level would be logged. This might help to save cycles for debug events that require additional information collection. Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
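Intended usage is along these lines (sketch; the debug area pointer, level number and the helper that gathers the data are placeholders):

    /* Skip the expensive data collection unless level-5 events are logged. */
    if (debug_level_enabled(dbf_area, 5)) {
            char buf[64];

            format_expensive_state(buf, sizeof(buf));  /* hypothetical helper */
            debug_text_event(dbf_area, 5, buf);
    }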
2013-10-24  s390/bitops: optimize set_bit() for constant values  (Heiko Carstens)
Since zEC12 we have the interlocked-access facility 2, which allows the ni/oi/xi instructions to be used to update a single byte in storage with compare-and-swap semantics. So change set_bit(), clear_bit() and change_bit() to generate such code instead of a compare-and-swap loop (or using the load-and-* instruction family), if possible. This reduces the text segment by yet another 8KB (defconfig). Alternatively the long displacement variants niy/oiy/xiy could have been used, but the extended displacement field is usually not needed and therefore would only increase the size of the text segment again. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/bitops: remove CONFIG_SMP / simplify non-atomic bitops  (Heiko Carstens)
Remove CONFIG_SMP from the bitops code. This reduces the C code significantly but also generates better code for the SMP case. It means that for !CONFIG_SMP set_bit() and friends now also have compare-and-swap semantics (read: more code). However, nobody really cares about !CONFIG_SMP and this is the trade-off for simplifying the SMP code, which we do care about. The non-atomic bitops like __set_bit() now also generate better code, because the old code did not have a __builtin_constant_p() check for the CONFIG_SMP case and therefore always generated the inline assembly variant. The inline assemblies for the non-atomic case have now been removed completely, since gcc produces better code that accesses fewer memory operands. test_bit() also got a bit simplified: it did have a __builtin_constant_p() check, but with two identical (differently written) code paths for each case. As a result this mainly reduces the amount of code to maintain and is not very relevant for code generation, since there are not many non-atomic bitops usages that we care about. (code reduction defconfig kernel image before/after: 560 bytes). Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/atomic: various small cleanups  (Heiko Carstens)
- add a typecheck to the defines to make sure they operate on an atomic_t
- simplify inline assembly constraints
- keep variable names common between functions

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
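The typecheck idea looks roughly like this (generic sketch, not the literal s390 macro):

    #include <linux/typecheck.h>

    /* typecheck() forces a compile-time warning if 'v' is not an atomic_t *. */
    #define atomic_example_op(v)                    \
    ({                                              \
            typecheck(atomic_t *, (v));             \
            atomic_read(v);                         \
    })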
2013-10-24  s390/atomic: optimize atomic_add() for constant values  (Heiko Carstens)
If the interlocked-access facility 1 is available we can use the asi and agsi instructions for interlocked updates if the value to be added is a constant and small (in the range -128..127). asi and agsi do not return the old or new value, therefore these instructions can only be used for atomic_(add|sub|inc|dec)[64]. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
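Conceptually the selection looks like this (C-level sketch with a made-up macro name; the actual instruction is emitted via inline assembly):

    /* asi/agsi take an 8-bit signed immediate, so the addend must be a
     * compile-time constant in the range -128..127. */
    #define addend_fits_asi(i) \
            (__builtin_constant_p(i) && (i) >= -128 && (i) <= 127)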
2013-10-24  s390/qdio: fix atomic_sub() misusage  (Heiko Carstens)
get_inbound_buffer_frontier() makes use of the return value of atomic_sub(), which shouldn't work, since atomic_sub() is supposed to return void. This only works on s390 because atomic_sub() gets mapped to atomic_sub_return() with a define without changing its return value to void. So use atomic_sub_return() instead of atomic_sub() in qeth code before fixing atomic ops. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
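The portable pattern is to use the _return variant whenever the result is needed (sketch with a made-up counter):

    atomic_t pending = ATOMIC_INIT(3);

    /* wrong in portable code: atomic_sub() returns void */
    /*   if (atomic_sub(1, &pending) == 0) ...           */

    /* correct: atomic_sub_return() yields the new value */
    if (atomic_sub_return(1, &pending) == 0)
            pr_info("no buffers in use\n");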
2013-10-24  s390/kprobes: allow kprobes only on known instructions  (Heiko Carstens)
Since we have an in-kernel disassembler we can make sure that there won't be any kprobes set on random data. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/kprobes: use insn_length helper function  (Heiko Carstens)
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/dis: move disassembler function prototypes to proper header file  (Heiko Carstens)
Now that the in-kernel disassembler has its own header file, move the disassembler-related function prototypes to that header file. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/dis: move common definitions to a header file  (Suzuki K. Poulose)
The patch moves some of the definitions to a header file. No functional changes involved. I have retained the Copyright Statement from the original file. Signed-off-by: Suzuki K Poulose <suzuki@in.ibm.com> [Heiko Carstens: rename s390-dis.h to dis.h] Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/dis: rename structures for unique types  (Suzuki K. Poulose)
Rename the 'insn' and 'operand' structures to more canonical names to avoid conflicts. struct insn represents information about an instruction, including the mnemonic, format and opcode. struct operand represents the 'properties' and information on how to interpret the operand value; it doesn't contain the value itself. We rename these structures to avoid a global conflict, i.e.:

    1,$s/struct insn/struct s390_insn/g
    1,$s/struct operand/struct s390_operand/g

Signed-off-by: Suzuki K Poulose <suzuki@in.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/atomic: make use of interlocked-access facility 1 instructions  (Heiko Carstens)
Same as for bitops: make use of the interlocked-access facility 1 instructions which allow to atomically update storage locations without a compare-and-swap loop. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/atomic: implement atomic_sub_return() with atomic_add_return()  (Heiko Carstens)
Get rid of our own atomic_sub_return() implementation. Otherwise we can't make use of the interlocked-access facility 1 instructions for atomic_sub_return(), since there is no "load and subtract" instruction available. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
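The replacement boils down to negating the operand (sketch of the idea):

    /* atomic_sub_return() expressed through atomic_add_return() */
    static inline int atomic_sub_return_sketch(int i, atomic_t *v)
    {
            return atomic_add_return(-i, v);
    }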
2013-10-24  s390/kprobes: have more correct if statement in s390_get_insn_slot()  (Heiko Carstens)
When checking whether the insn address is a kernel image or a module address, it should be an if-else-if statement, not two independent if statements. This doesn't really fix a bug, but matches s390_free_insn_slot(). Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
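The structural difference (predicate and helper names are made up for illustration):

    /* before: both checks run even after the first one matched */
    if (addr_in_kernel_image(addr))
            slot = get_kernel_insn_slot();
    if (addr_in_module(addr))
            slot = get_module_insn_slot();

    /* after: the checks are mutually exclusive, as in s390_free_insn_slot() */
    if (addr_in_kernel_image(addr))
            slot = get_kernel_insn_slot();
    else if (addr_in_module(addr))
            slot = get_module_insn_slot();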
2013-10-24  s390: always set -march compiler option  (Heiko Carstens)
Currently we only set the -march compiler option if the kbuild system figured out that the compiler actually supports the selected architecture (cc-option test). As a result, no -march compiler option is set when a cpu architecture is selected that the current compiler does not support. The kernel compile will then succeed, but for the compiler's default architecture instead of the (unsupported) selected one. Change this behaviour, so that compiles fail if the compiler does not support the selected cpu architecture. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/bitops: make use of interlocked-access facility 1 instructions  (Heiko Carstens)
Make use of the interlocked-access facility 1 that was added with the z196 architecture. This facility added new instructions which can atomically update a storage location without a compare-and-swap loop. E.g. setting a bit within a "long" can be done with a single instruction. The size of the kernel image gets ~30kb smaller. Considering that there are approximately 1900 bitops call sites, this means each call site saves about 15-16 bytes, which is what one would expect. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  Docs: arm64: booting: clarify boot requirements  (Mark Rutland)
There are a few points in the arm64 booting document which are unclear (such as the initial state of secondary CPUs), and/or have not been documented (PSCI is a supported mechanism for booting secondary CPUs). This patch amends the arm64 boot document to better express the (existing) requirements, and to describe PSCI as a supported booting mechanism. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Will Deacon <will.deacon@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Dave Martin <dave.martin@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Fu Wei <tekkamanninja@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-24  arm64: Export __copy_in_user() to modules  (Catalin Marinas)
This function may be called from loadable modules, so it needs exporting. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Loc Ho <lho@apm.com>
2013-10-24  arm64: cmpxchg: implement cmpxchg64_relaxed  (Will Deacon)
This patch introduces cmpxchg64_relaxed for arm64 using the existing cmpxchg_local macro, which performs a cmpxchg operation (up to 64 bits) without barrier semantics. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-24  arm64: lockref: add support for lockless lockrefs using cmpxchg  (Will Deacon)
Our spinlocks are only 32-bit (2x16-bit tickets) and our cmpxchg can deal with 8-bytes (as one would hope!). This patch wires up the cmpxchg-based lockless lockref implementation for arm64. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-24  arm64: locks: introduce ticket-based spinlock implementation  (Will Deacon)
This patch introduces a ticket lock implementation for arm64, along the same lines as the implementation for arch/arm/. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-24  arm64: Fix memory layout typo  (Catalin Marinas)
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-24  iio: light: vcnl4000: Remove redundant code  (Sachin Kamat)
The if check is redundant as the value obtained from iio_device_register() is already in the required format. Hence return the function directly. Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org> Signed-off-by: Jonathan Cameron <jic23@kernel.org>
2013-10-24  iio: dac: mcp4725: Remove redundant code  (Sachin Kamat)
Remove an inconsequential print message and return directly thereby cleaning up some code. Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org> Signed-off-by: Jonathan Cameron <jic23@kernel.org>
2013-10-24  iio: dac: max517: Remove redundant variable  (Sachin Kamat)
Remove an inconsequential print message and return directly thereby eliminating an intermediate variable. Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org> Signed-off-by: Jonathan Cameron <jic23@kernel.org>
2013-10-24  iio: dac: ad5755: Remove redundant code  (Sachin Kamat)
The if check is redundant as the value obtained from iio_device_register() is already in the required format. Error messages are already printed by iio_device_register(); hence not needed. Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org> Signed-off-by: Jonathan Cameron <jic23@kernel.org>
2013-10-24  iio: dac: ad5421: Remove redundant code  (Sachin Kamat)
The if check is redundant as the value obtained from iio_device_register() is already in the required format. Hence return the function directly. Error messages are already printed by iio_device_register(); hence not needed. Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org> Signed-off-by: Jonathan Cameron <jic23@kernel.org>
2013-10-24  ALSA: hda/realtek - Raise the delay for alc283_shutup  (Kailang Yang)
On some machines an 85ms delay can still result in a pop noise when the codec enters D3. Raising the delay to 100ms covers more machines. Signed-off-by: Kailang Yang <kailang@realtek.com> Signed-off-by: Takashi Iwai <tiwai@suse.de>
2013-10-24  iio: adc: twl6030-gpadc: Remove redundant code  (Sachin Kamat)
The if check is redundant as the value obtained from iio_device_register() is already in the required format. Hence return the function directly. Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org> Signed-off-by: Jonathan Cameron <jic23@kernel.org>
2013-10-24  iio: accel: kxsd9: Remove redundant variable  (Sachin Kamat)
Return directly thereby eliminating an intermediate variable. Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org> Signed-off-by: Jonathan Cameron <jic23@kernel.org>
2013-10-24  iio: core: Add missing braces  (Sachin Kamat)
Silences the following checkpatch warning: WARNING: sizeof *iio_attr should be sizeof(*iio_attr) Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org> Signed-off-by: Jonathan Cameron <jic23@kernel.org>
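The style change itself is tiny (the surrounding allocation call is only illustrative):

    /* checkpatch-preferred form: */
    iio_attr = kzalloc(sizeof(*iio_attr), GFP_KERNEL);
    /* instead of: */
    iio_attr = kzalloc(sizeof *iio_attr, GFP_KERNEL);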
2013-10-24  iio: core: Use pr_err instead of printk  (Sachin Kamat)
Use of pr_err is preferred to printk. Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org> Signed-off-by: Jonathan Cameron <jic23@kernel.org>
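In practice this means (message text is illustrative):

    /* preferred: */
    pr_err("iio: failed to allocate attribute\n");
    /* instead of: */
    printk(KERN_ERR "iio: failed to allocate attribute\n");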
2013-10-24  perf script python: Fix mem leak due to missing Py_DECREFs on dict entries  (Joseph Schuchart)
We are using the Python scripting interface in perf to extract kernel events relevant for performance analysis of HPC codes. We noticed that the "perf script" call allocates a significant amount of memory (in the order of several 100 MiB) during its run, e.g. 125 MiB for a 25 MiB input file:

    $> perf record -o perf.data -a -R -g fp \
         -e power:cpu_frequency -e sched:sched_switch \
         -e sched:sched_migrate_task -e sched:sched_process_exit \
         -e sched:sched_process_fork -e sched:sched_process_exec \
         -e cycles -m 4096 --freq 4000
    $> /usr/bin/time perf script -i perf.data -s dummy_script.py
    0.84user 0.13system 0:01.92elapsed 51%CPU (0avgtext+0avgdata 125532maxresident)k
    73072inputs+0outputs (57major+33086minor)pagefaults 0swaps

Upon further investigation using the valgrind massif tool, we noticed that Python objects that are created in trace-event-python.c via PyString_FromString*() (and their Integer and Long counterparts) are never freed. The reason for this seems to be missing Py_DECREF calls on the objects that are returned by these functions and stored in the Python dictionaries. The Python dictionaries do not steal references (as opposed to Python tuples and lists) but instead add their own reference. Hence, the reference that is returned by these object creation functions is never released and the memory is leaked. (see [1,2]) The attached patch fixes this by wrapping all relevant calls to PyDict_SetItemString() and decrementing the reference counter immediately after the Python function call. This reduces the allocated memory to a reasonable amount:

    $> /usr/bin/time perf script -i perf.data -s dummy_script.py
    0.73user 0.05system 0:00.79elapsed 99%CPU (0avgtext+0avgdata 49132maxresident)k
    0inputs+0outputs (0major+14045minor)pagefaults 0swaps

For comparison, with a 120 MiB input file the memory consumption reported by time drops from almost 600 MiB to 146 MiB. The patch has been tested using Linux 3.8.2 with Python 2.7.4 and Linux 3.11.6 with Python 2.7.5. Please let me know if you need any further information.

[1] http://docs.python.org/2/c-api/tuple.html#PyTuple_SetItem
[2] http://docs.python.org/2/c-api/dict.html#PyDict_SetItemString

Signed-off-by: Joseph Schuchart <joseph.schuchart@tu-dresden.de> Reviewed-by: Tom Zanussi <tom.zanussi@linux.intel.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Tom Zanussi <tom.zanussi@linux.intel.com> Link: http://lkml.kernel.org/r/1381468543-25334-4-git-send-email-namhyung@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
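The described wrapper pattern looks roughly like this (the helper name is illustrative):

    #include <Python.h>

    /* PyDict_SetItemString() adds its own reference rather than stealing ours,
     * so the reference returned by PyString_FromString()/PyInt_FromLong() must
     * be dropped right after the value has been stored in the dict. */
    static void pydict_set_item_string_decref(PyObject *dict, const char *key,
                                              PyObject *val)
    {
            PyDict_SetItemString(dict, key, val);
            Py_DECREF(val);
    }

    /* usage: pydict_set_item_string_decref(dict, "comm", PyString_FromString(comm)); */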