|
Remove CONFIG_SMP from bitops code. This reduces the C code significantly
but also generates better code for the SMP case.
This means that for !CONFIG_SMP set_bit() and friends now also have
compare-and-swap semantics (read: more code). However, nobody really cares
about !CONFIG_SMP and this is the trade-off to simplify the SMP code, which we
do care about.
The non-atomic bitops like __set_bit() now also generate better code
because the old code did not have a __builtin_constant_p() check for the
CONFIG_SMP case and therefore always generated the inline assembly variant.
However, the inline assemblies for the non-atomic case have now been removed
entirely, since gcc can produce better code which accesses fewer memory
operands.
test_bit() also got a bit simplified: it did have a __builtin_constant_p()
check, but with two identical code paths for each case (just written
differently).
As a result this mainly reduces the amount of code to be maintained, but is
not very relevant for code generation, since there are not many non-atomic
bitops usages that we care about.
(code reduction defconfig kernel image before/after: 560 bytes).
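For reference, a minimal sketch of what a plain-C non-atomic __set_bit() can
look like (illustrative only, not the exact s390 code); gcc is then free to
pick the best instruction itself:
static inline void __set_bit(unsigned long nr, volatile unsigned long *ptr)
{
        unsigned long mask = 1UL << (nr & (BITS_PER_LONG - 1));
        unsigned long *addr = (unsigned long *)ptr + (nr / BITS_PER_LONG);

        /* plain (non-atomic) read-modify-write, no inline assembly needed */
        *addr |= mask;
}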
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
- add a typecheck to the defines to make sure they operate on an atomic_t
  (see the sketch below)
- simplify inline assembly constraints
- keep variable names common between functions
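A minimal sketch of the typecheck idea (illustrative only, not the exact
patch; sample_atomic_inc is a hypothetical name):
#include <linux/typecheck.h>
#include <linux/atomic.h>

/* typecheck() costs nothing at run time but makes the build complain if
 * the argument is not an atomic_t pointer.
 */
#define sample_atomic_inc(v)                    \
do {                                            \
        typecheck(atomic_t *, v);               \
        atomic_inc(v);                          \
} while (0)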
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
If the interlocked-access facility 1 is available we can use the asi
and agsi instructions for interlocked updates if the value to be added
is a constant and small (in the range of -128..127).
asi and agsi do not return the old or new value, therefore these
instructions can only be used for atomic_(add|sub|inc|dec)[64].
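A minimal sketch of how such an update might look (assuming a z196 target;
atomic_add_slow() is a hypothetical name for the compare-and-swap fallback):
static inline void atomic_add(int i, atomic_t *v)
{
        if (__builtin_constant_p(i) && i >= -128 && i <= 127) {
                /* single interlocked add-immediate, old value not needed */
                asm volatile(
                        "       asi     %0,%1\n"
                        : "+Q" (v->counter)
                        : "i" (i)
                        : "cc", "memory");
                return;
        }
        atomic_add_slow(i, v);  /* hypothetical cs-loop based fallback */
}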
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
get_inbound_buffer_frontier() makes use of the return value of atomic_sub()
which shouldn't work, since atomic_sub() is supposed to return void.
This only works on s390 because atomic_sub() gets mapped to atomic_sub_return()
with a define, without changing its return value to void.
So use atomic_sub_return() instead of atomic_sub() in qeth code before fixing
atomic ops.
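Roughly, the pattern being changed looks like this (hedged illustration with
hypothetical names):
        /* before: relies on atomic_sub() having a return value, which only
         * happens to work because of the s390 define */
        if (atomic_sub(count, &queue->used_buffers) == 0)
                requeue_buffers(queue);

        /* after: use the interface that is defined to return the new value */
        if (atomic_sub_return(count, &queue->used_buffers) == 0)
                requeue_buffers(queue);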
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Since we have an in-kernel disassembler we can make sure that
there won't be any kprobes set on random data.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Now that the in-kernel disassembler has its own header file, move the
disassembler-related function prototypes to that header file.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
The patch moves some of the definitions to a
header file. No functional changes involved.
I have retained the Copyright Statement from the
original file.
Signed-off-by: Suzuki K Poulose <suzuki@in.ibm.com>
[Heiko Carstens: rename s390-dis.h to dis.h]
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Rename 'insn' and 'operand' structures to more canonical names
to avoid conflicts.
struct insn represents information about an instruction, including
the mnemonics, format and opcode.
struct operand represents the 'properties' and information on how to
interpret the operand value and doesn't contain the value.
We rename these structures to avoid a global conflict,
i.e.:
1,$s/struct insn/struct s390_insn/g
1,$s/struct operand/struct s390_operand/g
Signed-off-by: Suzuki K Poulose <suzuki@in.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Same as for bitops: make use of the interlocked-access facility 1
instructions which allow to atomically update storage locations
without a compare-and-swap loop.
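For example, atomic_add_return() can be based on the "load and add" (laa)
instruction, which also delivers the old value (a sketch, assuming a z196
target):
static inline int atomic_add_return(int i, atomic_t *v)
{
        int old;

        asm volatile(
                "       laa     %0,%2,%1\n"     /* old = counter; counter += i, interlocked */
                : "=d" (old), "+Q" (v->counter)
                : "d" (i)
                : "cc", "memory");
        return old + i;
}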
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Get rid of our own atomic_sub_return() implementation. Otherwise we can't
make use of the interlocked-access facility 1 instructions for
atomic_sub_return(), since there is no "load and subtract" instruction
available.
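With the separate implementation gone, subtraction can simply be expressed as
an add of the negated value (a sketch of the idea):
#define atomic_sub(i, v)                atomic_add(-(int)(i), v)
#define atomic_sub_return(i, v)         atomic_add_return(-(int)(i), v)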
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
When checking whether the insn address is a kernel image or a module
address it should be an if-else-if statement, not two independent if
statements.
This doesn't really fix a bug, but matches s390_free_insn_slot().
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Currently we only set the -march compiler option if the kbuild
system figured out that the compiler actually supports the selected
architecture (cc-option test).
As a result, no -march compiler option is set when a cpu architecture
is selected that the current compiler does not support.
The kernel compile will then succeed, but with the compiler's default
architecture instead of the (unsupported) selected one.
Change this behaviour, so compiles will fail if the compiler does
not support the selected cpu architecture.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
Make use of the interlocked-access facility 1 that got added with the
z196 architecture.
This facility added new instructions which can atomically update a
storage location without a compare-and-swap loop. E.g. setting a bit
within a "long" can be done with a single instruction.
The size of the kernel image gets ~30kb smaller. Considering that there
are approx. 1900 bitops call sites, this means a saving of about 15-16
bytes per call site, which is what one would expect.
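A sketch of what an interlocked set_bit() boils down to (illustrative only,
assuming a z196 target and 64-bit longs):
static inline void set_bit(unsigned long nr, volatile unsigned long *ptr)
{
        unsigned long *addr = (unsigned long *)ptr + (nr / BITS_PER_LONG);
        unsigned long mask = 1UL << (nr & (BITS_PER_LONG - 1));
        unsigned long old;

        /* "load and or" updates *addr interlocked; the old value returned
         * in %0 is simply ignored here */
        asm volatile(
                "       laog    %0,%2,%1\n"
                : "=d" (old), "+Q" (*addr)
                : "d" (mask)
                : "cc", "memory");
}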
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
There are a few points in the arm64 booting document which are unclear
(such as the initial state of secondary CPUs), and/or have not been
documented (PSCI is a supported mechanism for booting secondary CPUs).
This patch amends the arm64 boot document to better express the
(existing) requirements, and to describe PSCI as a supported booting
mechanism.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Fu Wei <tekkamanninja@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
This function may be called from loadable modules, so it needs
exporting.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Loc Ho <lho@apm.com>
|
|
This patch introduces cmpxchg64_relaxed for arm64 using the existing
cmpxchg_local macro, which performs a cmpxchg operation (up to 64 bits)
without barrier semantics.
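The mapping essentially boils down to (a sketch of the define described
above):
#define cmpxchg64_relaxed(ptr, o, n)    cmpxchg_local((ptr), (o), (n))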
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Our spinlocks are only 32-bit (2x16-bit tickets) and our cmpxchg can
deal with 8 bytes (as one would hope!).
This patch wires up the cmpxchg-based lockless lockref implementation
for arm64.
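The key piece is telling the generic lockref code when a lock value is free,
which for a 2x16-bit ticket lock is just an equality test (a hedged sketch):
static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
{
        return lock.owner == lock.next;
}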
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
This patch introduces a ticket lock implementation for arm64, along the
same lines as the implementation for arch/arm/.
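Conceptually a ticket lock looks like this (a plain-C sketch using compiler
builtins; the real arm64 code uses ldaxr/stxr plus wfe/sev, and the names
below are hypothetical):
typedef struct {
        unsigned short owner;   /* ticket currently being served */
        unsigned short next;    /* next ticket to hand out */
} sample_ticket_lock_t;

static void sample_ticket_lock(sample_ticket_lock_t *lock)
{
        unsigned short ticket = __atomic_fetch_add(&lock->next, 1, __ATOMIC_RELAXED);

        while (__atomic_load_n(&lock->owner, __ATOMIC_ACQUIRE) != ticket)
                ;       /* spin until our ticket comes up */
}

static void sample_ticket_unlock(sample_ticket_lock_t *lock)
{
        /* only the lock holder writes owner, so a plain read is fine here */
        __atomic_store_n(&lock->owner, lock->owner + 1, __ATOMIC_RELEASE);
}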
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
The if check is redundant as the value obtained from
iio_device_register() is already in the required format.
Hence return its value directly.
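The pattern across these cleanups, roughly (hedged illustration, surrounding
code omitted):
        /* before */
        ret = iio_device_register(indio_dev);
        if (ret < 0)
                return ret;
        return 0;

        /* after */
        return iio_device_register(indio_dev);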
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
|
|
Remove an inconsequential print message and return directly
thereby cleaning up some code.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
|
|
Remove an inconsequential print message and return directly
thereby eliminating an intermediate variable.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
|
|
The if check is redundant as the value obtained from
iio_device_register() is already in the required format.
Error messages are already printed by iio_device_register();
hence not needed.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
|
|
The if check is redundant as the value obtained from
iio_device_register() is already in the required format.
Hence return its value directly. Error messages are already
printed by iio_device_register(); hence not needed.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
|
|
The if check is redundant as the value obtained from
iio_device_register() is already in the required format.
Hence return its value directly.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
|
|
Return directly thereby eliminating an intermediate variable.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
|
|
Silences the following checkpatch warning:
WARNING: sizeof *iio_attr should be sizeof(*iio_attr)
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
|
|
Use of pr_err is preferred to printk.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
|
|
We are using the Python scripting interface in perf to extract kernel
events relevant for performance analysis of HPC codes. We noticed that
the "perf script" call allocates a significant amount of memory (in the
order of several 100 MiB) during its run, e.g. 125 MiB for a 25 MiB
input file:
$> perf record -o perf.data -a -R -g fp \
-e power:cpu_frequency -e sched:sched_switch \
-e sched:sched_migrate_task -e sched:sched_process_exit \
-e sched:sched_process_fork -e sched:sched_process_exec \
-e cycles -m 4096 --freq 4000
$> /usr/bin/time perf script -i perf.data -s dummy_script.py
0.84user 0.13system 0:01.92elapsed 51%CPU (0avgtext+0avgdata
125532maxresident)k
73072inputs+0outputs (57major+33086minor)pagefaults 0swaps
Upon further investigation using the valgrind massif tool, we noticed
that Python objects that are created in trace-event-python.c via
PyString_FromString*() (and their Integer and Long counterparts) are
never free'd.
The reason for this seems to be missing Py_DECREF calls on the objects
that are returned by these functions and stored in the Python
dictionaries. The Python dictionaries do not steal references (as
opposed to Python tuples and lists) but instead add their own reference.
Hence, the reference that is returned by these object creation functions
is never released and the memory is leaked. (see [1,2])
The attached patch fixes this by wrapping all relevant calls to
PyDict_SetItemString() and decrementing the reference counter
immediately after the Python function call.
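The wrapper amounts to something like this (a minimal sketch):
static void pydict_set_item_string_decref(PyObject *dict, const char *key,
                                          PyObject *val)
{
        PyDict_SetItemString(dict, key, val);   /* the dict adds its own reference */
        Py_DECREF(val);                         /* drop the creation reference */
}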
This reduces the allocated memory to a reasonable amount:
$> /usr/bin/time perf script -i perf.data -s dummy_script.py
0.73user 0.05system 0:00.79elapsed 99%CPU (0avgtext+0avgdata
49132maxresident)k
0inputs+0outputs (0major+14045minor)pagefaults 0swaps
For comparison, with a 120 MiB input file the memory consumption
reported by time drops from almost 600 MiB to 146 MiB.
The patch has been tested using Linux 3.8.2 with Python 2.7.4 and Linux
3.11.6 with Python 2.7.5.
Please let me know if you need any further information.
[1] http://docs.python.org/2/c-api/tuple.html#PyTuple_SetItem
[2] http://docs.python.org/2/c-api/dict.html#PyDict_SetItemString
Signed-off-by: Joseph Schuchart <joseph.schuchart@tu-dresden.de>
Reviewed-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Tom Zanussi <tom.zanussi@linux.intel.com>
Link: http://lkml.kernel.org/r/1381468543-25334-4-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
This patch adds an explicit check + failure for XCOPY I/O to source +
destination devices with a non-matching block_size.
This limitation is currently due to the fact that the scatterlist
memory allocated for the XCOPY READ operation is passed zero-copy
to the XCOPY WRITE operation.
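A hedged sketch of the kind of check described (sample_xcopy_check_block_size
is a hypothetical helper name):
static int sample_xcopy_check_block_size(struct se_device *src_dev,
                                         struct se_device *dst_dev)
{
        /* zero-copy reuse of the READ scatterlist for the WRITE only works
         * if both devices use the same logical block size */
        if (src_dev->dev_attrib.block_size != dst_dev->dev_attrib.block_size) {
                pr_err("XCOPY: mismatched source/destination block_size, rejecting\n");
                return -EINVAL;
        }
        return 0;
}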
Reported-by: Thomas Glanzmann <thomas@glanzmann.de>
Reported-by: Douglas Gilbert <dgilbert@interlog.com>
Cc: Thomas Glanzmann <thomas@glanzmann.de>
Cc: Douglas Gilbert <dgilbert@interlog.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
|
|
This patch adds the missing non-zero se_cmd->scsi_status check required
for local XCOPY I/O within target_xcopy_issue_pt_cmd() to signal an
exception case failure.
This will trigger the generation of SAM_STAT_CHECK_CONDITION status
from within target_xcopy_do_work() process context code.
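A hedged sketch of the kind of check being added (sample_check_pt_status is
a hypothetical helper name):
static int sample_check_pt_status(struct se_cmd *se_cmd)
{
        /* a non-zero scsi_status means the locally issued command failed */
        if (se_cmd->scsi_status) {
                pr_err("XCOPY PT command failed, scsi_status: 0x%02x\n",
                       se_cmd->scsi_status);
                return -EINVAL;
        }
        return 0;
}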
Reported-by: Thomas Glanzmann <thomas@glanzmann.de>
Reported-by: Douglas Gilbert <dgilbert@interlog.com>
Cc: Thomas Glanzmann <thomas@glanzmann.de>
Cc: Douglas Gilbert <dgilbert@interlog.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
|
|
This patch adds the missing xcopy_pt_cmd->sense_buffer[] required for
correctly handling CHECK_CONDITION exceptions within the locally
generated XCOPY I/O path.
Also update target_xcopy_read_source() + target_xcopy_setup_pt_cmd()
to pass this buffer into transport_init_se_cmd() to correctly setup
se_cmd->sense_buffer.
Reported-by: Thomas Glanzmann <thomas@glanzmann.de>
Reported-by: Douglas Gilbert <dgilbert@interlog.com>
Cc: Thomas Glanzmann <thomas@glanzmann.de>
Cc: Douglas Gilbert <dgilbert@interlog.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
|
|
When a machine goes to S3/S4 after power-save is enabled, the runtime
PM refcount might be incorrectly decreased because the power-down
triggered soon after resume assumes that the controller was already
powered up, and issues the pm_notify down.
This patch fixes the incorrect pm_notify call simply by checking the
current value properly.
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
Pull md bugfixes from Neil Brown:
"Assorted md bug-fixes for 3.12.
All tagged for -stable releases too"
* tag 'md/3.12-fixes' of git://neil.brown.name/md:
raid5: avoid finding "discard" stripe
raid5: set bio bi_vcnt 0 for discard request
md: avoid deadlock when md_set_badblocks.
md: Fix skipping recovery for read-only arrays.
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Pull SCSI fixes from James Bottomley:
"This is a set of two fixes which cause oopses (Buslogic, qla2xxx) and
one fix which may cause a hang because of request miscounting (sd)"
* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
[SCSI] sd: call blk_pm_runtime_init before add_disk
[SCSI] qla2xxx: Fix request queue null dereference.
[SCSI] BusLogic: Fix an oops when intializing multimaster adapter
|
|
This patch changes isert_connect_release() to correctly check for
the existence of struct isert_device *device before checking
isert_device->use_frwr.
Signed-off-by: Vu Pham <vu@mellanox.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
|
|
SCSI discard will damage the discard stripe bio settings, e.g., some fields
are changed. If the stripe is reused very soon, we have wrong bio settings.
So remove the discard stripe from the hash list; next time the stripe will be
fully initialized.
Suitable for backport to 3.7+.
Cc: <stable@vger.kernel.org> (3.7+)
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: NeilBrown <neilb@suse.de>
|
|
The SCSI layer will add a new payload for a discard request. If two bios are
merged into one, the second bio has bi_vcnt 1, which is set in raid5. This
will confuse SCSI and cause an oops.
Suitable for backport to 3.7+
Cc: stable@vger.kernel.org (v3.7+)
Reported-by: Jes Sorensen <Jes.Sorensen@redhat.com>
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
|
|
When operating on a hard disk and hitting errors, md_set_badblocks is called
after scsi_restart_operations, which has already disabled the irq. But
md_set_badblocks will call write_sequnlock_irq and enable the irq, so a
softirq can preempt the current thread and that may cause a deadlock. I think
this situation should use write_seqlock_irqsave/write_sequnlock_irqrestore
instead.
I met the situation and the call trace is below:
[ 638.919974] BUG: spinlock recursion on CPU#0, scsi_eh_13/1010
[ 638.921923] lock: 0xffff8800d4d51fc8, .magic: dead4ead, .owner: scsi_eh_13/1010, .owner_cpu: 0
[ 638.923890] CPU: 0 PID: 1010 Comm: scsi_eh_13 Not tainted 3.12.0-rc5+ #37
[ 638.925844] Hardware name: To be filled by O.E.M. To be filled by O.E.M./MAHOBAY, BIOS 4.6.5 03/05/2013
[ 638.927816] ffff880037ad4640 ffff880118c03d50 ffffffff8172ff85 0000000000000007
[ 638.929829] ffff8800d4d51fc8 ffff880118c03d70 ffffffff81730030 ffff8800d4d51fc8
[ 638.931848] ffffffff81a72eb0 ffff880118c03d90 ffffffff81730056 ffff8800d4d51fc8
[ 638.933884] Call Trace:
[ 638.935867] <IRQ> [<ffffffff8172ff85>] dump_stack+0x55/0x76
[ 638.937878] [<ffffffff81730030>] spin_dump+0x8a/0x8f
[ 638.939861] [<ffffffff81730056>] spin_bug+0x21/0x26
[ 638.941836] [<ffffffff81336de4>] do_raw_spin_lock+0xa4/0xc0
[ 638.943801] [<ffffffff8173f036>] _raw_spin_lock+0x66/0x80
[ 638.945747] [<ffffffff814a73ed>] ? scsi_device_unbusy+0x9d/0xd0
[ 638.947672] [<ffffffff8173fb1b>] ? _raw_spin_unlock+0x2b/0x50
[ 638.949595] [<ffffffff814a73ed>] scsi_device_unbusy+0x9d/0xd0
[ 638.951504] [<ffffffff8149ec47>] scsi_finish_command+0x37/0xe0
[ 638.953388] [<ffffffff814a75e8>] scsi_softirq_done+0xa8/0x140
[ 638.955248] [<ffffffff8130e32b>] blk_done_softirq+0x7b/0x90
[ 638.957116] [<ffffffff8104fddd>] __do_softirq+0xfd/0x330
[ 638.958987] [<ffffffff810b964f>] ? __lock_release+0x6f/0x100
[ 638.960861] [<ffffffff8174a5cc>] call_softirq+0x1c/0x30
[ 638.962724] [<ffffffff81004c7d>] do_softirq+0x8d/0xc0
[ 638.964565] [<ffffffff8105024e>] irq_exit+0x10e/0x150
[ 638.966390] [<ffffffff8174ad4a>] smp_apic_timer_interrupt+0x4a/0x60
[ 638.968223] [<ffffffff817499af>] apic_timer_interrupt+0x6f/0x80
[ 638.970079] <EOI> [<ffffffff810b964f>] ? __lock_release+0x6f/0x100
[ 638.971899] [<ffffffff8173fa6a>] ? _raw_spin_unlock_irq+0x3a/0x50
[ 638.973691] [<ffffffff8173fa60>] ? _raw_spin_unlock_irq+0x30/0x50
[ 638.975475] [<ffffffff81562393>] md_set_badblocks+0x1f3/0x4a0
[ 638.977243] [<ffffffff81566e07>] rdev_set_badblocks+0x27/0x80
[ 638.978988] [<ffffffffa00d97bb>] raid5_end_read_request+0x36b/0x4e0 [raid456]
[ 638.980723] [<ffffffff811b5a1d>] bio_endio+0x1d/0x40
[ 638.982463] [<ffffffff81304ff3>] req_bio_endio.isra.65+0x83/0xa0
[ 638.984214] [<ffffffff81306b9f>] blk_update_request+0x7f/0x350
[ 638.985967] [<ffffffff81306ea1>] blk_update_bidi_request+0x31/0x90
[ 638.987710] [<ffffffff813085e0>] __blk_end_bidi_request+0x20/0x50
[ 638.989439] [<ffffffff8130862f>] __blk_end_request_all+0x1f/0x30
[ 638.991149] [<ffffffff81308746>] blk_peek_request+0x106/0x250
[ 638.992861] [<ffffffff814a62a9>] ? scsi_kill_request.isra.32+0xe9/0x130
[ 638.994561] [<ffffffff814a633a>] scsi_request_fn+0x4a/0x3d0
[ 638.996251] [<ffffffff813040a7>] __blk_run_queue+0x37/0x50
[ 638.997900] [<ffffffff813045af>] blk_run_queue+0x2f/0x50
[ 638.999553] [<ffffffff814a5750>] scsi_run_queue+0xe0/0x1c0
[ 639.001185] [<ffffffff814a7721>] scsi_run_host_queues+0x21/0x40
[ 639.002798] [<ffffffff814a2e87>] scsi_restart_operations+0x177/0x200
[ 639.004391] [<ffffffff814a4fe9>] scsi_error_handler+0xc9/0xe0
[ 639.005996] [<ffffffff814a4f20>] ? scsi_unjam_host+0xd0/0xd0
[ 639.007600] [<ffffffff81072f6b>] kthread+0xdb/0xe0
[ 639.009205] [<ffffffff81072e90>] ? flush_kthread_worker+0x170/0x170
[ 639.010821] [<ffffffff81748cac>] ret_from_fork+0x7c/0xb0
[ 639.012437] [<ffffffff81072e90>] ? flush_kthread_worker+0x170/0x170
This bug was introduced in commit 2e8ac30312973dd20e68073653
(the first time rdev_set_badblocks was called from interrupt context),
so this patch is appropriate for 3.5 and subsequent kernels.
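The suggested fix boils down to the following pattern (a hedged sketch):
        unsigned long flags;

        write_seqlock_irqsave(&bb->lock, flags);
        /* ... record the bad blocks ... */
        write_sequnlock_irqrestore(&bb->lock, flags);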
Cc: <stable@vger.kernel.org> (3.5+)
Signed-off-by: Bian Yu <bianyu@kedacom.com>
Reviewed-by: Jianpeng Ma <majianpeng@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
|
|
Since commit 7ceb17e87bde79d285a8b988cfed9eaeebe60b86
("md: Allow devices to be re-added to a read-only array"),
spares are activated on a read-only array. For the raid1 and raid10
personalities this causes not-in-sync devices to be marked in-sync
without checking whether recovery has finished.
If a read-only array is degraded and one of its devices is not in-sync
(because the array has been only partially recovered), recovery will be skipped.
This patch adds a check whether recovery has finished before marking a device
in-sync for the raid1 and raid10 personalities. For the raid5 personality
such a condition is already present (at raid5.c:6029).
Bug was introduced in 3.10 and causes data corruption.
Cc: stable@vger.kernel.org
Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
Signed-off-by: Lukasz Dorau <lukasz.dorau@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
|
|
Commit 8a07eb0a50 ("sctp: Add ASCONF operation on the single-homed host")
implemented the possible use of IPv4 addresses with a non SCTP_ADDR_SRC state
as source address when sending ASCONF (ADD) packets, but the IPv6 part of
that was not implemented in 8a07eb0a50. Therefore, as this is not restricted
to IPv4 only, fix this up to allow the same for IPv6 addresses in SCTP.
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Michio Honda <micchie@sfc.wide.ad.jp>
Acked-by: Michio Honda <micchie@sfc.wide.ad.jp>
Acked-by: Vlad Yasevich <vyasevich@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Pablo Neira Ayuso says:
====================
The following patchset contains three netfilter fixes for your net
tree, they are:
* A couple of fixes to resolve info leak to userspace due to uninitialized
memory area in ulogd, from Mathias Krause.
* Fix instruction ordering issues that may lead to the access of
uninitialized data in x_tables. The problem involves the table update
(producer) and the main packet matching (consumer) routines. Detected in
SMP ARMv7, from Will Deacon.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
We use u16 for voltage values throughout the driver so switch
the table values to a u16 as well. Fixes an incompatible
cast error in ci_patch_clock_voltage_limits_with_vddc_leakage()
picked up by coverity.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
|
|
May cause stability problems on some boards.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
|
|
Use the driver calculated CTS and N values rather than
having hardware generate them. This allows us to use
the modeline pixel clock rather than the actual pll clock
when setting up the dto for audio. Fixes problems with
audio playback rate on certain asics if the pll clock
does not match the pixel clock exactly.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
|
|
Signed-off-by: Denis Ciocca <denis.ciocca@st.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
|
|
ti_adc_dt_ids is always compiled in. Hence of_match_ptr is not
needed.
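The change boils down to this kind of diff (a hedged sketch, driver structure
abbreviated):
static struct platform_driver tiadc_driver = {
        .driver = {
                .name           = "ti_am335x_adc",      /* name assumed for illustration */
                .of_match_table = ti_adc_dt_ids,        /* was: of_match_ptr(ti_adc_dt_ids) */
        },
};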
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
|
|
nau7802_dt_ids is always compiled in. Hence of_match_ptr is not
needed.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
|
|
of_twl6030_match_tbl is always compiled in. Hence of_match_ptr is
not necessary.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
|