|
As the first step towards allowing quiescent-state forcing to be
preemptible, this commit moves RCU quiescent-state forcing into the
same kthread that is now used to initialize and clean up after grace
periods. This is yet another step towards keeping scheduling
latency down to a dull roar.
Updated to change from raw_spin_lock_irqsave() to raw_spin_lock_irq()
and to remove the now-unused rcu_state structure fields as suggested by
Peter Zijlstra.
Reported-by: Mike Galbraith <mgalbraith@suse.de>
Reported-by: Dimitri Sivanich <sivanich@sgi.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
The fields in the rcu_state structure that are protected by the
root rcu_node structure's ->lock can share a cache line with the
fields protected by ->onofflock. This can result in excessive
memory contention on large systems, so this commit applies
____cacheline_internodealigned_in_smp to the ->onofflock field in
order to segregate them.
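A minimal sketch of the resulting layout (field names from the text above;
the rest of the structure is elided and assumed):

    struct rcu_state {
            /* ... fields protected by the root rcu_node structure's ->lock ... */

            /* Push ->onofflock (and the fields it protects) onto its own
             * internode cache line, away from the ->lock-protected fields. */
            raw_spinlock_t onofflock ____cacheline_internodealigned_in_smp;

            /* ... fields protected by ->onofflock ... */
    };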
Signed-off-by: Dimitri Sivanich <sivanich@sgi.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Tested-by: Dimitri Sivanich <sivanich@sgi.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
|
|
In kernels built with CONFIG_RCU_FAST_NO_HZ=y, CPUs can accumulate a
large number of lazy callbacks, which as the name implies will be slow
to be invoked. This can be a problem on small-memory systems, where the
default 6-second sleep for CPUs having only lazy RCU callbacks could well
be fatal. This commit therefore installs an OOM handler that ensures that
every CPU with lazy callbacks has at least one non-lazy callback, in turn
ensuring timely advancement for these callbacks.
Updated to fix bug that disabled OOM killing, noted by Lai Jiangshan.
Updated to push the for_each_rcu_flavor() loop into rcu_oom_notify_cpu(),
thus reducing the number of IPIs, as suggested by Steven Rostedt. Also
updated to make the for_each_online_cpu() loop preemptible. (Later, it might
be good to use smp_call_function(), as suggested by Peter Zijlstra.)
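A rough sketch of such an OOM handler, assuming a helper rcu_oom_notify_cpu()
that makes one of the invoking CPU's callbacks non-lazy (an illustration, not
the exact kernel code):

    static int rcu_oom_notify(struct notifier_block *self,
                              unsigned long unused, void *nfreed)
    {
            int cpu;

            get_online_cpus();
            for_each_online_cpu(cpu) {
                    /* IPI each CPU so rcu_oom_notify_cpu() can post a
                     * non-lazy callback, ending the long lazy-only sleeps. */
                    smp_call_function_single(cpu, rcu_oom_notify_cpu, NULL, 1);
                    cond_resched();         /* keep this loop preemptible */
            }
            put_online_cpus();
            return NOTIFY_OK;
    }

    static struct notifier_block rcu_oom_nb = {
            .notifier_call = rcu_oom_notify,
    };

    static int __init rcu_register_oom_notifier(void)
    {
            register_oom_notifier(&rcu_oom_nb);
            return 0;
    }
    early_initcall(rcu_register_oom_notifier);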
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Tested-by: Sasha Levin <levinsasha928@gmail.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
|
|
Earlier versions of RCU invoked the RCU core from the CPU_DYING notifier
in order to note a quiescent state for the outgoing CPU. Because the
CPU is marked "offline" during the execution of the CPU_DYING notifiers,
the RCU core had to tolerate being invoked from an offline CPU. However,
commit b1420f1c (Make rcu_barrier() less disruptive) left only tracing
code in the CPU_DYING notifier, so the RCU core need no longer execute
on offline CPUs. This commit therefore enforces this restriction.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
|
|
The rcu_gp_kthread() function is too large and furthermore needs to
have the force_quiescent_state() code pulled in. This commit therefore
breaks up rcu_gp_kthread() into rcu_gp_init() and rcu_gp_cleanup().
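The resulting structure looks roughly like the sketch below; the wait
condition and the quiescent-state wait are simplified assumptions, not the
exact kernel code:

    static int rcu_gp_kthread(void *arg)
    {
            struct rcu_state *rsp = arg;

            for (;;) {
                    /* Sleep until someone needs a new grace period. */
                    wait_event_interruptible(rsp->gp_wq, rsp->gp_flags);
                    rcu_gp_init(rsp);       /* set up the new grace period */
                    /* ... wait for all CPUs to report quiescent states ... */
                    rcu_gp_cleanup(rsp);    /* mark it completed and clean up */
            }
    }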
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
|
|
RCU grace-period cleanup is currently carried out with interrupts
disabled, which can result in excessive latency spikes on large systems
(many hundreds or thousands of CPUs). This patch therefore makes the
RCU grace-period cleanup be preemptible, including voluntary preemption
points, which should eliminate those latency spikes. Similar spikes from
forcing of quiescent states will be dealt with similarly by later patches.
Updated to replace uses of spin_lock_irqsave() with spin_lock_irq(), as
suggested by Peter Zijlstra.
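For example, the per-node cleanup pass can drop the lock and offer a voluntary
preemption point on each iteration, roughly as follows (a sketch; the real
loop updates more per-node state):

    rcu_for_each_node_breadth_first(rsp, rnp) {
            raw_spin_lock_irq(&rnp->lock);
            rnp->completed = rsp->gpnum;    /* record the newly completed GP */
            raw_spin_unlock_irq(&rnp->lock);
            cond_resched();                 /* voluntary preemption point */
    }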
Reported-by: Mike Galbraith <mgalbraith@suse.de>
Reported-by: Dimitri Sivanich <sivanich@sgi.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
|
|
As a first step towards allowing grace-period cleanup to be preemptible,
this commit moves the RCU grace-period cleanup into the same kthread
that is now used to initialize grace periods. This is needed to keep
scheduling latency down to a dull roar.
[ paulmck: Get rid of stray spin_lock_irqsave() calls. ]
Reported-by: Mike Galbraith <mgalbraith@suse.de>
Reported-by: Dimitri Sivanich <sivanich@sgi.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
|
|
RCU grace-period initialization is currently carried out with interrupts
disabled, which can result in 200-microsecond latency spikes on systems
on which RCU has been configured for 4096 CPUs. This patch therefore
makes the RCU grace-period initialization be preemptible, which should
eliminate those latency spikes. Similar spikes from grace-period cleanup
and the forcing of quiescent states will be dealt with similarly by later
patches.
Reported-by: Mike Galbraith <mgalbraith@suse.de>
Reported-by: Dimitri Sivanich <sivanich@sgi.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
|
|
The next step in reducing RCU's grace-period initialization latency on
large systems will make this initialization preemptible. Unfortunately,
making the grace-period initialization subject to interrupts (let alone
preemption) exposes the following race on systems whose rcu_node tree
contains more than one node:
1. CPU 31 starts initializing the grace period, including the
first leaf rcu_node structures, and is then preempted.
2. CPU 0 refers to the first leaf rcu_node structure, and notes
that a new grace period has started. It passes through a
quiescent state shortly thereafter, and informs the RCU core
of this rite of passage.
3. CPU 0 enters an RCU read-side critical section, acquiring
a pointer to an RCU-protected data item.
4. CPU 31 takes an interrupt whose handler removes the data item
referenced by CPU 0 from the data structure, and registers an
RCU callback in order to free it.
5. CPU 31 resumes initializing the grace period, including its
own rcu_node structure. It invokes rcu_start_gp_per_cpu(),
which advances all callbacks, including the one registered
in #4 above, to be handled by the current grace period.
6. The remaining CPUs pass through quiescent states and inform
the RCU core, but CPU 0 remains in its RCU read-side critical
section, still referencing the now-removed data item.
7. The grace period completes and all the callbacks are invoked,
including the one that frees the data item that CPU 0 is still
referencing. Oops!!!
One way to avoid this race is to remove grace-period acceleration from
rcu_start_gp_per_cpu(). Now, the only reason for this acceleration was
to allow CPUs bringing RCU out of idle state to have their callbacks
invoked after only one grace period, rather than the two grace periods
that would otherwise be required. But this acceleration does not
work when RCU grace-period initialization is moved to a kthread because
the CPU posting the callback is no longer necessarily the CPU that is
initializing the resulting grace period.
This commit therefore removes this now-pointless (and soon to be dangerous)
grace-period acceleration, thus avoiding the above race.
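The acceleration being removed amounted to something like the following (an
assumed sketch of the rcu_start_gp_per_cpu() logic, not the literal diff):

    /* Advance all of this CPU's callbacks, even brand-new ones, so that
     * they wait only for the grace period now being initialized. */
    rdp->nxttail[RCU_NEXT_READY_TAIL] = rdp->nxttail[RCU_NEXT_TAIL];
    rdp->nxttail[RCU_WAIT_TAIL] = rdp->nxttail[RCU_NEXT_TAIL];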
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
|
|
As the first step towards allowing grace-period initialization to be
preemptible, this commit moves the RCU grace-period initialization
into its own kthread. This is needed to keep large-system scheduling
latency at reasonable levels.
Also change raw_spin_lock_irqsave() to raw_spin_lock_irq() as suggested
by Peter Zijlstra in review comments.
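The locking change looks roughly like this illustration (not the exact diff):
the kthread always runs with interrupts enabled, so saving and restoring the
interrupt state is unnecessary.

    /* Before: the caller might already have interrupts disabled. */
    raw_spin_lock_irqsave(&rnp->lock, flags);
    /* ... initialize this rcu_node structure ... */
    raw_spin_unlock_irqrestore(&rnp->lock, flags);

    /* After: initialization runs in the kthread, interrupts known enabled. */
    raw_spin_lock_irq(&rnp->lock);
    /* ... initialize this rcu_node structure ... */
    raw_spin_unlock_irq(&rnp->lock);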
Reported-by: Mike Galbraith <mgalbraith@suse.de>
Reported-by: Dimitri Sivanich <sivanich@sgi.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
|
|
Each grace period is supposed to have at least one callback waiting
for that grace period to complete. However, if CONFIG_NO_HZ=n, an
extra callback-free grace period is no big problem -- it will chew up
a tiny bit of CPU time, but it will complete normally. In contrast,
CONFIG_NO_HZ=y kernels have the potential for all the CPUs to go to
sleep indefinitely, in turn indefinitely delaying completion of the
callback-free grace period. Given that nothing is waiting on this grace
period, this is also not a problem.
That is, unless RCU CPU stall warnings are also enabled, as they are
in recent kernels. In this case, if a CPU wakes up after at least one
minute of inactivity, an RCU CPU stall warning will result. The reason
that no one noticed until quite recently is that most systems have enough
OS noise that they will never remain absolutely idle for a full minute.
But there are some embedded systems with cut-down userspace configurations
that consistently get into this situation.
All this raises the question of exactly how a callback-free grace period
gets started in the first place. This can happen because CPUs do not
necessarily agree on which grace period is in progress. If a CPU still
believes that the grace period that just completed is ongoing, it will
conclude that it has callbacks that need to wait for another grace period,
even though the grace period they were waiting for has just completed.
This CPU can therefore erroneously
decide to start a new grace period. Note that this can happen in
TREE_RCU and TREE_PREEMPT_RCU even on a single-CPU system: Deadlock
considerations mean that the CPU that detected the end of the grace
period is not necessarily officially informed of this fact for some time.
Once this CPU notices that the earlier grace period completed, it will
invoke its callbacks. It then won't have any callbacks left. If no
other CPU has any callbacks, we now have a callback-free grace period.
This commit therefore makes CPUs check more carefully before starting a
new grace period. This new check relies on an array of tail pointers
into each CPU's list of callbacks. If the CPU is up to date on which
grace periods have completed, it checks to see if any callbacks follow
the RCU_DONE_TAIL segment, otherwise it checks to see if any callbacks
follow the RCU_WAIT_TAIL segment. The reason that this works is that
the RCU_WAIT_TAIL segment will be promoted to the RCU_DONE_TAIL segment
as soon as the CPU is officially notified that the old grace period
has ended.
This change is to cpu_needs_another_gp(), which is called in a number
of places. The only one that really matters is in rcu_start_gp(), where
the root rcu_node structure's ->lock is held, which prevents any
other CPU from starting or completing a grace period, so that the
comparison that determines whether the CPU is missing the completion
of a grace period is stable.
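A sketch of the resulting check (RCU_WAIT_TAIL is the callback segment
immediately after RCU_DONE_TAIL; the exact kernel code differs in form but
not in effect):

    static int cpu_needs_another_gp(struct rcu_state *rsp, struct rcu_data *rdp)
    {
            /* Look past RCU_DONE_TAIL if this CPU already knows the old
             * grace period ended; otherwise look past RCU_WAIT_TAIL, since
             * its "waiting" callbacks are really already done. */
            int seg = (rdp->completed == ACCESS_ONCE(rsp->completed)) ?
                      RCU_DONE_TAIL : RCU_WAIT_TAIL;

            return !rcu_gp_in_progress(rsp) && *rdp->nxttail[seg] != NULL;
    }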
Reported-by: Becky Bruce <bgillbruce@gmail.com>
Reported-by: Subodh Nijsure <snijsure@grid-net.com>
Reported-by: Paul Walmsley <paul@pwsan.com>
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Tested-by: Paul Walmsley <paul@pwsan.com> # OMAP3730, OMAP4430
Cc: stable@vger.kernel.org
|
|
If we reset a vcpu on INIT, we have so far overwritten dr7 as provided by
KVM_SET_GUEST_DEBUG, and we also cleared switch_db_regs unconditionally.
Fix this by saving the dr7 used for guest debugging and calculating the
effective register value as well as switch_db_regs on any potential
change. This changes the focus of the set_guest_debug vendor op to
update_dp_bp_intercept.
Found while trying to stop on start_secondary.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
|
|
To emulate level triggered interrupts, add a resample option to
KVM_IRQFD. When specified, a new resamplefd is provided that notifies
the user when the irqchip has been resampled by the VM. This may, for
instance, indicate an EOI. Also in this mode, posting of an interrupt
through an irqfd only asserts the interrupt. On resampling, the
interrupt is automatically de-asserted prior to user notification.
This enables level triggered interrupts to be posted and re-enabled
from vfio with no userspace intervention.
All resampling irqfds can make use of a single irq source ID, so we
reserve a new one for this interface.
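From userspace, the setup looks roughly like the sketch below (error handling
omitted; vm_fd and gsi are assumed to already exist). Writing to irq_fd
asserts the interrupt; on resampling, the kernel de-asserts it and signals
resample_fd so the caller can re-enable the device interrupt.

    #include <sys/eventfd.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int setup_resample_irqfd(int vm_fd, unsigned int gsi,
                                    int *irq_fd, int *resample_fd)
    {
            struct kvm_irqfd irqfd = { 0 };

            *irq_fd = eventfd(0, 0);        /* write 1 here to assert the IRQ */
            *resample_fd = eventfd(0, 0);   /* becomes readable on resample */

            irqfd.fd = *irq_fd;
            irqfd.gsi = gsi;
            irqfd.flags = KVM_IRQFD_FLAG_RESAMPLE;
            irqfd.resamplefd = *resample_fd;
            return ioctl(vm_fd, KVM_IRQFD, &irqfd);
    }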
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
|
|
Pass a struct snd_dma_buffer pointer instead, so that they work
regardless of whether a real SG buffer is used.
This is a preliminary work for the HD-audio DSP loader code.
Signed-off-by: Ian Minett <ian_minett@creativelabs.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
Suggested by Jan Engelhardt.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Merge branch 'for-arm-soc-next' of git://git.linaro.org/people/ljones/linux-3.0-ux500 into next/dt
* 'for-arm-soc-next' of git://git.linaro.org/people/ljones/linux-3.0-ux500:
ARM: ux500: Fix SSP register address format
ARM: ux500: Apply tc3589x's GPIO/IRQ properties to HREF's DT
ARM: ux500: Remove redundant #gpio-cell properties from Snowball DT
ARM: ux500: Add all encompassing sound node to the HREF Device Tree
ARM: ux500: Add nodes for the MSP into the HREF Device Tree
ARM: ux500: Add all known I2C sub-device nodes to the HREF DT
ARM: ux500: Stop registering I2C sub-devices for HREF when DT is enabled
ARM: ux500: Stop registering Audio devices for HREF when DT is enabled
ARM: ux500: Add all encompassing sound node to the Snowball Device Tree
ARM: ux500: Add nodes for the MSP into Device Tree
ARM: ux500: Rename MSP board file to something more meaningful
ARM: ux500: Remove platform registration of MSP devices
ARM: ux500: Stop registering the MOP500 Audio driver from platform code
ARM: ux500: Pass MSP DMA platform data though AUXDATA
ARM: ux500: Fork MSP platform registration for step-by-step DT enablement
ARM: ux500: Add AB8500 CODEC node to DB8500 Device Tree
ARM: ux500: Clean-up MSP platform code
ARM: ux500: Pass SDI DMA information though AUX_DATA to MMCI
ARM: ux500: Add UART support to the HREF Device Tree
ARM: ux500: Add skeleton Device Tree for the HREF reference board
...
+ sync to v3.6-rc6
|
|
|
|
Signed-off-by: Len Brown <len.brown@intel.com>
|
|
When recording the number of SYNACK retransmits for servers using TCP
Fast Open, fix the code to ensure that we copy over the retransmit
count from the request_sock after we receive the ACK that completes
the 3-way handshake.
The story here is similar to that of SYNACK RTT
measurements. Previously we were always doing this in
tcp_v4_syn_recv_sock(). However, for TCP Fast Open connections
tcp_v4_conn_req_fastopen() calls tcp_v4_syn_recv_sock() at the time we
receive the SYN. So for TFO we must copy the final SYNACK retransmit
count in tcp_rcv_state_process().
Note that copying over the SYNACK retransmit count will give us the
correct count since, as is mentioned in a comment in
tcp_retransmit_timer(), before we receive an ACK for our SYN-ACK a TFO
passive connection does not retransmit anything else (e.g., data or
FIN segments).
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This is unchanged version 20101221, plus a small bit in
DEFINE_ALTERNATE_TYPES to enable building with latest kernel headers.
This version finds dynamic tables exported by Linux in
/sys/firmware/acpi/tables/dynamic
Signed-off-by: Len Brown <len.brown@intel.com>
|
|
This is unchanged version 20071116, plus a small bit in
DEFINE_ALTERNATE_TYPES to enable building with latest kernel headers.
Signed-off-by: Len Brown <len.brown@intel.com>
|
|
This is unchanged version 20070714, plus a small bit in
DEFINE_ALTERNATE_TYPES to enable building with latest kernel headers.
Signed-off-by: Len Brown <len.brown@intel.com>
|
|
This is unchanged version 20060606, plus a small bit in
DEFINE_ALTERNATE_TYPES to enable building with latest kernel headers.
Signed-off-by: Len Brown <len.brown@intel.com>
|
|
This is unchanged version 20051111, plus a small bit in
DEFINE_ALTERNATE_TYPES to enable building with latest kernel headers.
Signed-off-by: Len Brown <len.brown@intel.com>
|
|
we need to grab the mutex before the reference counter reaches 0
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Normally we deal with lock_mount()/umount races by checking that the
mountpoint-to-be is still in our namespace after lock_mount() has
been done. However, do_add_mount() skips that check when called
with MNT_SHRINKABLE in flags (i.e. from finish_automount()). The
reason is that ->mnt_ns may be a temporary namespace created exactly
to contain automounts a-la NFS4 referral handling. It's not the
namespace of the caller, though, so check_mnt() would fail here.
We still need to check that ->mnt_ns is non-NULL in that case,
though.
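The added check is roughly the following (a sketch assuming it sits in
do_add_mount() after lock_mount(); details may differ from the actual patch):

    err = -EINVAL;
    if (unlikely(!check_mnt(real_mount(path->mnt)))) {
            /* Acceptable only for automounts done in a private namespace... */
            if (!(mnt_flags & MNT_SHRINKABLE))
                    goto unlock;
            /* ...and even then ->mnt_ns must not have gone away under us. */
            if (!real_mount(path->mnt)->mnt_ns)
                    goto unlock;
    }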
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
* stable/late-swiotlb.v3.3:
xen/swiotlb: Fix compile warnings when using plain integer instead of NULL pointer.
xen/swiotlb: Remove functions not needed anymore.
xen/pcifront: Use Xen-SWIOTLB when initting if required.
xen/swiotlb: For early initialization, return zero on success.
xen/swiotlb: Use the swiotlb_late_init_with_tbl to init Xen-SWIOTLB late when PV PCI is used.
xen/swiotlb: Move the error strings to its own function.
xen/swiotlb: Move the nr_tbl determination in its own function.
swiotlb: add the late swiotlb initialization function with iotlb memory
xen/swiotlb: With more than 4GB on 64-bit, disable the native SWIOTLB.
xen/swiotlb: Simplify the logic.
Conflicts:
arch/x86/xen/pci-swiotlb-xen.c
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
|
|
|
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
|
|
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
|
|
We're now using isa_virt_to_bus(), and there really
isn't a generic and consistent test for whether a
platform provides this interface or not.
This driver is also for an x86-only device.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The platform data was moved, but this file was introduced in parallel
so didn't get caught in the sweeping changes. Fix it up now.
Signed-off-by: Olof Johansson <olof@lixom.net>
|
|
This moves a few of the newly introduced dtb targets to the common
dts/Makefile instead of the per-platform file.
Signed-off-by: Olof Johansson <olof@lixom.net>
|
|
Merge branch 'kirkwood/drivers' of git://git.infradead.org/users/jcooper/linux into late/kirkwood
From Jason Cooper:
New drivers:
- pinctrl (dove, kirkwood, mvebu)
- gpio (mvebu)
* 'kirkwood/drivers' of git://git.infradead.org/users/jcooper/linux:
arm: mvebu: add gpio support in defconfig
arm: mvebu: add DT information for GPIO banks on Armada 370 and XP
arm: mvebu: use GPIO support now that a driver is available
Documentation: add description of DT binding for the gpio-mvebu driver
gpio: introduce gpio-mvebu driver for Marvell SoCs
arm: mvebu: select the pinctrl drivers for Armada 370 and Armada XP platforms
arm: mvebu: split Kconfig options for Armada 370 and XP
ARM: mvebu: adjust Armada XP evaluation board DTS
ARM: mvebu: Add pinctrl support to Armada 370 SoC
ARM: mvebu: Add pinctrl support to Armada XP SoCs
pinctrl: mvebu: add pinctrl driver for Armada XP
pinctrl: mvebu: add pinctrl driver for Armada 370
pinctrl: mvebu: kirkwood pinctrl driver
pinctrl: mvebu: dove pinctrl driver
pinctrl: mvebu: pinctrl driver core
Signed-off-by: Olof Johansson <olof@lixom.net>
|
|
Merge branch 'kirkwood/addr_decode' of git://git.infradead.org/users/jcooper/linux into late/kirkwood
* 'kirkwood/addr_decode' of git://git.infradead.org/users/jcooper/linux:
arm: mvebu: add address decoding controller to the DT
arm: mvebu: add basic address decoding support to Armada 370/XP
arm: plat-orion: make bridge_virt_base non-const to support DT use case
arm: plat-orion: introduce PLAT_ORION_LEGACY hidden config option
arm: plat-orion: use void __iomem pointers for addr-map functions
arm: plat-orion: use void __iomem pointers for time functions
arm: plat-orion: use void __iomem pointers for MPP functions
arm: plat-orion: use void __iomem pointers for UART registration functions
arm: mach-mvebu: use IOMEM() for base address definitions
arm: mach-orion5x: use IOMEM() for base address definitions
arm: mach-mv78xx0: use IOMEM() for base address definitions
arm: mach-kirkwood: use IOMEM() for base address definitions
arm: mach-dove: use IOMEM() for base address definitions
arm: mach-orion5x: use plus instead of or for address definitions
arm: mach-mv78xx0: use plus instead of or for address definitions
arm: mach-kirkwood: use plus instead of or for address definitions
arm: mach-dove: use plus instead of or for address definitions
This branch had quite a few conflicts, in particular with the PCI static
map rework from Rob Herring, and a few other context conflicts due to
changes in Kconfig, etc.
I fixed up conflicts in:
arch/arm/Kconfig
arch/arm/mach-dove/common.c
arch/arm/mach-dove/include/mach/dove.h
arch/arm/mach-kirkwood/common.c
arch/arm/mach-kirkwood/include/mach/kirkwood.h
arch/arm/mach-mv78xx0/common.c
arch/arm/mach-mv78xx0/include/mach/mv78xx0.h
arch/arm/mach-orion5x/common.c
arch/arm/mach-orion5x/include/mach/orion5x.h
Signed-off-by: Olof Johansson <olof@lixom.net>
|
|
Exceptions can now be matched and we can branch according to the
possible cases:
a. match in the set if the element is not flagged as "nomatch"
b. match in the set if the element is flagged with "nomatch"
c. no match
i.e.
iptables ... -m set --match-set ... -j ...
iptables ... -m set --match-set ... --nomatch-entries -j ...
...
Signed-off-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
|
|
Signed-off-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
|
|
Signed-off-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
|
|
Now it is possible to set up a single hash:net,iface type of set and
a single ip6?tables match which covers all egress/ingress filtering.
Signed-off-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
|
|
Signed-off-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
|
|
Merge branch 'kirkwood/cleanup' of git://git.infradead.org/users/jcooper/linux into late/kirkwood
From Jason Cooper:
Misc:
- trim includes for board-dnskw.c
* 'kirkwood/cleanup' of git://git.infradead.org/users/jcooper/linux:
ARM: kirkwood: Trim excess #includes in board-dnskw.c
|
|
Merge branch 'kirkwood/dt' of git://git.infradead.org/users/jcooper/linux into late/kirkwood
From Jason Cooper:
New bindings:
- iconnect nand and keys
- mv_cesa
- gpio-fan
* 'kirkwood/dt' of git://git.infradead.org/users/jcooper/linux:
ARM: kirkwood: Use devicetree to define DNS-32[05] fan
hwmon: Add devicetree bindings to gpio-fan
Crypto: CESA: Add support for DT based instantiation.
ARM: Kirkwood: Describe iconnect nand in DT.
ARM: Kirkwood: Describe iconnect keys in DT.
|
|
Merge branch 'kirkwood/defconfig' of git://git.infradead.org/users/jcooper/linux into late/kirkwood
From Jason Cooper:
defconfig:
- update kirkwood_defconfig via 'make oldconfig'
- Add all Kirkwood DT boards to the defconfig
- enable SERIAL_OF_PLATFORM and ORION_WATCHDOG in kirkwood_defconfig
* 'kirkwood/defconfig' of git://git.infradead.org/users/jcooper/linux:
ARM: Kirkwood: add DT boards to defconfig
ARM: Kirkwood: update defconfig
|
|
Merge branch 'kirkwood/boards' of git://git.infradead.org/users/jcooper/linux into late/kirkwood
* 'kirkwood/boards' of git://git.infradead.org/users/jcooper/linux:
ARM: Dove: allow PCI to be disabled
ARM: dove: SolidRun CuBox DT
ARM: dove: add device tree descriptors
ARM: dove: add device tree based machine descriptor
ARM: dove: add crypto engine
ARM: dove: add clock gating control
ARM: dove: unify clock setup
ARM: initial DTS support for km_kirkwood
arm: add documentation describing Marvell families of SoC
ARM: kirkwood: DT descriptor for Seagate FreeAgent Dockstar
ARM: kirkwood: DT board setup for Seagate FreeAgent Dockstar
ARM: Kirkwood: Iomega ix2-200 DT support
Context conflicts in arch/arm/Kconfig and arch/arm/mach-dove/common.c.
The new device trees added to arch/arm/mach-kirkwood/Makefile.boot are
kept and dealt with in a separate changeset, since moving them out to
the new Makefile in this merge commit doesn't work well.
Signed-off-by: Olof Johansson <olof@lixom.net>
|
|
By Arnd Bergmann (15) and David Brown (1)
* next/multiplatform:
ARM: msm: Move core.h contents into common.h
ARM: spear: move platform_data definitions
ARM: samsung: move platform_data definitions
ARM: orion: move platform_data definitions
ARM: nomadik: move platform_data definitions
ARM: w90x900: move platform_data definitions
ARM: vt8500: move platform_data definitions
ARM: tegra: move sdhci platform_data definition
ARM: sa1100: move platform_data definitions
ARM: pxa: move platform_data definitions
ARM: netx: move platform_data definitions
ARM: msm: move platform_data definitions
ARM: imx: move platform_data definitions
ARM: ep93xx: move platform_data definitions
ARM: davinci: move platform_data definitions
ARM: at91: move platform_data definitions
|
|
By Arnd Bergmann (21) and Wei Yongjun (1)
via Olof Johansson (2) and Haojian Zhuang (1)
* next/cleanup: (22 commits)
ARM: mmp: using for_each_set_bit to simplify the code
net: seeq: use __iomem pointers for MMIO
video: da8xx-fb: use __iomem pointers for MMIO
scsi: eesox: use __iomem pointers for MMIO
serial: ks8695: use __iomem pointers for MMIO
input: rpcmouse: use __iomem pointers for MMIO
ARM: samsung: use __iomem pointers for MMIO
ARM: spear13xx: use __iomem pointers for MMIO
ARM: sa1100: use __iomem pointers for MMIO
ARM: prima2: use __iomem pointers for MMIO
ARM: nomadik: use __iomem pointers for MMIO
ARM: msm: use __iomem pointers for MMIO
ARM: lpc32xx: use __iomem pointers for MMIO
ARM: ks8695: use __iomem pointers for MMIO
ARM: ixp4xx: use __iomem pointers for MMIO
ARM: iop32x: use __iomem pointers for MMIO
ARM: iop13xx: use __iomem pointers for MMIO
ARM: integrator: use __iomem pointers for MMIO
ARM: imx: use __iomem pointers for MMIO
ARM: ebsa110: use __iomem pointers for MMIO
...
|
|
When PPPoE is running over a virtual ethernet interface (e.g., a
bonding interface) and the user tries to delete the interface while
the PPPoE state is ZOMBIE, the kernel will loop forever while
unregistering the net_device, because the reference count is never
decreased to zero; that should have been done with dev_put().
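A sketch of the fix idea (assumed to sit in the netdevice-notifier path,
pppoe_flush_dev(); the exact patch may differ): sockets in the ZOMBIE state
must also drop their device reference.

    if (po->pppoe_dev == dev &&
        sk->sk_state & (PPPOX_CONNECTED | PPPOX_BOUND | PPPOX_ZOMBIE)) {
            pppox_unbind_sock(sk);
            sk->sk_state = PPPOX_ZOMBIE;
            po->pppoe_dev = NULL;
            dev_put(dev);           /* release the reference on the device */
    }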
Signed-off-by: Xiaodong Xu <stid.smth@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Pull MIPS fixes from Ralf Baechle:
"Random fixes across arch/mips, essentially.
One fix for an issue in get_user_pages_fast() which previously was
discovered on x86, a miscalculation in the support for the MIPS MT
hardware multithreading support, the RTC support for the Malta and a
fix for a spurious interrupt issue that seems to bite only very
special Malta configurations."
* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus:
MIPS: Malta: Don't crash on spurious interrupt.
MIPS: Malta: Remove RTC Data Mode bootstrap breakage
MIPS: mm: Add compound tail page _mapcount when mapped
MIPS: CMP/SMTC: Fix tc_id calculation
|
|
A TCP Fast Open (TFO) passive connection must call both
tcp_check_req() and tcp_validate_incoming() for all incoming ACKs that
are attempting to complete the 3WHS.
This is needed to parallel all the action that happens for a non-TFO
connection, where for an ACK that is attempting to complete the 3WHS
we call both tcp_check_req() and tcp_validate_incoming().
For example, upon receiving the ACK that completes the 3WHS, we need
to call tcp_fast_parse_options() and update ts_recent based on the
incoming timestamp value in the ACK.
One symptom of the problem with the previous code was that for passive
TFO connections using TCP timestamps, the outgoing TS ecr values
ignored the incoming TS val value on the ACK that completed the 3WHS.
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Previously, when using TCP Fast Open a server would return from
tcp_check_req() before updating snt_synack based on TCP timestamp echo
replies and whether or not we've retransmitted the SYNACK. The result
was that (a) for TFO connections using timestamps we used an incorrect
baseline SYNACK send time (tcp_time_stamp of SYNACK send instead of
rcv_tsecr), and (b) for TFO connections that do not have TCP
timestamps but retransmit the SYNACK we took a SYNACK RTT sample when
we should not take a sample.
This fix merely moves the snt_synack update logic a bit earlier in the
function, so that connections using TCP Fast Open will properly do
these updates when the ACK for the SYNACK arrives.
Moving this snt_synack update logic means that with TCP_DEFER_ACCEPT
enabled we do a few instructions of wasted work on each bare ACK, but
that seems OK.
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|