Age | Commit message | Author |
|
The mcb bus' device member wasn't correctly initialized and thus wasn't placed
correctly into the driver model.
Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Andreas Werner <andreas.werner@men.de>
Tested-by: Andreas Werner <andreas.werner@men.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
This driver adds support for the STM CoreSight IP block, allowing any
system component (HW or SW) to log and aggregate messages via a
single entity.
The CoreSight STM exposes an application-defined number of channels
called stimulus ports. Configuration is done using entries in sysfs,
and channels are made available to userspace via configfs.
Signed-off-by: Pratik Patel <pratikp@codeaurora.org>
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Reviewed-by: Michael Williams <michael.williams@arm.com>
Signed-off-by: Chunyan Zhang <zhang.chunyan@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Some STM devices adjust software-assigned master numbers depending on
the trace source and its runtime state. This patch adds
a sysfs attribute to inform the trace-side software that master numbers
assigned to software sources will not match those in the STP stream,
so that, for example, the master/channel allocation policy can be adjusted
accordingly.
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/kishon/linux-phy into usb-testing
Kishon writes:
phy: for 4.7
*) Add a new PHY driver for USB2 PHY on Northstar SoC
*) Add support for Broadcom NS2 SATA3 PHY in existing
Broadcom SATA3 PHY driver
*) Add support for MIPI DPHYs in Exynos5420-compatible
(5420, 5422 and 5800) and Exynos5433 SoCs
*) Add support for USB3 PHY on mt2701
*) Add extcon support for Renesas R-car USB2 PHY driver
*) Misc cleanups
Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
|
|
When a USB driver is bound to an interface (either through probing or
by claiming it) or is unbound from an interface, the USB core always
disables Link Power Management during the transition and then
re-enables it afterward.  The reason is that the driver might want
to prevent hub-initiated link power transitions, in which case the HCD
would have to recalculate the various LPM parameters. This
recalculation takes place when LPM is re-enabled and the new
parameters are sent to the device and its parent hub.
However, if the driver does not want to prevent hub-initiated link
power transitions then none of this work is necessary. The parameters
don't need to be recalculated, and LPM doesn't need to be disabled and
re-enabled.
It turns out that disabling and enabling LPM can be time-consuming,
enough so that it interferes with user programs that want to claim and
release interfaces rapidly via usbfs. Since the usbfs kernel driver
doesn't set the disable_hub_initiated_lpm flag, we can speed things up
and get the user programs to work by leaving LPM alone whenever the
flag isn't set.
And while we're improving the way disable_hub_initiated_lpm gets used,
let's also fix its kerneldoc.
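For illustration, the flag in question is a bitfield in struct usb_driver;
a driver that does want to block hub-initiated LPM (and therefore still pays
the disable/enable cost) would set it roughly like this (sketch only; the
example_* names are hypothetical):
  #include <linux/usb.h>

  /* Leaving .disable_hub_initiated_lpm at 0 (as usbfs does) now avoids
   * the LPM disable/re-enable cycle on bind and unbind.
   */
  static struct usb_driver example_usb_driver = {
      .name                      = "example-usb",
      .probe                     = example_probe,      /* hypothetical */
      .disconnect                = example_disconnect, /* hypothetical */
      .id_table                  = example_ids,        /* hypothetical */
      .disable_hub_initiated_lpm = 1,
  };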
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Tested-by: Matthew Giassa <matthew@giassa.net>
CC: Mathias Nyman <mathias.nyman@intel.com>
CC: <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
In the sendmsg functions of the UDP, raw, ICMP and l2tp sockets, we use local
variables like hlimit, tclass, opt and dontfrag and pass them to corresponding
functions like ip6_make_skb, ip6_append_data and xxx_push_pending_frames.
This is not good practice and makes it hard to add new parameters.
This fix introduces a new struct ipcm6_cookie, similar to ipcm_cookie in
IPv4, to hold the above-mentioned variables, and we pass only a pointer
to this structure to the corresponding functions. This makes it easier
to add new parameters in the future and makes the functions cleaner.
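For reference, the new cookie looks roughly like this (a sketch; field types
and layout may differ slightly from the final patch):
  struct ipcm6_cookie {
      __s16                  hlimit;
      __s16                  tclass;
      __s8                   dontfrag;
      struct ipv6_txoptions *opt;
  };
Callers initialize the fields (typically hlimit = -1, tclass = -1,
dontfrag = -1) and hand a single pointer down to ip6_append_data() and
ip6_make_skb().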
Signed-off-by: Wei Wang <weiwan@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Hosts sending a lot of ACK packets exhibit high sock_wfree() cost
because of a cache line miss when testing SOCK_USE_WRITE_QUEUE.
We could move this flag close to sk_wmem_alloc, but it is better
to perform the atomic_sub_and_test() on a clean cache line,
as it avoids one extra bus transaction.
skb_orphan_partial() can also take a fast path for packets that either
are TCP acks or already went through another skb_orphan_partial().
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
We need to perform an additional check on the inner headers to determine if
we can offload the checksum for them. Previously this check didn't occur
so we would generate an invalid frame in the case of an IPv6 header
encapsulated inside of an IPv4 tunnel. To fix this I added a secondary
check to vxlan_features_check so that we can verify that we can offload the
inner checksum.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Whilst commit 9439eb3ab9d1 ("asm-generic: io: implement relaxed
accessor macros as conditional wrappers") makes the *_relaxed forms of
I/O accessors universally available to drivers, in cases where writeq()
is implemented via the io-64-nonatomic helpers, writeq_relaxed() will
end up falling back to writel() regardless of whether writel_relaxed()
is available (identically for s/write/read/).
Add corresponding relaxed forms of the nonatomic helpers to delegate
to the equivalent 32-bit accessors as appropriate. We also need to fix
io.h to avoid defining default relaxed variants if the basic accessors
themselves don't exist.
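For reference, a relaxed nonatomic helper then looks roughly like this
(a sketch of the lo-hi flavour; the hi-lo variant simply swaps the two
halves):
  static inline __u64 lo_hi_readq_relaxed(const volatile void __iomem *addr)
  {
      const volatile u32 __iomem *p = addr;
      u32 low, high;

      low = readl_relaxed(p);
      high = readl_relaxed(p + 1);

      return low + ((u64)high << 32);
  }

  static inline void lo_hi_writeq_relaxed(__u64 val, volatile void __iomem *addr)
  {
      writel_relaxed(val, addr);
      writel_relaxed(val >> 32, addr + 4);
  }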
CC: Christoph Hellwig <hch@lst.de>
CC: Darren Hart <dvhart@linux.intel.com>
CC: Hitoshi Mitake <mitake.hitoshi@lab.ntt.co.jp>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
* pci/dpc:
PCI: Add Downstream Port Containment driver
PCI: Add Downstream Port Containment portdrv service type
PCI: Widen portdrv service type from 4 bits to 8 bits
* pci/resource:
alpha/PCI: Call iomem_is_exclusive() for IORESOURCE_MEM, but not IORESOURCE_IO
PCI: Supply CPU physical address (not bus address) to iomem_is_exclusive()
* pci/thunderbolt:
thunderbolt: Fix double free of drom buffer
|
|
In the presence of inelastic flows and stress, we can call
fq_codel_drop() for every packet entering the fq_codel qdisc.
fq_codel_drop() is quite expensive, as it does a linear scan
of 4 KB of memory to find a fat flow.
Once found, it drops the oldest packet of this flow.
Instead of dropping a single packet, try to drop 50% of the backlog
of this fat flow, with a configurable limit of 64 packets per round.
TCA_FQ_CODEL_DROP_BATCH_SIZE is the new attribute to make this
limit configurable.
With this strategy the 4 KB search is amortized to a single cache line
per drop [1], so fq_codel_drop() no longer appears at the top of kernel
profiles in the presence of a few inelastic flows.
[1] Assuming a 64-byte cache line and 1024 buckets
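The batched drop boils down to something like the following (a rough sketch
that assumes the internal fq_codel structures and dequeue_head() helper;
accounting updates are omitted, not the exact kernel code):
  /* Scan once for the fattest flow, then drop up to max_packets of its
   * oldest packets, or half its backlog in bytes, whichever comes first.
   */
  unsigned int maxbacklog = 0, idx = 0, i, len = 0, threshold;
  struct sk_buff *skb;

  for (i = 0; i < q->flows_cnt; i++) {    /* the 4 KB linear scan */
      if (q->backlogs[i] > maxbacklog) {
          maxbacklog = q->backlogs[i];
          idx = i;
      }
  }
  threshold = maxbacklog >> 1;            /* ~50% of the fat flow */
  i = 0;
  do {
      skb = dequeue_head(&q->flows[idx]);
      len += qdisc_pkt_len(skb);
      kfree_skb(skb);
  } while (++i < max_packets && len < threshold);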
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Dave Taht <dave.taht@gmail.com>
Cc: Jonathan Morton <chromatix99@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: Dave Taht
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
'pci/host-imx6', 'pci/host-keystone', 'pci/host-mvebu', 'pci/host-rcar', 'pci/host-thunder' and 'pci/host-vmd' into next
* pci/host-armada:
PCI: armada: Add driver for Marvell Armada 7K/8K PCIe controller
dt-bindings: pci: add DT binding for Marvell Armada 7K/8K PCIe controller
* pci/host-designware:
PCI: designware: Remove incorrect RC memory base/limit configuration
PCI: designware: Move Root Complex setup code to dw_pcie_setup_rc()
* pci/host-hv:
PCI: hv: Report resources release after stopping the bus
* pci/host-imx6:
ARM: dts: imx6qp: Specify imx6qp version of PCIe core
PCI: imx6: Implement reset sequence for i.MX6+
PCI: imx6: Use enum instead of bool for variant indicator
PCI: imx6: Add DT property for link gen, default to Gen1
PCI: imx6: Add reset-gpio-active-high boolean property to DT
ARM: dts: imx6: Fix PCIe reset GPIO polarity on Toradex Apalis Ixora
PCI: imx6: Add initial imx6sx support
PCI: imx6: Factor out ref clock enable
Revert "PCI: imx6: Add support for active-low reset GPIO"
* pci/host-keystone:
PCI: keystone: Remove unnecessary goto statement
PCI: keystone: Add error IRQ handler
* pci/host-mvebu:
PCI: mvebu: Use SET_NOIRQ_SYSTEM_SLEEP_PM_OPS for mvebu_pcie_pm_ops
PCI: mvebu: Constify mvebu_pcie_pm_ops structure
* pci/host-rcar:
PCI: rcar: Select PCI_MSI_IRQ_DOMAIN
* pci/host-thunder:
PCI: thunder: Don't clobber read-only bits in bridge config registers
* pci/host-vmd:
PCI: Remove return values from pcie_port_platform_notify() and relatives
PCI/ACPI: Allow all PCIe services on non-ACPI host bridges
|
|
Add driver for the PCI Express Downstream Port Containment extended
capability. DPC is an optional capability to contain uncorrectable errors
below a port.
For more information on DPC, please see PCI Express Base Specification
Revision 4, section 7.31, or view the PCI-SIG DPC ECN here:
https://pcisig.com/sites/default/files/specification_documents/ECN_DPC_2012-02-09_finalized.pdf
When a DPC event is triggered, the hardware disables downstream links, so
the DPC driver schedules removal for all devices below this port. This may
happen concurrently with a PCIe hotplug driver if enabled. When all
downstream devices are removed and the link state transitions to disabled,
the DPC driver clears the DPC status and interrupt bits so the link may
retrain for a newly connected device.
[bhelgaas: clear (not set) DPC_CTL bits on remove, whitespace cleanup]
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Cc: Lukas Wunner <lukas@wunner.de>
|
|
Add the Downstream Port Containment (PCIE_PORT_SERVICE_DPC) portdrv service
type, available if the device has the DPC extended capability.
[bhelgaas: split to separate patch, changelog]
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
|
|
Currently the PWM core mixes the current PWM state with the per-platform
reference config (specified through the PWM lookup table, DT definition
or directly hardcoded in PWM drivers).
Create a struct pwm_args to store this reference configuration, so that
PWM users can differentiate between the current and reference
configurations.
Patch all places where pwm->args should be initialized. We keep the
pwm_set_polarity/period() calls until all PWM users are patched to use
pwm_args instead of pwm_get_period/polarity().
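For reference, the reference-config container and a typical consumer access
look roughly like this (sketch):
  struct pwm_args {
      unsigned int      period;    /* reference period, in ns */
      enum pwm_polarity polarity;  /* reference polarity */
  };

  /* A consumer reads the reference config instead of the current state: */
  struct pwm_args args;

  pwm_get_args(pwm, &args);
  pwm_config(pwm, duty_ns, args.period);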
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
[thierry.reding@gmail.com: reword kerneldoc comments]
Signed-off-by: Thierry Reding <thierry.reding@gmail.com>
|
|
The only call of arch_timer_get_timecounter (in KVM) has been removed.
Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
|
|
Currently, the firmware tables are parsed twice: once in the GIC
drivers and again when initializing the vGIC. This means code
duplication and makes it more tedious to add support for another
firmware table (like ACPI).
Use the recently introduced helper gic_get_kvm_info() to get
information about the virtual GIC.
With this change, the virtual GIC becomes agnostic to the firmware
table and KVM will be able to initialize the vGIC on ACPI.
Signed-off-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
|
|
Fill up the recently introduced gic_kvm_info with the hardware
information used for virtualization.
Signed-off-by: Julien Grall <julien.grall@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
|
|
For now, the firmware tables are parsed twice: once in the GIC
drivers and again when initializing the vGIC. This means code
duplication and makes it more tedious to add support for another
firmware table (like ACPI).
Introduce a new structure and set of helpers to get/set the virtual GIC
information. Also fill up the structure for GICv2.
Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
|
|
Currently, the firmware table is parsed by the virtual timer code in
order to retrieve the virtual timer interrupt. However, this is already
done by the arch timer driver.
To avoid code duplication, extend arch_timer_kvm_info to get the virtual
IRQ.
Note that the KVM code will be modified in a subsequent patch.
Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
|
|
Introduce a structure which is filled in by the arch timer driver and
used by the virtual timer in KVM.
The first member of this structure will be the timecounter. More members
will be added later.
A stub for the new helper isn't introduced because KVM requires the arch
timer for both ARM64 and ARM32.
The function arch_timer_get_timecounter is kept for the time being and
will be dropped in a subsequent patch.
Signed-off-by: Julien Grall <julien.grall@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
|
|
This patch implements the SS_AUTODISARM flag that can be OR-ed with
SS_ONSTACK when forming ss_flags.
When this flag is set, the sigaltstack will be disabled when entering
the signal handler; more precisely, after saving the sas to uc_stack.
When leaving the signal handler, the sigaltstack is restored from
uc_stack.
When this flag is used, it is safe to switch away from the signal handler
with swapcontext(). Without this flag, a subsequent signal would corrupt
the state of the switched-away signal handler.
To detect the support of this functionality, one can do:
err = sigaltstack(SS_DISABLE | SS_AUTODISARM);
if (err && errno == EINVAL)
unsupported();
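Spelled out as an actual sigaltstack(2) call, the probe looks roughly like
this (a sketch; SS_AUTODISARM needs recent kernel headers, and the probe
disables any currently installed alternate stack as a side effect):
  #include <errno.h>
  #include <signal.h>

  static int ss_autodisarm_supported(void)
  {
      stack_t ss = { .ss_flags = SS_DISABLE | SS_AUTODISARM };

      /* Old kernels treat the combined value as an invalid mode. */
      if (sigaltstack(&ss, NULL) == -1 && errno == EINVAL)
          return 0;
      return 1;
  }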
Signed-off-by: Stas Sergeev <stsp@list.ru>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Aleksa Sarai <cyphar@cyphar.com>
Cc: Amanieu d'Antras <amanieu@gmail.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Heinrich Schuchardt <xypron.glpk@gmx.de>
Cc: Jason Low <jason.low2@hp.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Moore <pmoore@redhat.com>
Cc: Pavel Emelyanov <xemul@parallels.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Shuah Khan <shuahkh@osg.samsung.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vladimir Davydov <vdavydov@parallels.com>
Cc: linux-api@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/1460665206-13646-4-git-send-email-stsp@list.ru
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
This patch adds SS_FLAG_BITS - the mask that separates sigaltstack
mode values from bit-flags. Since there are no bit-flags yet, the
mask is defined as 0. The flags are added by subsequent patches.
With every new flag, the mask should have the appropriate bit cleared.
This makes sure that if some flag is tried on a kernel that doesn't
support it, the -EINVAL error will be returned, because such a
flag will be treated as an invalid mode rather than a bit-flag.
That way the existence of particular features can be probed
at run-time.
This change was suggested by Andy Lutomirski:
https://lkml.org/lkml/2016/3/6/158
Signed-off-by: Stas Sergeev <stsp@list.ru>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Amanieu d'Antras <amanieu@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Pavel Emelyanov <xemul@parallels.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Shuah Khan <shuahkh@osg.samsung.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vladimir Davydov <vdavydov@parallels.com>
Cc: linux-api@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/1460665206-13646-3-git-send-email-stsp@list.ru
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Locally generated TCP GSO packets that have to go through a GRE/SIT/IPIP
tunnel also have to go through an expensive skb_unclone(), and
reallocating skb->head is a lot of work.
The test should really check whether a 'real clone' of the packet was done.
TCP does not care if the original gso_type is changed while the packet
travels in the stack.
This adds skb_header_unclone(), which is a variant of skb_unclone()
using the skb_header_cloned() check instead of skb_cloned().
This variant can probably be used from other points.
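For reference, the helper is roughly (sketch):
  static inline int skb_header_unclone(struct sk_buff *skb, gfp_t pri)
  {
      might_sleep_if(gfpflags_allow_blocking(pri));

      /* Only reallocate skb->head if the header is actually shared. */
      if (skb_header_cloned(skb))
          return pskb_expand_head(skb, 0, 0, pri);

      return 0;
  }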
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
- trans_timeout is incremented when the tx queue times out (tx watchdog).
- tx_maxrate is set via sysfs.
Moving tx_maxrate to the read-mostly part shrinks the struct by 64 bytes.
While at it, also move trans_timeout (it is out of place in the
'write-mostly' part).
Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add a new LINK_XSTATS_TYPE_BRIDGE attribute and implement the
RTM_GETSTATS callbacks for IFLA_STATS_LINK_XSTATS (fill_linkxstats and
get_linkxstats_size) in order to export the per-vlan stats.
The paddings were added because these fields will soon be needed for
per-port per-vlan stats (or something else if someone beats me to it),
so this avoids at least a few more netlink attributes later.
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add support for per-VLAN Tx/Rx statistics. Every global vlan context gets
allocated per-cpu stats which are then set in each per-port vlan context
for quick access. The br_allowed_ingress() common function is used to
account for Rx packets and the br_handle_vlan() common function is used
to account for Tx packets. Stats accounting is performed only if the
bridge-wide vlan_stats_enabled option is set either via sysfs or netlink.
A struct hole between vlan_enabled and vlan_proto is used for the new
option so it is in the same cache line. Currently it is binary (on/off)
but it is intentionally restricted to exactly 0 and 1 since other values
will be used in the future for different purposes (e.g. per-port stats).
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add callbacks to calculate the size and fill link extended statistics
which can be split into multiple messages and are dumped via the new
rtnl stats API (RTM_GETSTATS) with the IFLA_STATS_LINK_XSTATS attribute.
Also add that attribute to the idx mask check since it is expected to
be able to save state and resume dumping (e.g. future bridge per-vlan
stats will be dumped via this attribute and callbacks).
Each link type should nest its private attributes under the per-link-type
attribute. This allows any number of separate private attributes
and avoids one call to get the device's link type.
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch adds the new passive governor to the DEVFREQ framework. The
following governors are already present and used by DVFS (Dynamic Voltage and
Frequency Scaling) drivers. Each of them is used independently by a single
device driver, neither influencing other device drivers nor being affected
by them:
- ondemand / performance / powersave / userspace
The passive governor, by contrast, depends entirely on the operation of a
parent driver using one of the governors above and cannot decide a new
frequency by itself. It takes the new frequency decided by the parent
driver's governor and uses it to choose the appropriate frequency for its
own device driver. The passive governor needs the following information
from the device tree:
- the source clock and OPP tables
- the instance of the parent device
For example, there may be two or more devfreq device drivers which need to
change their source clock according to their utilization at runtime, but
which share the same power line (e.g., regulator). In that case, one
specific device driver operates as the parent with the ondemand governor,
and the remaining device drivers use the passive governor and follow the
parent device.
Suggested-by: Myungjoo Ham <myungjoo.ham@samsung.com>
Signed-off-by: Chanwoo Choi <cw00.choi@samsung.com>
[tjakobi: Reported RCU locking issue and cw00.choi fixed it]
Reported-by: Tobias Jakobi <tjakobi@math.uni-bielefeld.de>
[linux.amoon: Reported possible recursive locking and cw00.choi fixed it]
Reported-by: Anand Moon <linux.amoon@gmail.com>
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
Acked-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
|
|
This patch adds the new DEVFREQ_TRANSITION_NOTIFIER notifier to send a
notification when the frequency of a device is changed.
This notifier has the following two states:
- DEVFREQ_PRECHANGE : notified before changing the frequency of the device
- DEVFREQ_POSTCHANGE : notified after the frequency of the device has changed
This patch also adds a resource-managed function to release the resource
automatically when an error happens.
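A client of the new notifier would look roughly like this (a sketch; my_nb
is hypothetical, and the resource-managed variant is assumed to be the
devm_ form of the registration call):
  static int my_devfreq_notifier(struct notifier_block *nb,
                                 unsigned long event, void *ptr)
  {
      struct devfreq_freqs *freqs = ptr;

      switch (event) {
      case DEVFREQ_PRECHANGE:
          /* prepare for the switch from freqs->old to freqs->new */
          break;
      case DEVFREQ_POSTCHANGE:
          /* the device is now running at freqs->new */
          break;
      }
      return NOTIFY_OK;
  }

  /* registration against a devfreq device */
  ret = devfreq_register_notifier(devfreq, &my_nb, DEVFREQ_TRANSITION_NOTIFIER);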
Signed-off-by: Chanwoo Choi <cw00.choi@samsung.com>
[m.reichl and linux.amoon: Tested it on exynos4412-odroidu3 board]
Tested-by: Markus Reichl <m.reichl@fivetechno.de>
Tested-by: Anand Moon <linux.amoon@gmail.com>
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
Acked-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
|
|
This patch adds the new devfreq_get_devfreq_by_phandle() OF helper function,
which finds the instance of a devfreq device by using the "devfreq" phandle.
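Typical usage from a driver would be roughly (sketch):
  /* Resolve the devfreq instance referenced by the "devfreq" phandle
   * (index 0) in this device's DT node.
   */
  struct devfreq *parent_devfreq;

  parent_devfreq = devfreq_get_devfreq_by_phandle(&pdev->dev, 0);
  if (IS_ERR(parent_devfreq))
      return PTR_ERR(parent_devfreq);   /* may be -EPROBE_DEFER */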
Signed-off-by: Chanwoo Choi <cw00.choi@samsung.com>
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
[m.reichl and linux.amoon: Tested it on exynos4412-odroidu3 board]
Tested-by: Markus Reichl <m.reichl@fivetechno.de>
Tested-by: Anand Moon <linux.amoon@gmail.com>
Acked-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
|
|
Nothing sets TRACE_EVENT_FL_USE_CALL_FILTER anymore. Remove it.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tegra/linux into clk-next
Pull tegra clk driver changes from Thierry Reding:
This set of changes contains a bunch of cleanups and minor fixes along
with some new clocks, mainly on Tegra210, in preparation for supporting
DisplayPort and HDMI 2.0.
* tag 'tegra-for-4.7-clk' of git://git.kernel.org/pub/scm/linux/kernel/git/tegra/linux:
clk: tegra: dfll: Reformat CVB frequency table
clk: tegra: dfll: Properly clean up on failure and removal
clk: tegra: dfll: Make code more comprehensible
clk: tegra: dfll: Reference CVB table instead of copying data
clk: tegra: dfll: Update kerneldoc
clk: tegra: Fix PLL_U post divider and initial rate on Tegra30
clk: tegra: Initialize PLL_C to sane rate on Tegra30
clk: tegra: Fix pllre Tegra210 and add pll_re_out1
clk: tegra: Add sor_safe clock
clk: tegra: dpaux and dpaux1 are fixed factor clocks
clk: tegra: Add dpaux1 clock
clk: tegra: Use correct parent for dpaux clock
clk: tegra: Add fixed factor peripheral clock type
clk: tegra: Special-case mipi-cal parent on Tegra114
clk: tegra: Remove trailing blank line
clk: tegra: Constify peripheral clock registers
clk: tegra: Add interface to enable hardware control of SATA/XUSB PLLs
|
|
New method: ->iterate_shared(). Same arguments as in ->iterate(),
called with the directory locked only shared. Once all filesystems
switch, the old one will be gone.
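A filesystem opting in points its directory file_operations at the new
method, roughly like this (sketch; examplefs_readdir is hypothetical and
must be safe under a shared directory lock):
  const struct file_operations examplefs_dir_operations = {
      .llseek         = generic_file_llseek,
      .read           = generic_read_dir,
      .iterate_shared = examplefs_readdir,
      .fsync          = generic_file_fsync,
  };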
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
same as read() on regular files has, and for the same reason.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
ta-da!
The main issue is the lack of down_write_killable(), so the places
like readdir.c switched to plain inode_lock(); once killable
variants of rwsem primitives appear, that'll be dealt with.
The lockdep side also might need more work.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
If we *do* run into an in-lookup match, we need to wait for it to
cease being in-lookup. Fortunately, we do have unused space in
in-lookup dentries - d_lru is never looked at until it stops being
in-lookup.
So we can stash a pointer to a wait_queue_head from the stack frame of
the caller of ->lookup().  Some precautions are needed while
waiting, but it's not that hard - we do hold a reference to dentry
we are waiting for, so it can't go away. If it's found to be
in-lookup the wait_queue_head is still alive and will remain so
at least while ->d_lock is held. Moreover, the condition we
are waiting for becomes true at the same point where everything
on that wq gets woken up, so we can just add ourselves to the
queue once.
d_alloc_parallel() gets a pointer to wait_queue_head_t from its
caller; lookup_slow() adjusted, d_add_ci() taught to use
d_alloc_parallel() if the dentry passed to it happens to be an
in-lookup one (i.e. if it's been called from the parallel lookup).
That's pretty much it - all that remains is to switch ->i_mutex
to rwsem and have lookup_slow() take it shared.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
We will need to be able to check if there is an in-lookup
dentry with matching parent/name. Right now it's impossible,
but as soon as we start locking directories shared, such beasts
will appear.
Add a secondary hash for locating those. Hash chains go through
the same space where d_alias will be once it's not in-lookup anymore.
Search is done under the same bitlock we use for modifications -
with the primary hash we can rely on d_rehash() into the wrong
chain being the worst that could happen, but here the pointers are
buggered once it's removed from the chain. On the other hand,
the chains are not going to be long and normally we'll end up
adding to the chain anyway. That allows us to avoid bothering with
->d_lock when doing the comparisons - everything is stable until
removed from chain.
New helper: d_alloc_parallel(). Right now it allocates, verifies
that no hashed and in-lookup matches exist and adds to in-lookup
hash.
Returns ERR_PTR() for error, hashed match (in the unlikely case it's
been found) or new dentry. In-lookup matches trigger BUG() for
now; that will change in the next commit when we introduce waiting
for ongoing lookup to finish. Note that in-lookup matches won't be
possible until we actually go for shared locking.
lookup_slow() switched to use of d_alloc_parallel().
Again, these commits are separated only for making it easier to
review. All this machinery will start doing something useful only
when we go for shared locking; it's just that the combination is
too large for my taste.
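For orientation, the caller-side pattern around ->lookup() ends up looking
roughly like this (a sketch in the lookup_slow() style, not verbatim kernel
code):
  DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
  struct inode *dir = d_inode(parent);
  struct dentry *dentry, *old;

  dentry = d_alloc_parallel(parent, &name, &wq);
  if (IS_ERR(dentry))
      return dentry;
  if (!d_in_lookup(dentry))
      return dentry;              /* someone else finished the lookup */

  old = dir->i_op->lookup(dir, dentry, flags);
  d_lookup_done(dentry);          /* clear the in-lookup mark */
  if (old) {                      /* ->lookup() returned another dentry */
      dput(dentry);
      dentry = old;
  }
  return dentry;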
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
We'll need to verify that there's neither a hashed nor in-lookup
dentry with desired parent/name before adding to in-lookup set.
One possible solution would be to hold the parent's ->d_lock through
both checks, but while the in-lookup set is relatively small at any
time, dcache is not. And holding the parent's ->d_lock through
something like __d_lookup_rcu() would suck too badly.
So we leave the parent's ->d_lock alone, which means that we watch
out for the following scenario:
* we verify that there's no hashed match
* existing in-lookup match gets hashed by another process
* we verify that there's no in-lookup matches and decide
that everything's fine.
Solution: per-directory kinda-sorta seqlock, bumped around the times
we hash something that used to be in-lookup or move (and hash)
something in place of in-lookup. Then the above would turn into
* read the counter
* do dcache lookup
* if no matches found, check for in-lookup matches
* if there had been none of those either, check if the
counter has changed; repeat if it has.
The "kinda-sorta" part is due to the fact that we don't have much spare
space in inode. There is a spare word (shared with i_bdev/i_cdev/i_pipe),
so the counter part is not a problem, but spinlock is a different story.
We could use the parent's ->d_lock, and it would be less painful in
terms of contention, for __d_add() it would be rather inconvenient to
grab; we could do that (using lock_parent()), but...
Fortunately, we can get serialization on the counter itself, and it
might be a good idea in general; we can use cmpxchg() in a loop to
get from even to odd and smp_store_release() from odd to even.
This commit adds the counter and updating logics; the readers will be
added in the next commit.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Dentries are marked as such when a (would-be) parallel lookup is about to pass them
to actual ->lookup(); unmarked when
* __d_add() is about to make it hashed, positive or not.
* __d_move() (from d_splice_alias(), directly or via
__d_unalias()) puts a preexisting dentry in its place
* in caller of ->lookup() if it has escaped all of the
above. Bug (WARN_ON, actually) if it reaches the final dput()
or d_instantiate() while still marked such.
As the result, we are guaranteed that for as long as the flag is
set, dentry will
* remain negative unhashed with positive refcount
* never have its ->d_alias looked at
* never have its ->d_lru looked at
* never have its ->d_parent and ->d_name changed
Right now we have at most one such for any given parent directory.
With parallel lookups that restriction will weaken to
* only exist when parent is locked shared
* at most one with given (parent,name) pair (comparison of
names is according to ->d_compare())
* only exist when there's no hashed dentry with the same
(parent,name)
Transition will take the next several commits; unfortunately, we'll
only be able to switch to rwsem at the end of this series. The
reason for not making it a single patch is to simplify review.
New primitives: d_in_lookup() (a predicate checking if dentry is in
the in-lookup state) and d_lookup_done() (tells the system that
we are done with lookup and if it's still marked as in-lookup, it
should cease to be such).
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
The rest of work.xattr stuff isn't needed for this branch
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/mmind/linux-rockchip into clk-next
Pull rockchip clk updates from Heiko Stuebner:
A spelling fix and a bunch of rk3399 clock fixes.
* tag 'v4.7-rockchip-clk3' of git://git.kernel.org/pub/scm/linux/kernel/git/mmind/linux-rockchip:
clk: rockchip: fix the rk3399 cifout clock
clk: rockchip: drop unnecessary CLK_IGNORE_UNUSED flags from rk3399
clk: rockchip: add some frequencies on the rk3399 PLL table
clk: rockchip: assign more necessary rk3399 clock ids
clk: rockchip: export some necessary rk3399 clock ids
clk: rockchip: rename rga clock-id on rk3399
clk: rockchip: add general gpu soft-reset on rk3399
clk: rockchip: fix the gate bit for i2c4 and i2c8 on rk3399
clk: rockchip: fix of spelling mistake on unsuccessful in pll clock type
|
|
A few generic changes to generalize tunnels in IPv6:
- Export ip6_tnl_change_mtu so that it can be called by ip6_gre
- Add tun_hlen to ip6_tnl structure.
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Create common functions for both IPv4 and IPv6 GRE in transmit. These
are put into gre.h.
Common functions are for:
- GRE checksum calculation. Move gre_checksum to gre.h.
- Building a GRE header. Move GRE build_header and rename
gre_build_header.
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch renames ip6_tnl_xmit2 to ip6_tnl_xmit and exports it. Other
users like GRE will be able to call this. The original ip6_tnl_xmit
function is renamed to ip6_tnl_start_xmit (this is an ndo_start_xmit
function).
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Several of the GRE functions defined in net/ipv4/ip_gre.c are usable
for IPv6 GRE implementation (that is they are protocol agnostic).
These include:
- GRE flag handling functions are moved to gre.h
- GRE build_header is moved to gre.h and renamed gre_build_header
- parse_gre_header is moved to gre_demux.c and renamed gre_parse_header
- iptunnel_pull_header is taken out of gre_parse_header. This is now
done by the caller. The header length is returned from gre_parse_header
in an int* argument.
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Some basic changes to make IPv6 tunnel receive path look more like
IPv4 path:
- Make ip6_tnl_rcv non-static so that GREv6 and others can call it
- Make ip6_tnl_rcv look like ip_tunnel_rcv
- Switch to gro_cells_receive
- Make ip6_tnl_rcv non-static and export it
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Large sendmsg()/write() calls hold the socket lock for the duration of the
call, unless the sk->sk_sndbuf limit is hit. This is bad because incoming
packets are parked in the socket backlog for a long time.
Critical decisions like fast retransmit might be delayed.
Receivers have to maintain a big out-of-order queue with additional cpu
overhead, and also possible stalls in TX once windows are full.
Bidirectional flows are particularly hurt since the backlog can become
quite big if the copy from user space triggers IO (page faults).
Some applications learnt to use sendmsg() (or sendmmsg()) with small
chunks to avoid this issue.
The kernel should know better, right?
Add a generic sk_flush_backlog() helper and use it right
before a new skb is allocated. Typically we put 64KB of payload
per skb (unless MSG_EOR is requested) and checking socket backlog
every 64KB gives good results.
As a matter of fact, tests with TSO/GSO disabled give very nice
results, as we manage to keep a small write queue and smaller
perceived rtt.
Note that sk_flush_backlog() maintains socket ownership,
so it is not equivalent to a {release_sock(sk); lock_sock(sk);} pair;
this preserves the implicit atomicity rules that sendmsg() was
giving to (possibly buggy) applications.
In this simple implementation, I chose to not call tcp_release_cb(),
but we might consider this later.
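For reference, the helper is roughly (sketch):
  /* Process the backlog in place, without ever marking the socket
   * unowned (unlike a release_sock()/lock_sock() pair).
   */
  void __sk_flush_backlog(struct sock *sk)
  {
      spin_lock_bh(&sk->sk_lock.slock);
      __release_sock(sk);
      spin_unlock_bh(&sk->sk_lock.slock);
  }

  static inline void sk_flush_backlog(struct sock *sk)
  {
      if (unlikely(READ_ONCE(sk->sk_backlog.tail)))
          __sk_flush_backlog(sk);
  }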
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Alexei Starovoitov <ast@fb.com>
Cc: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This is a fairly minimal fixup to the horribly bad behavior of hash_64()
with certain input patterns.
In particular, because the multiplicative value used for the 64-bit hash
was intentionally bit-sparse (so that the multiply could be done with
shifts and adds on architectures without hardware multipliers), some
bits did not get spread out very much. In particular, certain fairly
common bit ranges in the input (roughly bits 12-20: commonly with the
most information in them when you hash things like byte offsets in files
or memory that have block factors that mean that the low bits are often
zero) would not necessarily show up much in the result.
There's a bigger patch-series brewing to fix up things more completely,
but this is the fairly minimal fix for the 64-bit hashing problem. It
simply picks a much better constant multiplier, spreading the bits out a
lot better.
NOTE! For 32-bit architectures, the bad old hash_64() remains the same
for now, since 64-bit multiplies are expensive. The bigger hashing
cleanup will replace the 32-bit case with something better.
The new constants were picked by George Spelvin who wrote that bigger
cleanup series. I just picked out the constants and part of the comment
from that series.
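Concretely, the 64-bit-architecture case boils down to a denser multiplier
(a sketch; the constant is the one proposed in that series, and the 32-bit
case is unchanged):
  #define GOLDEN_RATIO_64 0x61C8864680B583EBull

  static inline u64 hash_64(u64 val, unsigned int bits)
  {
      /* Dense multiplier: the middle input bits (~12-20) now spread
       * into the high bits of the product, which we keep.
       */
      return (val * GOLDEN_RATIO_64) >> (64 - bits);
  }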
Cc: stable@vger.kernel.org
Cc: George Spelvin <linux@horizon.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|