Age  Commit message  Author
2016-05-02  parallel lookups: actual switch to rwsem  (Al Viro)

ta-da!

The main issue is the lack of down_write_killable(), so places like readdir.c switched to plain inode_lock(); once killable variants of the rwsem primitives appear, that'll be dealt with. The lockdep side might also need more work.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-05-02  parallel lookups machinery, part 4 (and last)  (Al Viro)

If we *do* run into an in-lookup match, we need to wait for it to cease being in-lookup. Fortunately, we do have unused space in in-lookup dentries - d_lru is never looked at until it stops being in-lookup. So we can stash a pointer to wait_queue_head from the stack frame of the caller of ->lookup().

Some precautions are needed while waiting, but it's not that hard - we do hold a reference to the dentry we are waiting for, so it can't go away. If it's found to be in-lookup the wait_queue_head is still alive and will remain so at least while ->d_lock is held. Moreover, the condition we are waiting for becomes true at the same point where everything on that wq gets woken up, so we can just add ourselves to the queue once.

d_alloc_parallel() gets a pointer to wait_queue_head_t from its caller; lookup_slow() is adjusted, d_add_ci() is taught to use d_alloc_parallel() if the dentry passed to it happens to be an in-lookup one (i.e. if it's been called from the parallel lookup).

That's pretty much it - all that remains is to switch ->i_mutex to rwsem and have lookup_slow() take it shared.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
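[Editorial note] A rough sketch of how a caller along the lines of lookup_slow() is expected to use this once the shared-locking switch lands; names are generic and error handling is elided, so treat it as an illustration rather than the exact patch:

    struct dentry *dentry, *old;
    DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);    /* stashed into the in-lookup dentry */

    dentry = d_alloc_parallel(parent, name, &wq);
    if (IS_ERR(dentry) || !d_in_lookup(dentry))
        return dentry;                      /* error, or someone else's lookup finished it */
    old = dir->i_op->lookup(dir, dentry, flags);
    d_lookup_done(dentry);                  /* leave the in-lookup hash, wake anyone sleeping on wq */
    if (old) {
        dput(dentry);
        dentry = old;
    }
    return dentry;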
2016-05-02  parallel lookups machinery, part 3  (Al Viro)

We will need to be able to check if there is an in-lookup dentry with a matching parent/name. Right now it's impossible, but as soon as we start locking directories shared, such beasts will appear.

Add a secondary hash for locating those. Hash chains go through the same space where d_alias will be once it's not in-lookup anymore. Search is done under the same bitlock we use for modifications - with the primary hash we can rely on d_rehash() into the wrong chain being the worst that could happen, but here the pointers are buggered once it's removed from the chain. On the other hand, the chains are not going to be long and normally we'll end up adding to the chain anyway. That allows us to avoid bothering with ->d_lock when doing the comparisons - everything is stable until removed from the chain.

New helper: d_alloc_parallel(). Right now it allocates, verifies that no hashed and in-lookup matches exist and adds to the in-lookup hash. Returns ERR_PTR() for error, the hashed match (in the unlikely case it's been found) or the new dentry. In-lookup matches trigger BUG() for now; that will change in the next commit when we introduce waiting for the ongoing lookup to finish. Note that in-lookup matches won't be possible until we actually go for shared locking.

lookup_slow() switched to use of d_alloc_parallel().

Again, these commits are separated only for making it easier to review. All this machinery will start doing something useful only when we go for shared locking; it's just that the combination is too large for my taste.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-05-02  parallel lookups machinery, part 2  (Al Viro)

We'll need to verify that there's neither a hashed nor an in-lookup dentry with the desired parent/name before adding to the in-lookup set.

One possible solution would be to hold the parent's ->d_lock through both checks, but while the in-lookup set is relatively small at any time, dcache is not. And holding the parent's ->d_lock through something like __d_lookup_rcu() would suck too badly.

So we leave the parent's ->d_lock alone, which means that we watch out for the following scenario:
	* we verify that there's no hashed match
	* an existing in-lookup match gets hashed by another process
	* we verify that there's no in-lookup matches and decide that everything's fine.

Solution: a per-directory kinda-sorta seqlock, bumped around the times we hash something that used to be in-lookup or move (and hash) something in place of in-lookup. Then the above would turn into
	* read the counter
	* do dcache lookup
	* if no matches found, check for in-lookup matches
	* if there had been none of those either, check if the counter has changed; repeat if it has.

The "kinda-sorta" part is due to the fact that we don't have much spare space in the inode. There is a spare word (shared with i_bdev/i_cdev/i_pipe), so the counter part is not a problem, but a spinlock is a different story.

We could use the parent's ->d_lock, and it would be less painful in terms of contention; for __d_add() it would be rather inconvenient to grab; we could do that (using lock_parent()), but...

Fortunately, we can get serialization on the counter itself, and it might be a good idea in general; we can use cmpxchg() in a loop to get from even to odd and smp_store_release() from odd to even.

This commit adds the counter and the updating logics; the readers will be added in the next commit.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
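[Editorial note] A minimal sketch of the update side described above; the field and helper names are illustrative, not necessarily the ones used in the actual patch:

    static unsigned start_dir_add(struct inode *dir)
    {
        for (;;) {
            unsigned n = dir->i_dir_seq;

            /* even -> odd via cmpxchg(): we now own the update */
            if (!(n & 1) && cmpxchg(&dir->i_dir_seq, n, n + 1) == n)
                return n;
            cpu_relax();
        }
    }

    static void end_dir_add(struct inode *dir, unsigned n)
    {
        /* odd -> even; release ordering so the hashing done inside is visible first */
        smp_store_release(&dir->i_dir_seq, n + 2);
    }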
2016-05-02  beginning of transition to parallel lookups - marking in-lookup dentries  (Al Viro)

Dentries are marked as such when a (would-be) parallel lookup is about to pass them to the actual ->lookup(); unmarked when
	* __d_add() is about to make it hashed, positive or not.
	* __d_move() (from d_splice_alias(), directly or via __d_unalias()) puts a preexisting dentry in its place
	* in the caller of ->lookup() if it has escaped all of the above.
Bug (WARN_ON, actually) if it reaches the final dput() or d_instantiate() while still marked as such.

As a result, we are guaranteed that for as long as the flag is set, the dentry will
	* remain negative unhashed with positive refcount
	* never have its ->d_alias looked at
	* never have its ->d_lru looked at
	* never have its ->d_parent and ->d_name changed

Right now we have at most one such dentry for any given parent directory. With parallel lookups that restriction will weaken to
	* only exist when the parent is locked shared
	* at most one with a given (parent,name) pair (comparison of names is according to ->d_compare())
	* only exist when there's no hashed dentry with the same (parent,name)

The transition will take the next several commits; unfortunately, we'll only be able to switch to rwsem at the end of this series. The reason for not making it a single patch is to simplify review.

New primitives: d_in_lookup() (a predicate checking if the dentry is in the in-lookup state) and d_lookup_done() (tells the system that we are done with the lookup and if it's still marked as in-lookup, it should cease to be such).

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
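[Editorial note] Roughly, the new primitives amount to a flag test plus a slow path for the marked case (a sketch; the flag and slow-path names are assumptions):

    static inline bool d_in_lookup(struct dentry *dentry)
    {
        return dentry->d_flags & DCACHE_PAR_LOOKUP;
    }

    static inline void d_lookup_done(struct dentry *dentry)
    {
        if (unlikely(d_in_lookup(dentry)))
            __d_lookup_done(dentry);    /* clear the flag, wake any waiters */
    }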
2016-05-02  __d_add(): don't drop/regain ->d_lock  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-05-02  lookup_slow(): bugger off on IS_DEADDIR() from the very beginning  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-05-02  nfs: missing wakeup in nfs_unblock_sillyrename()  (Al Viro)

Will be needed as soon as lookups are not serialized by ->i_mutex.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-05-02  make ext2_get_page() and friends work without external serialization  (Al Viro)

Right now ext2_get_page() (and its analogues in a bunch of other filesystems) relies upon the directory being locked - the way it sets and tests the Checked and Error bits would be racy without that.

Switch to a slightly different scheme, _not_ setting Checked in case of failure. That way the logics becomes
	if Checked => OK
	else if Error => fail
	else if !validate => fail
	else => OK
with validation setting Checked or Error on success and failure resp. and returning which one had happened. Equivalent to the current logics, but unlike the current logics not sensitive to the order of set_bit, test_bit getting reordered by CPU, etc.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
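[Editorial note] In C, the new scheme amounts to something like the following (a simplified sketch, not the exact ext2 code; validate() stands for the filesystem's check routine - ext2_check_page() in ext2 - which sets Checked on success and Error on failure):

    page = read_mapping_page(mapping, n, NULL);
    if (IS_ERR(page))
        return page;
    if (PageChecked(page))
        return page;            /* already known good */
    if (PageError(page))
        goto fail;              /* a previous validation failed */
    if (!validate(page))        /* sets Checked or Error, returns which */
        goto fail;
    return page;
fail:
    put_page(page);
    return ERR_PTR(-EIO);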
2016-05-02  ovl_lookup_real(): use lookup_one_len_unlocked()  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-05-02  reconnect_one(): use lookup_one_len_unlocked()  (Al Viro)

... and explain the non-obvious logics in the case when lookup yields a different dentry.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-05-02  reiserfs: open-code reiserfs_mutex_lock_safe() in reiserfs_unpack()  (Al Viro)

... and have it use inode_lock()

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-05-02  orangefs: don't open-code inode_lock/inode_unlock  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-05-02  ocfs2: don't open-code inode_lock/inode_unlock  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-05-02  configfs_detach_prep(): make sure that wait_mutex won't go away  (Al Viro)

Grab a reference to the dentry we'd got the sucker from, and return that dentry via *wait, rather than just returning the address of ->i_mutex.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-05-02  kernfs: use lookup_one_len_unlocked()  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-05-02  security_d_instantiate(): move to the point prior to attaching dentry to inode  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-05-02  Merge getxattr prototype change into work.lookups  (Al Viro)
The rest of work.xattr stuff isn't needed for this branch
2016-05-02  Merge branch 'ipv6-tunnel-cleanups'  (David S. Miller)

Tom Herbert says:

====================
net: Cleanup IPv6 ip tunnels

The IPv6 tunnel code is very different from the IPv4 code. There is a lot of redundancy with the IPv4 code, particularly in the GRE tunneling.

This patch set cleans up the tunnel code to make the IPv6 code look more like the IPv4 code and use common functions between the two stacks where possible. This work should make it easier to maintain and extend the IPv6 ip tunnels.

Items in this patch set:
  - Cleanup IPv6 tunnel receive path (ip6_tnl_rcv). Includes using gro_cells and exporting ip6_tnl_rcv so that ip6_gre can call it
  - Move GRE functions to common header file (tx functions) or gre_demux.c (rx functions like gre_parse_header)
  - Call common GRE functions from IPv6 GRE
  - Create ip6_tnl_xmit (to be like ip_tunnel_xmit)

Tested: Ran super_netperf tests for TCP_RR and TCP_STREAM for:
  - IPv4 over gre, gretap, gre6, gre6tap
  - IPv6 over gre, gretap, gre6, gre6tap
  - ipip
  - ip6ip6
  - ipip/gue
  - IPv6 over gre/gue
  - IPv4 over gre/gue
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02  gre6: Cleanup GREv6 transmit path, call common GRE functions  (Tom Herbert)

Changes in GREv6 transmit path:
  - Call gre_checksum, remove gre6_checksum
  - Rename ip6gre_xmit2 to __gre6_xmit
  - Call gre_build_header utility function
  - Call ip6_tnl_xmit common function
  - Call ip6_tnl_change_mtu, eliminate ip6gre_tunnel_change_mtu

Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02  ipv6: Generic tunnel cleanup  (Tom Herbert)

A few generic changes to generalize tunnels in IPv6:
  - Export ip6_tnl_change_mtu so that it can be called by ip6_gre
  - Add tun_hlen to ip6_tnl structure.

Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02  gre: Create common functions for transmit  (Tom Herbert)

Create common functions for both IPv4 and IPv6 GRE in transmit. These are put into gre.h. Common functions are for:
  - GRE checksum calculation. Move gre_checksum to gre.h.
  - Building a GRE header. Move GRE build_header and rename gre_build_header.

Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02  ipv6: Create ip6_tnl_xmit  (Tom Herbert)

This patch renames ip6_tnl_xmit2 to ip6_tnl_xmit and exports it. Other users like GRE will be able to call this. The original ip6_tnl_xmit function is renamed to ip6_tnl_start_xmit (this is an ndo_start_xmit function).

Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02  gre6: Cleanup GREv6 receive path, call common GRE functions  (Tom Herbert)

- Create gre_rcv function. This calls gre_parse_header and ip6gre_rcv.
- Call ip6_tnl_rcv. Doing this and using gre_parse_header eliminates most of the code in ip6gre_rcv.

Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02  gre: Move utility functions to common headers  (Tom Herbert)

Several of the GRE functions defined in net/ipv4/ip_gre.c are usable for the IPv6 GRE implementation (that is, they are protocol agnostic). These include:
  - GRE flag handling functions are moved to gre.h
  - GRE build_header is moved to gre.h and renamed gre_build_header
  - parse_gre_header is moved to gre_demux.c and renamed gre_parse_header
  - iptunnel_pull_header is taken out of gre_parse_header. This is now done by the caller. The header length is returned from gre_parse_header in an int* argument.

Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02  ipv6: Cleanup IPv6 tunnel receive path  (Tom Herbert)

Some basic changes to make the IPv6 tunnel receive path look more like the IPv4 path:
  - Make ip6_tnl_rcv non-static so that GREv6 and others can call it
  - Make ip6_tnl_rcv look like ip_tunnel_rcv
  - Switch to gro_cells_receive
  - Make ip6_tnl_rcv non-static and export it

Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02  Merge branch 'tcp-preempt'  (David S. Miller)

Eric Dumazet says:

====================
net: make TCP preemptible

Most of the TCP stack assumed it was running from a BH handler. This is great for most things, as TCP behavior is very sensitive to scheduling artifacts. However, the prequeue and backlog processing are problematic, as they need to be flushed with BH being blocked.

To cope with modern needs, TCP sockets have big sk_rcvbuf values, in the order of 16 MB, and soon 32 MB. This means that the backlog can hold thousands of packets, and things like TCP coalescing or collapsing on this amount of packets can lead to insane latency spikes, since BH are blocked for too long.

It is time to make UDP/TCP stacks preemptible. Note that the fast path still runs from the BH handler.

v2: Added "tcp: make tcp_sendmsg() aware of socket backlog" to reduce latency problems of large sends.

v3: Fixed a typo in tcp_cdg.c
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02  tcp: make tcp_sendmsg() aware of socket backlog  (Eric Dumazet)

Large sendmsg()/write() hold the socket lock for the duration of the call, unless the sk->sk_sndbuf limit is hit. This is bad because incoming packets are parked into the socket backlog for a long time. Critical decisions like fast retransmit might be delayed. Receivers have to maintain a big out-of-order queue with additional cpu overhead, and also possible stalls in TX once windows are full.

Bidirectional flows are particularly hurt since the backlog can become quite big if the copy from user space triggers IO (page faults).

Some applications learnt to use sendmsg() (or sendmmsg()) with small chunks to avoid this issue. Kernel should know better, right?

Add a generic sk_flush_backlog() helper and use it right before a new skb is allocated. Typically we put 64KB of payload per skb (unless MSG_EOR is requested) and checking the socket backlog every 64KB gives good results.

As a matter of fact, tests with TSO/GSO disabled give very nice results, as we manage to keep a small write queue and smaller perceived rtt.

Note that sk_flush_backlog() maintains socket ownership, so it is not equivalent to a {release_sock(sk); lock_sock(sk);}, to ensure implicit atomicity rules that sendmsg() was giving to (possibly buggy) applications.

In this simple implementation, I chose to not call tcp_release_cb(), but we might consider this later.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Alexei Starovoitov <ast@fb.com>
Cc: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
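[Editorial note] A rough sketch of the intended pattern in the sendmsg() fast path (simplified, not the exact tcp_sendmsg() code):

    while (msg_data_left(msg)) {
        if (unlikely(sk->sk_backlog.tail))
            sk_flush_backlog(sk);   /* drain the backlog while still owning the socket */

        skb = sk_stream_alloc_skb(sk, 0, sk->sk_allocation, true);
        if (!skb)
            break;                  /* the real code waits for memory here */
        /* ... copy up to ~64KB of payload into skb and push it out ... */
    }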
2016-05-02  net: do not block BH while processing socket backlog  (Eric Dumazet)

Socket backlog processing is a major latency source.

With current TCP socket sk_rcvbuf limits, I have sampled __release_sock() holding the cpu for more than 5 ms, and packets being dropped by the NIC once the ring buffer is filled.

All users are now ready to be called from process context, so we can unblock BH and let interrupts be serviced faster.

cond_resched_softirq() could be removed, as it has no more users.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02  sctp: prepare for socket backlog behavior change  (Eric Dumazet)

sctp_inq_push() will soon be called without BH being blocked when generic socket code flushes the socket backlog.

It is very possible SCTP can be converted to not rely on BH, but this needs to be done by SCTP experts.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02  udp: prepare for non BH masking at backlog processing  (Eric Dumazet)

UDP uses the generic socket backlog code, and this will soon be changed to not disable BH when the protocol is called back.

We need to use appropriate SNMP accessors.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02  dccp: do not assume DCCP code is non preemptible  (Eric Dumazet)

DCCP uses the generic backlog code, and this will soon be changed to not disable BH when the protocol is called back.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02  tcp: do not block bh during prequeue processing  (Eric Dumazet)

AFAIK, nothing in the current TCP stack absolutely wants BH being disabled once the socket is owned by a thread running in process context.

As mentioned in my prior patch ("tcp: give prequeue mode some care"), processing a batch of packets might take time; better not block BH at all.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02  tcp: do not assume TCP code is non preemptible  (Eric Dumazet)

We want to make the TCP stack preemptible, as draining the prequeue and backlog queues can take a lot of time.

Many SNMP updates were assuming that BH (and preemption) was disabled. We need to convert some __NET_INC_STATS() calls to NET_INC_STATS() and some __TCP_INC_STATS() to TCP_INC_STATS().

Before using this_cpu_ptr(net->ipv4.tcp_sk) in tcp_v4_send_reset() and tcp_v4_send_ack(), we add an explicit preempt disabled section.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
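[Editorial note] The explicit preempt-disabled section mentioned above looks roughly like this (a sketch; ctl_sk and the surrounding code are simplified, not the exact tcp_v4_send_reset() body):

    struct sock *ctl_sk;

    preempt_disable();                          /* pin the CPU for this_cpu_ptr() */
    ctl_sk = *this_cpu_ptr(net->ipv4.tcp_sk);
    /* ... build and send the RST/ACK via ctl_sk ... */
    __TCP_INC_STATS(net, TCP_MIB_OUTSEGS);      /* the __ variant stays safe inside this section */
    __TCP_INC_STATS(net, TCP_MIB_OUTRSTS);
    preempt_enable();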
2016-05-02  wext: remove a/b/g/n from SIOCGIWNAME  (Johannes Berg)

Since a/b/g/n no longer exist as spec amendments and VHT (ex 802.11ac) wasn't handled at all, it's better to just remove the amendment strings to avoid confusion.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Reviewed-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2016-05-02  Merge branch 'xgene-channel-number'  (David S. Miller)

Iyappan Subramanian says:

====================
drivers: net: xgene: fix: Get channel number from device binding

This patch set adds a 'channel' property to get the ethernet-to-CPU channel number, thus decoupling the Linux driver from static resource selection.

v2: Address review comments from v1
  - removed irq reference from Linux driver
  - added 'channel' property to get ethernet to CPU channel number

v1:
  - Initial version
====================

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02  dtb: xgene: Add channel property  (Iyappan Subramanian)

Added 'channel' property, describing the ethernet-to-CPU channel number.

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02  Documentation: dtb: xgene: Add channel property  (Iyappan Subramanian)

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-02  drivers: net: xgene: Get channel number from device binding  (Iyappan Subramanian)

This patch gets the ethernet-to-CPU channel (prefetch buffer number) from the newly added 'channel' property, thus decoupling the Linux driver from resource management.

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
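[Editorial note] A sketch of what reading the new property could look like; the 'channel' property name is from the description, while the pdata->ring_num field and the error handling are hypothetical:

    u32 channel;

    if (device_property_read_u32(dev, "channel", &channel))
        return -ENODEV;             /* the binding is expected to provide 'channel' */
    pdata->ring_num = channel;      /* ethernet-to-CPU channel / prefetch buffer number (hypothetical field) */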
2016-05-02  Minimal fix-up of bad hashing behavior of hash_64()  (Linus Torvalds)

This is a fairly minimal fixup to the horribly bad behavior of hash_64() with certain input patterns.

In particular, because the multiplicative value used for the 64-bit hash was intentionally bit-sparse (so that the multiply could be done with shifts and adds on architectures without hardware multipliers), some bits did not get spread out very much. In particular, certain fairly common bit ranges in the input (roughly bits 12-20: commonly with the most information in them when you hash things like byte offsets in files or memory that have block factors that mean that the low bits are often zero) would not necessarily show up much in the result.

There's a bigger patch-series brewing to fix up things more completely, but this is the fairly minimal fix for the 64-bit hashing problem. It simply picks a much better constant multiplier, spreading the bits out a lot better.

NOTE! For 32-bit architectures, the bad old hash_64() remains the same for now, since 64-bit multiplies are expensive. The bigger hashing cleanup will replace the 32-bit case with something better.

The new constants were picked by George Spelvin who wrote that bigger cleanup series. I just picked out the constants and part of the comment from that series.

Cc: stable@vger.kernel.org
Cc: George Spelvin <linux@horizon.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
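[Editorial note] For reference, the general shape of the 64-bit case after the fix (a sketch; the constant is the better multiplier from George Spelvin's series, referred to here as GOLDEN_RATIO_64):

    #define GOLDEN_RATIO_64 0x61C8864680B583EBull  /* dense, well-mixed odd multiplier */

    static inline u64 hash_64(u64 val, unsigned int bits)
    {
        /* 64-bit architectures only: the multiply spreads input bits into
         * the high bits, and we keep the top 'bits' of the result */
        return (val * GOLDEN_RATIO_64) >> (64 - bits);
    }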
2016-05-02  Merge tag 'md/4.6-rc6-fix' of git://git.kernel.org/pub/scm/linux/kernel/git/shli/md  (Linus Torvalds)

Pull MD fixes from Shaohua Li:
 "This update includes several trivial fixes. The only important one is to fix MD bio merge, which has a big performance impact"

* tag 'md/4.6-rc6-fix' of git://git.kernel.org/pub/scm/linux/kernel/git/shli/md:
  raid5: delete unnecessary warnning
  MD: make bio mergeable
  md/raid0: remove empty line printk from dump_zones
  md/raid0: fix uninitialized variable bug
2016-05-02  PCI: Do not treat EPROBE_DEFER as device attach failure  (Lukas Wunner)

Linux 4.5 introduced a behavioral change in device probing during the suspend process with commit 013c074f8642 ("PM / sleep: prohibit devices probing during suspend/hibernation"): It defers device probing during the entire suspend process, starting from the prepare phase and ending with the complete phase. A rule existed before that "we rely on subsystems not to do any probing once a device is suspended", but it is enforced only now (Alan Stern, https://lkml.org/lkml/2015/9/15/908).

This resulted in a WARN splat if a PCI device (e.g., Thunderbolt) is plugged in while the system is asleep: Upon waking up, pciehp_resume() discovers new devices in the resume phase and immediately tries to bind them to a driver. Since probing is now deferred, device_attach() returns -EPROBE_DEFER, which provoked a WARN in pci_bus_add_device().

Linux 4.6-rc1 aggravates the situation with commit ab1a187bba5c ("PCI: Check device_attach() return value always"): If device_attach() returns a negative value, pci_bus_add_device() now removes the sysfs and procfs entries for the device and pci_bus_add_devices() subsequently locks up with a BUG. Even with the BUG fixed we're still in trouble because the device remains on the deferred probing list even though its sysfs and procfs entries are gone and its children won't be added.

Fix by not interpreting -EPROBE_DEFER as failure. The device will be probed eventually (through device_unblock_probing() in dpm_complete()) and there is proper locking in place to avoid races (e.g., if devices are unplugged again and thus deleted from the system before deferred probing happens; I have tested this). Also, those functions which dereference dev->driver (e.g. pci_pm_*()) do contain proper NULL pointer checks. So it seems safe to ignore -EPROBE_DEFER.

Fixes: ab1a187bba5c ("PCI: Check device_attach() return value always")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Grygorii Strashko <grygorii.strashko@ti.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
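[Editorial note] The fix boils down to not treating -EPROBE_DEFER as an error in pci_bus_add_device() (a rough, simplified sketch):

    retval = device_attach(&dev->dev);
    if (retval < 0 && retval != -EPROBE_DEFER) {
        dev_warn(&dev->dev, "device attach failed (%d)\n", retval);
        pci_proc_detach_device(dev);
        pci_remove_sysfs_dev_files(dev);
        return;
    }

    dev->is_added = 1;  /* deferred probing is not a failure */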
2016-05-02  PCI: Fix BUG on device attach failure  (Lukas Wunner)

Previously, when pci_bus_add_device() called device_attach() and it returned a negative value, we emitted a WARN but carried on.

Commit ab1a187bba5c ("PCI: Check device_attach() return value always"), introduced in Linux 4.6-rc1, changed this to unwind all steps preceding device_attach() and to not set dev->is_added = 1. The latter leads to a BUG if pci_bus_add_device() was called from pci_bus_add_devices().

Fix by not recursing to a child bus if device_attach() failed for the bridge leading to it.

This can be triggered by plugging in a PCI device (e.g. Thunderbolt) while the system is asleep. The system locks up when woken because device_attach() returns -EPROBE_DEFER.

Fixes: ab1a187bba5c ("PCI: Check device_attach() return value always")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
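[Editorial note] A rough sketch of the guard in pci_bus_add_devices() (simplified; only the recursion step is shown):

    list_for_each_entry(dev, &bus->devices, bus_list) {
        /* skip if device attach failed for this bridge */
        if (!dev->is_added)
            continue;
        child = dev->subordinate;
        if (child)
            pci_bus_add_devices(child);
    }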
2016-05-02  EDAC, sb_edac: Use cpu family/model in driver detection  (Tony Luck)

Instead of picking a random PCI ID from the dozen or so we need to access, just use x86_match_cpu() to pick based on CPU model number.

The choosing of PCI devices has been problematic in the past; see 11249e739929 ("sb_edac: Fix detection on SNB machines"), which fixed problems introduced by d0585cd815fa ("sb_edac: Claim a different PCI device").

This is especially ugly if future hardware might not even have EDAC-relevant registers in PCI config space and we would still be required to choose some "random" PCI devices to scan for just so our driver loads.

Is this cleaner/clearer? It deletes much more code than it adds.

Only tested on Broadwell. The driver loads/unloads and loads again. Still decodes errors too.

Signed-off-by: Tony Luck <tony.luck@intel.com>
Suggested-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
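[Editorial note] A sketch of the detection-by-CPU-model approach; the table below is illustrative, not the exact IDs or driver_data used by sb_edac:

    static const struct x86_cpu_id sbridge_cpuids[] = {
        { X86_VENDOR_INTEL, 6, 0x2d },  /* Sandy Bridge-EP */
        { X86_VENDOR_INTEL, 6, 0x3e },  /* Ivy Bridge-EP */
        { X86_VENDOR_INTEL, 6, 0x3f },  /* Haswell-EP */
        { X86_VENDOR_INTEL, 6, 0x4f },  /* Broadwell-EP/DE */
        { }
    };
    MODULE_DEVICE_TABLE(x86cpu, sbridge_cpuids);

    static int __init sbridge_init(void)
    {
        const struct x86_cpu_id *id = x86_match_cpu(sbridge_cpuids);

        if (!id)
            return -ENODEV;     /* not a CPU this driver knows about */
        /* ... register the EDAC driver for the matched model ... */
        return 0;
    }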
2016-05-02  Bluetooth: hci_intel: Fix null gpio desc pointer dereference  (Loic Poulain)

gpiod_get_optional can return either an ERR_PTR or a NULL pointer. The NULL case is not tested and is then dereferenced later in desc_to_gpio. Fix this by using the non-optional version, which returns an ERR_PTR in any error case (this is not an optional gpio). Use the same non-optional version for the host-wake gpio.

Fixes: 765ea3abd116 ("Bluetooth: hci_intel: Retrieve host-wake IRQ")
Signed-off-by: Loic Poulain <loic.poulain@intel.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
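[Editorial note] The change is roughly the following (a sketch; the intel->reset field and the "reset" con_id are assumptions based on the driver described above):

    /* non-optional variant: returns ERR_PTR() on any failure, never NULL */
    intel->reset = devm_gpiod_get(&pdev->dev, "reset", GPIOD_OUT_LOW);
    if (IS_ERR(intel->reset)) {
        dev_err(&pdev->dev, "Unable to retrieve gpio\n");
        return PTR_ERR(intel->reset);
    }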
2016-05-02  btmrvl: add platform specific wakeup interrupt support  (Xinming Hu)

On some arm-based platforms, we need to configure platform specific parameters through a device tree node and also define our node as a child node of the parent SDIO host controller. This patch parses these parameters from the device tree. They include calibration data to be downloaded to firmware, the wakeup pin configured in firmware, and the SoC-specific wake-up GPIO, which will be set up as the wakeup interrupt pin.

Signed-off-by: Xinming Hu <huxm@marvell.com>
Signed-off-by: Amitkumar Karwar <akarwar@marvell.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2016-05-02  dt: bindings: add MARVELL's bt-sd8xxx wireless device  (Xinming Hu)

Add device tree binding documentation for MARVELL's bluetooth sdio (sd8897 and sd8997) chips.

Signed-off-by: Xinming Hu <huxm@marvell.com>
Signed-off-by: Amitkumar Karwar <akarwar@marvell.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2016-05-02  Merge branch 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs  (Linus Torvalds)

Pull UDF fix from Jan Kara:
 "A fix of a regression in UDF that got introduced in 4.6-rc1 by one of the charset encoding fixes"

* 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs:
  udf: Fix conversion of 'dstring' fields to UTF8
2016-05-02  Merge tag 'gpio-v4.6-4' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio  (Linus Torvalds)

Pull GPIO fixes from Linus Walleij:
 "Here are some late but important fixes for the v4.6 kernel series. ACPI and RCAR, so two driver fixes (PM related) and a self-evident string lookup fix for ACPI GPIOs:

  - A serious ACPI fix targeted for stable: lookup strings were being free'd.

  - Revert two patches from the RCAR driver"

* tag 'gpio-v4.6-4' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio:
  gpiolib-acpi: Duplicate con_id string when adding it to the crs lookup list
  Revert "gpio: rcar: Fine-grained Runtime PM support"
  Revert "gpio: rcar: Add Runtime PM handling for interrupts"
2016-05-02  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (Linus Torvalds)

Pull networking fixes from David Miller:

 1) MODULE_FIRMWARE firmware string not correct for iwlwifi 8000 chips, from Sara Sharon.

 2) Fix SKB size checks in batman-adv stack on receive, from Sven Eckelmann.

 3) Leak fix on mac80211 interface add error paths, from Johannes Berg.

 4) Cannot invoke napi_disable() with BH disabled in myri10ge driver, fix from Stanislaw Gruszka.

 5) Fix sign extension problem when computing feature masks in net_gso_ok(), from Marcelo Ricardo Leitner.

 6) lan78xx driver doesn't count packets and packet lengths in its statistics properly, fix from Woojung Huh.

 7) Fix the buffer allocation sizes in pegasus USB driver, from Petko Manolov.

 8) Fix refcount overflows in bpf, from Alexei Starovoitov.

 9) Unified dst cache handling introduced a preempt warning in ip_tunnel, fix by resetting rather than setting the cached route. From Paolo Abeni.

10) Listener hash collision test fix in soreuseport, from Craig Gallak

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (47 commits)
  gre: do not pull header in ICMP error processing
  net: Implement net_dbg_ratelimited() for CONFIG_DYNAMIC_DEBUG case
  tipc: only process unicast on intended node
  cxgb3: fix out of bounds read
  net/smscx5xx: use the device tree for mac address
  soreuseport: Fix TCP listener hash collision
  net: l2tp: fix reversed udp6 checksum flags
  ip_tunnel: fix preempt warning in ip tunnel creation/updating
  samples/bpf: fix trace_output example
  bpf: fix check_map_func_compatibility logic
  bpf: fix refcnt overflow
  drivers: net: cpsw: use of_phy_connect() in fixed-link case
  dt: cpsw: phy-handle, phy_id, and fixed-link are mutually exclusive
  drivers: net: cpsw: don't ignore phy-mode if phy-handle is used
  drivers: net: cpsw: fix segfault in case of bad phy-handle
  drivers: net: cpsw: fix parsing of phy-handle DT property in dual_emac config
  MAINTAINERS: net: Change maintainer for GRETH 10/100/1G Ethernet MAC device driver
  gre: reject GUE and FOU in collect metadata mode
  pegasus: fixes reported packet length
  pegasus: fixes URB buffer allocation size;
  ...