Age | Commit message | Author
2016-05-04 | Merge branch 'mlx5-sriov-updates' | David S. Miller
Saeed Mahameed says:

====================
Mellanox 100G ethernet SRIOV Upgrades

This series introduces new features and upgrades for mlx5 ethernet SRIOV, while the first patch provides a bug fix for a compilation issue introduced by the previous aRFS series when CONFIG_RFS_ACCEL=y and CONFIG_MLX5_CORE_EN=n.

Changes from V0:
- 1st patch: Don't add a new Kconfig flag. Instead, compile out the en_arfs.c contents when CONFIG_RFS_ACCEL=n.

SRIOV upgrades:
- Use synchronize_irq instead of the vport events spin_lock
- Fix memory leak in error flow
- Full VST support
- Spoofcheck support
- Trusted VF promiscuous and allmulti support

VST and spoofcheck in detail:
- Add low-level firmware command support for creating ACL (Access Control List) flow tables. ACLs are regular flow tables with the only exception that they are bound to a specific e-Switch vport (VF), and they can be one of two types:
  > egress ACL: filters traffic going from e-Switch to VF.
  > ingress ACL: filters traffic going from VF to e-Switch.
- Ingress/egress ACLs (per vport) for VF VST mode filtering.
- Ingress/egress ACLs (per vport) for VF spoofcheck filtering.
- Ingress/egress ACL (per vport) configuration:
  > Created only when at least one of (VST, spoofcheck) is configured.
  > if (!spoofchk && !vst) allow all traffic, i.e. no ACLs.
  > if (spoofchk && vst) allow only untagged traffic with smac=original mac sent from the VF.
  > if (spoofchk && !vst) allow only traffic with smac=original mac sent from the VF.
  > if (!spoofchk && vst) allow only untagged traffic.

Trusted VF promiscuous and allmulti support in detail:
- Added two flow groups for allmulti and promisc VFs to the e-Switch FDB table:
  > Allmulti group: one rule that forwards any mcast traffic coming from either uplink or VFs/PF vports.
  > Promisc group: one rule that forwards all unmatched traffic coming from uplink.
- Add vport context change event handling for promisc and allmulti. If the VF is trusted, respect the request and:
  > if allmulti request: add the vport to the allmulti group and to all other L2 mcast addresses in the FDB table.
  > if promisc request: add the vport to the promisc group.
  > Note: a promisc VF can only see traffic that was not explicitly matched to or requested by any other VF.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-04 | net/mlx5: E-Switch, Implement trust vf ndo | Mohamad Haj Yahia
- Add support for configuring the trusted VF attribute through trust_vf_ndo.
- Upon a VF trust setting change, update the vport context to refresh allmulti/promisc or any trusted-VF attributes that we didn't trust the VF for before.
- Take the eswitch state lock on vport events in order to synchronise the vport context updates; this prevents contention with vport trust setting changes, which trigger a vport mac list update.

Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-04 | net/mlx5: E-Switch, Implement promiscuous rx modes vf request handling | Mohamad Haj Yahia
Add promisc_change as a trigger to the vport context change event. Add set vport promisc/allmulti functions to add the vport to the promiscuous flow table rules. Upon a promisc/allmulti rx mode VF request, add the vport to the relevant promiscuous group (Allmulti/Promisc group) so the relevant traffic will be forwarded to it. Upon an allmulti VF request, add the vport to each existing multicast FDB rule. Upon adding/removing an mcast address from a vport, update all other allmulti vports. Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-04 | net/mlx5: E-Switch, Add promiscuous and allmulti FDB flowtable groups | Mohamad Haj Yahia
Add promiscuous and allmulti steering groups in the FDB table. Besides the full match L2 steering rules group, we added two more groups to catch the "miss" rules traffic:
* Allmulti group: one rule that forwards any mcast traffic coming from either uplink or VFs/PF vports.
* Promisc group: one rule that forwards all unmatched traffic coming from uplink.
Needed for downstream privileged VF promisc and allmulti support.

Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-04 | net/mlx5: E-Switch, Use vport event handler for vport cleanup | Mohamad Haj Yahia
Remove the usage of the explicit cleanup function and use the existing vport change handler. Calling the vport change handler while the vport is disabled will clean up the vport resources. Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-04 | net/mlx5: E-Switch, Enable/disable ACL tables on demand | Mohamad Haj Yahia
Enable ingress/egress ACL tables only when we need to configure ACL rules. Disable ingress/egress ACL tables once all ACL rules are removed. All VF outgoing/incoming traffic needs to go through the ingress/egress ACL tables. Adding/removing these tables on demand saves unnecessary hops in the flow steering when the ACL tables are empty. Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-04 | net/mlx5: E-Switch, Vport ingress/egress ACLs rules for spoofchk | Mohamad Haj Yahia
Configure ingress and egress vport ACL rules according to the spoofchk admin parameters.

Ingress ACL flow table rules:
if (!spoofchk && !vst) allow all traffic.
else:
1) one of the following rules:
   * if (spoofchk && vst) allow only untagged traffic with smac=original mac sent from the VF.
   * if (spoofchk && !vst) allow only traffic with smac=original mac sent from the VF.
   * if (!spoofchk && vst) allow only untagged traffic.
2) drop all traffic that didn't hit #1.

Add support for the set vf spoofchk ndo.

Add non-zero mac validation in case of spoofchk to the set mac ndo: when setting a new mac we need to validate that the new mac is not zero while spoofchk is on, because that is an illegal combination.

Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
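The allow/drop selection above boils down to a small truth table. A minimal, self-contained sketch of that selection (names are illustrative, not the mlx5 driver's own code):

    #include <stdbool.h>

    enum vport_acl_rule {
            ACL_ALLOW_ALL,          /* !spoofchk && !vst: no ACL rules installed */
            ACL_UNTAGGED_AND_SMAC,  /* spoofchk && vst */
            ACL_SMAC_ONLY,          /* spoofchk && !vst */
            ACL_UNTAGGED_ONLY,      /* !spoofchk && vst */
    };

    /* Pick the single "allow" rule for a vport's ingress ACL; traffic that
     * does not hit the allow rule falls through to a drop rule. */
    static enum vport_acl_rule vport_ingress_rule(bool spoofchk, bool vst)
    {
            if (spoofchk && vst)
                    return ACL_UNTAGGED_AND_SMAC;
            if (spoofchk)
                    return ACL_SMAC_ONLY;
            if (vst)
                    return ACL_UNTAGGED_ONLY;
            return ACL_ALLOW_ALL;
    }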
2016-05-04 | net/mlx5: E-Switch, Vport ingress/egress ACLs rules for VST mode | Mohamad Haj Yahia
Configure ingress and egress vport ACL rules according to vlan and qos admin parameters.

Ingress ACL flow table rules:
1) drop any tagged packet sent from the VF
2) allow other traffic (default behavior)

Egress ACL flow table rules:
1) allow only tagged traffic with vlan_tag=vst_vid.
2) drop other traffic.

Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-04 | net/mlx5: E-Switch, Introduce VST vport ingress/egress ACLs | Mohamad Haj Yahia
Create egress/ingress ACLs per VF vport at vport enable.

Ingress ACL:
- one flow group to drop all tagged traffic in VST mode.

Egress ACL:
- one flow group that allows only untagged traffic with an smac that equals the original mac (anti-spoofing).
- one flow group that allows only untagged traffic.
- one flow group that allows only an smac that equals the original mac (anti-spoofing).
(note: only one of the above groups has an active rule)
- a star rule will be used to drop all other traffic.

By default no rules are generated, unless VST is explicitly requested.

Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-04 | net/mlx5: E-Switch, Fix error flow memory leak | Mohamad Haj Yahia
Fix a memory leak in case the query nic vport command fails. Fixes: 81848731ff40 ('net/mlx5: E-Switch, Add SR-IOV (FDB) support') Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-04 | net/mlx5: E-Switch, Replace vport spin lock with synchronize_irq() | Mohamad Haj Yahia
The vport spin lock can be replaced with synchronize_irq() in the right place; this removes the need for locking inside irq context. Locking in esw_enable_vport is not required since vport events are yet to be enabled, and at esw_disable_vport it is sufficient to call synchronize_irq() to guarantee no further vport event handlers will be scheduled. Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
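A minimal sketch of the pattern described above (struct and function names are illustrative; only synchronize_irq() is the real kernel API):

    #include <linux/interrupt.h>

    struct example_vport {
            bool enabled;
            unsigned int irq;
    };

    /* Quiesce the IRQ once on the disable path so the event handler needs
     * no lock of its own. */
    static void example_disable_vport(struct example_vport *vport)
    {
            vport->enabled = false;          /* no new events will be armed */
            synchronize_irq(vport->irq);     /* wait for any in-flight handler */
            /* from here on, vport state can be torn down without locking */
    }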
2016-05-04 | net/mlx5: Flow steering, Add vport ACL support | Mohamad Haj Yahia
Update the relevant flow steering device structs and commands to support vport. Update the flow steering core API to receive a vport number. Add ingress and egress ACL flow table namespaces.

Add ACL flow table support:
* An ACL (Access Control List) flow table is a table that contains only allow/drop steering rules.
* We have two types of ACL flow tables - ingress and egress.
* ACLs handle traffic sent from/to the E-Switch FDB table; ingress refers to traffic sent from the vport to the E-Switch and egress refers to traffic sent from the E-Switch to the vport.
* Ingress ACL flow table allow/drop rules are checked against traffic sent from the VF.
* Egress ACL flow table allow/drop rules are checked against traffic sent to the VF.

Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-04 | net/mlx5e: Fix aRFS compilation dependency | Maor Gottlieb
en_arfs.o should be compiled only if both CONFIG_MLX5_CORE_EN and CONFIG_RFS_ACCEL are enabled. en_arfs calls rps_may_expire_flow, which is compiled only if CONFIG_RFS_ACCEL is defined. Move the en_arfs.o compilation dependency under CONFIG_MLX5_CORE_EN and wrap the en_arfs.c content in an ifdef on CONFIG_RFS_ACCEL. Fixes: 1cabe6b0965e ('net/mlx5e: Create aRFS flow tables') Signed-off-by: Maor Gottlieb <maorg@mellanox.com> Reported-by: Alexei Starovoitov <alexei.starovoitov@gmail.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
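A simplified illustration of the ifdef wrapping described above (not the actual en_arfs.c contents):

    #ifdef CONFIG_RFS_ACCEL

    /* aRFS implementation, including the calls to rps_may_expire_flow(),
     * lives here; with CONFIG_RFS_ACCEL=n the file compiles to an empty
     * object instead of failing to build. */

    #endif /* CONFIG_RFS_ACCEL */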
2016-05-04 | ecryptfs: fix handling of directory opening | Al Viro
First of all, trying to open them r/w is idiocy; it's guaranteed to fail. Moreover, assigning ->f_pos and assuming that everything will work is blatantly broken - try that with e.g. tmpfs as underlying layer and watch the fireworks. There may be a non-trivial amount of state associated with current IO position, well beyond the numeric offset. Using the single struct file associated with underlying inode is really not a good idea; we ought to open one for each ecryptfs directory struct file. Additionally, file_operations both for directories and non-directories are full of pointless methods; non-directories should *not* have ->iterate(), directories should not have ->flush(), ->fasync() and ->splice_read(). Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-05-04 | drivers: net: xgene: Fix error handling | Matthias Brugger
When probe bails out with an error, we try to unregister the netdev before we have even registered it. Fix the goto statements for that. Signed-off-by: Matthias Brugger <mbrugger@suse.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-04 | Merge tag 'for-linus-4.6-rc6-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip | Linus Torvalds
Pull xen regression fixes from David Vrabel:
- Fix two regressions causing crashes in 32-bit PV guests
- Fix a regression in the evtchn driver

* tag 'for-linus-4.6-rc6-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
  xen/evtchn: fix ring resize when binding new events
  xen/balloon: Fix crash when ballooning on x86 32 bit PAE
  xen: Fix page <-> pfn conversion on 32 bit systems
2016-05-04 | iwlwifi: mvm: don't override the rate with the AMSDU len | Emmanuel Grumbach
The TSO code creates A-MSDUs from a single large send. Each A-MSDU is an skb, and skb->len doesn't include the number of bytes which need to be added for the headers being added (subframe header, TCP header, IP header, SNAP, padding). To be able to set the right value in the Tx command, we put the number of bytes added by those headers in driver_data in iwl_mvm_tx_tso and use this value in iwl_mvm_set_tx_cmd. The problem with setting this value in driver_data is that it overwrites the ieee80211_tx_info. The bug manifested itself when we sent P2P related frames in CCK, since the rate in ieee80211_tx_info is zeroed. This of course is a violation of the P2P specification. To fix this, copy the original ieee80211_tx_info to the stack and pass it to the functions which need it. Assign the number of bytes added by the headers to the driver_data inside the skb itself. Fixes: a6d5e32f247c ("iwlwifi: mvm: send large SKBs to the transport") Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
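A simplified sketch of the approach (not the exact iwlwifi code; the helper name and header-length handling are illustrative):

    #include <linux/string.h>
    #include <net/mac80211.h>

    static void example_build_tx_cmd(struct sk_buff *skb, unsigned int hdr_bytes)
    {
            /* take a stack copy before driver_data is reused */
            struct ieee80211_tx_info info = *IEEE80211_SKB_CB(skb);

            /* rate, flags, etc. are read from the copy ... */
            (void)info;

            /* ... so driver_data in the skb can safely carry the number of
             * bytes added for the subframe/TCP/IP/SNAP headers and padding */
            memcpy(IEEE80211_SKB_CB(skb)->driver_data, &hdr_bytes,
                   sizeof(hdr_bytes));
    }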
2016-05-04 | Merge branch 'cxgb4-mbox' | David S. Miller
Hariprasad Shenai says:

====================
cxgb4: mbox enhancements for cxgb4

This patch series checks for firmware errors while we are waiting for a mbox response in a loop, and breaks out. When a negative timeout is passed to the mailbox code, don't sleep; a negative timeout is passed only from interrupt context.

This patch series has been created against the net-next tree and includes patches on the cxgb4 driver. We have included all the maintainers of the respective drivers. Kindly review the change and let us know in case of any review comments.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-04 | cxgb4: Check for firmware errors in the mailbox command loop | Hariprasad Shenai
Check for firmware errors in the mailbox command loop and report them differently rather than simply timing out when the firmware goes belly up. Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-04 | cxgb4: Don't sleep when mbox cmd is issued from interrupt context | Hariprasad Shenai
When the link goes down, the DCB priority for the Tx queues needs to be unset from the interrupt handler. We issue a mbox command to unset the Tx queue priority with a negative timeout. In t4_wr_mbox_meat_timeout(), do not sleep when a negative timeout is passed, since it is called from interrupt context. Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
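A sketch of the delay choice this implies (illustrative only, not the t4_wr_mbox_meat_timeout() body):

    #include <linux/delay.h>

    /* A negative timeout means the caller is in interrupt context, so the
     * mailbox poll must busy-wait instead of sleeping. */
    static void example_mbox_delay(int timeout_ms, unsigned int poll_ms)
    {
            if (timeout_ms < 0)
                    mdelay(poll_ms);        /* atomic context: cannot sleep */
            else
                    msleep(poll_ms);        /* process context: sleeping is fine */
    }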
2016-05-04 | drivers: net: emac: add Atheros AR8035 phy initialization code | Christian Lamparter
This patch adds the phy initialization code for the Qualcomm Atheros AR8035 phy. This configuration is found in the Cisco Meraki MR24. Signed-off-by: Christian Lamparter <chunkeey@googlemail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-04 | Merge branch 'tunnel-features-and-gso-partial' | David S. Miller
Alexander Duyck says:

====================
Fix Tunnel features and enable GSO partial for several drivers

This patch series is meant to allow us to get the best performance possible for Mellanox ConnectX-3/4 and Broadcom NetXtreme-C/E adapters in terms of VXLAN and GRE tunnels.

The first 3 patches address issues I found with regard to GSO_PARTIAL and TSO_MANGLEID. The next 4 patches go through and enable GSO_PARTIAL for VXLAN tunnels that have an outer checksum enabled, and then enable IPv6 support where I can.

One outstanding issue is that I wasn't able to get offloads working with outer IPv6 headers on mlx4. However, that wasn't a feature that was enabled before, so it isn't technically a regression; engineers from Mellanox said they would look into it since they thought it should be supported.

The last patch enables GSO_PARTIAL for VXLAN and GRE tunnels on the bnxt driver. One piece of feedback I received on the patch was that the hardware has globally set IPv6 UDP tunnels to always have the checksum field computed. I plan to work with Broadcom to get that addressed so that we only populate the checksum field if it was requested by the network stack.

v2: Rebased patches off of latest changes to the mlx4/mlx5 drivers. Added the bnxt driver patch as I received feedback on the RFC.
v3: Moved 2 patches into the series for net as they were generic fixes. Added a patch to disable GSO partial if the frame is less than 2x the size of the MSS.

There are outstanding issues as called out above that need to be addressed; however, they were present before these patches, so it isn't as if they introduce a regression. In addition, gains can easily be seen, so there should be no issue with applying the driver patches while the IPv6 mlx4_en and bnxt issues are being researched.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-04 | bnxt: Add support for segmentation of tunnels with outer checksums | Alexander Duyck
This patch assumes that the bnxt hardware will ignore existing IPv4/v6 header fields for length and checksum as well as the length and checksum fields for outer UDP and GRE headers. I have been told by Michael Chan that this is working. Though this might be somewhat redundant for IPv6 as they are forcing the checksum to be computed for all IPv6 frames that are offloaded. A follow-up patch may be necessary in order to fix this as it is essentially mangling the outer IPv6 headers to add a checksum where none was requested. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-04 | net/mlx5e: Fix IPv6 tunnel checksum offload | Alexander Duyck
The mlx5 driver exposes support for TSO6 but not IPv6 csum for hardware encapsulated tunnels. This leads to issues, as it triggers warnings in skb_checksum_help, which ends up being called because we report support for segmentation but not checksumming of IPv6 frames. This patch corrects that and drops 2 features that don't actually need to be supported in hw_enc_features, since they are Rx features and don't actually impact anything by being present in hw_enc_features. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
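A sketch of the idea (the exact mlx5e feature setup is not reproduced here): if TSO6 is advertised for encapsulated traffic, IPv6 checksum offload has to be advertised alongside it.

    #include <linux/netdevice.h>

    static void example_set_enc_features(struct net_device *netdev)
    {
            /* advertise matching checksum bits for every TSO bit exposed
             * in hw_enc_features, otherwise skb_checksum_help() ends up
             * being called for frames we claimed we could segment */
            netdev->hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
                                       NETIF_F_TSO | NETIF_F_TSO6;
    }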
2016-05-04 | net/mlx5e: Add support for UDP tunnel segmentation with outer checksum offload | Alexander Duyck
This patch assumes that the mlx5 hardware will ignore existing IPv4/v6 header fields for length and checksum as well as the length and checksum fields for outer UDP headers. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-04 | net/mlx4_en: Add support for inner IPv6 checksum offloads and TSO | Alexander Duyck
From what I can tell, the ConnectX-3 will support an inner IPv6 checksum and segmentation offload, however it cannot support outer IPv6 headers. This assumption is based on the fact that I could see the checksum being offloaded for the inner header on IPv4 tunnels, but not on IPv6 tunnels. For this reason I am adding the feature to the hw_enc_features and adding an extra check to the features_check call that will disable GSO and checksum offload in the case that the encapsulated frame has an outer IP version that is not 4. The check in mlx4_en_features_check could be removed if at some point in the future a fix is found that allows the hardware to offload segmentation/checksum on tunnels with an outer IPv6 header. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
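An illustrative sketch of such a features_check (not the exact mlx4_en code):

    #include <linux/ip.h>
    #include <linux/netdevice.h>

    /* For encapsulated frames, drop checksum and GSO offloads when the
     * outer IP version is not 4, since the hardware cannot offload outer
     * IPv6 headers. */
    static netdev_features_t example_features_check(struct sk_buff *skb,
                                                    struct net_device *dev,
                                                    netdev_features_t features)
    {
            if (skb->encapsulation && ip_hdr(skb)->version != 4)
                    features &= ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);
            return features;
    }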
2016-05-04 | net/mlx4_en: Add support for UDP tunnel segmentation with outer checksum offload | Alexander Duyck
This patch assumes that the mlx4 hardware will ignore existing IPv4/v6 header fields for length and checksum as well as the length and checksum fields for outer UDP headers. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-04 | net: Fix netdev_fix_features so that TSO_MANGLEID is only available with TSO | Alexander Duyck
This change makes it so that we will strip the TSO_MANGLEID bit if TSO is not present. This way we will also handle ECN correctly if TSO is not present. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
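A sketch of the dependency (the placement inside netdev_fix_features() and the exact condition are paraphrased):

    #include <linux/netdevice.h>

    static netdev_features_t example_fix_features(netdev_features_t features)
    {
            /* TSO_MANGLEID is only meaningful when TSO itself is enabled */
            if ((features & NETIF_F_TSO_MANGLEID) && !(features & NETIF_F_TSO))
                    features &= ~NETIF_F_TSO_MANGLEID;
            return features;
    }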
2016-05-04 | gso: Only allow GSO_PARTIAL if we can checksum the inner protocol | Alexander Duyck
This patch addresses a possible issue that can occur if we get into any odd corner cases where we support TSO for a given protocol but not the checksum or scatter-gather offload. There are a few drivers floating around that set up their tunnels this way, and by enforcing the checksum piece we can avoid mangling any frames. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-04 | gso: Do not perform partial GSO if number of partial segments is 1 or less | Alexander Duyck
In the event that the number of partial segments is equal to 1 we don't really need to perform partial segmentation offload. As such we should skip multiplying the MSS and instead just clear the partial_segs value since it will not provide any gain to advertise the frame as being GSO when it is a single frame. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
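A simplified sketch of the adjustment (not the full skb_segment() logic):

    /* Scale the MSS only when more than one partial segment would result;
     * otherwise report zero so the caller falls back to plain GSO. */
    static unsigned int example_partial_segs(unsigned int len, unsigned int *mss)
    {
            unsigned int partial_segs = len / *mss;

            if (partial_segs > 1)
                    *mss *= partial_segs;
            else
                    partial_segs = 0;       /* single frame: no partial GSO */

            return partial_segs;
    }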
2016-05-04 | gre: change gre_parse_header to return the header length | Jiri Benc
It's easier for gre_parse_header to return the header length instead of filling it into a parameter. That way, the callers that don't care about the header length can just check whether the returned value is lower than zero. In gre_err, the tunnel header must not be pulled. See commit b7f8fe251e46 ("gre: do not pull header in ICMP error processing") for details. This patch reduces the conflict between the mentioned commit and commit 95f5c64c3c13 ("gre: Move utility functions to common headers"). Signed-off-by: Jiri Benc <jbenc@redhat.com> Acked-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-05-04 | tcp: guarantee forward progress in tcp_sendmsg() | Eric Dumazet
Under high rx pressure, it is possible tcp_sendmsg() never has a chance to allocate an skb and loop forever as sk_flush_backlog() would always return true. Fix this by calling sk_flush_backlog() only if one skb had been allocated and filled before last backlog check. Fixes: d41a69f1d390 ("tcp: make tcp_sendmsg() aware of socket backlog") Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
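A heavily simplified model of the fix (not the actual tcp_sendmsg() code; the callbacks stand in for the socket backlog and the skb fill path):

    #include <stdbool.h>
    #include <stddef.h>

    /* The backlog is only flushed after at least one skb-worth of data has
     * been produced, so the loop cannot spin on the backlog check forever
     * without making forward progress. */
    static void example_send_loop(size_t left,
                                  bool (*backlog_pending)(void),
                                  void (*flush_backlog)(void),
                                  size_t (*fill_one)(size_t left))
    {
            bool made_progress = false;

            while (left) {
                    if (made_progress && backlog_pending()) {
                            flush_backlog();
                            made_progress = false;
                            continue;
                    }
                    left -= fill_one(left);   /* allocate and fill one skb */
                    made_progress = true;
            }
    }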
2016-05-04 | xen/evtchn: fix ring resize when binding new events | Jan Beulich
The copying of ring data was wrong for two cases: For a full ring nothing got copied at all (as in that case the canonicalized producer and consumer indexes are identical). And in case one or both of the canonicalized (after the resize) indexes would point into the second half of the buffer, the copied data ended up in the wrong (free) part of the new buffer. In both cases uninitialized data would get passed back to the caller. Fix this by simply copying the old ring contents twice: Once to the low half of the new buffer, and a second time to the high half. This addresses the inability to boot a HVM guest with 64 or more vCPUs. This regression was caused by 8620015499101090 (xen/evtchn: dynamically grow pending event channel ring). Reported-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Jan Beulich <jbeulich@suse.com> Cc: <stable@vger.kernel.org> # 4.4+ Signed-off-by: David Vrabel <david.vrabel@citrix.com>
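A sketch of the "copy twice" resize described above (names are illustrative; the real ring entries are event channel ports):

    #include <string.h>

    /* Copy the whole old ring into both halves of the doubled buffer so
     * the entries are valid wherever the canonicalized producer/consumer
     * indexes end up pointing, including the full-ring case. */
    static void example_ring_resize(unsigned int *new_ring,
                                    const unsigned int *old_ring,
                                    size_t old_entries)
    {
            size_t bytes = old_entries * sizeof(*old_ring);

            memcpy(new_ring, old_ring, bytes);                /* low half  */
            memcpy(new_ring + old_entries, old_ring, bytes);  /* high half */
    }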
2016-05-04 | intel_pstate: Fix intel_pstate_get() | Rafael J. Wysocki
After commit 8fa520af5081 "intel_pstate: Remove freq calculation from intel_pstate_calc_busy()" intel_pstate_get() calls get_avg_frequency() to compute the average frequency, which is problematic for two reasons. First, intel_pstate_get() may be invoked before the driver reads the CPU feedback registers for the first time and if that happens, get_avg_frequency() will attempt to divide by zero. Second, the get_avg_frequency() call in intel_pstate_get() is racy with respect to intel_pstate_sample() and it may end up returning completely meaningless values for this reason. Moreover, after commit 7349ec0470b6 "intel_pstate: Move intel_pstate_calc_busy() into get_target_pstate_use_performance()" sample.core_pct_busy is never computed on Atom, but it is used in intel_pstate_adjust_busy_pstate() in that case too. To address those problems notice that if sample.core_pct_busy was used in the average frequency computation carried out by get_avg_frequency(), both the divide by zero problem and the race with respect to intel_pstate_sample() would be avoided. Accordingly, move the invocation of intel_pstate_calc_busy() from get_target_pstate_use_performance() to intel_pstate_update_util(), which also will take care of the uninitialized sample.core_pct_busy on Atom, and modify get_avg_frequency() to use sample.core_pct_busy as per the above. Reported-by: kernel test robot <ying.huang@linux.intel.com> Link: http://marc.info/?l=linux-kernel&m=146226437623173&w=4 Fixes: 8fa520af5081 "intel_pstate: Remove freq calculation from intel_pstate_calc_busy()" Fixes: 7349ec0470b6 "intel_pstate: Move intel_pstate_calc_busy() into get_target_pstate_use_performance()" Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2016-05-04 | mmc: sdhci-of-arasan: fix set_clock when a phy is supported | Shawn Lin
commit 61b914eb81f8 ("mmc: sdhci-of-arasan: add phy support for sdhci-of-arasan") introduced phy support for arasan. According to the vendor's databook, we should make sure the phy is in poweroff status before we configure the clk stuff. Otherwise it may cause some IO sample timing issues seen in testing. And we don't need this extra operation while running in low performance mode since the phy doesn't trigger the sampling block. Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2016-05-04 | mmc: omap: Use dma_request_chan() for requesting DMA channel | Peter Ujfalusi
With the new dma_request_chan() the client driver does not need to look for the DMA resource and it does not need to pass filter_fn anymore. By switching to the new API the driver can now support deferred probing against DMA. Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> CC: Ulf Hansson <ulf.hansson@linaro.org> CC: Jarkko Nikula <jarkko.nikula@bitmer.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
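A sketch of the new request pattern (the channel name "tx" is illustrative; dma_request_chan() is the real API):

    #include <linux/dmaengine.h>
    #include <linux/err.h>

    /* dma_request_chan() resolves the channel from DT/ACPI or board data
     * itself and can return -EPROBE_DEFER, so the client driver needs no
     * filter function or explicit resource lookup. */
    static int example_request_dma(struct device *dev, struct dma_chan **chan)
    {
            *chan = dma_request_chan(dev, "tx");
            if (IS_ERR(*chan))
                    return PTR_ERR(*chan);   /* may be -EPROBE_DEFER */
            return 0;
    }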
2016-05-04 | mmc: mmc: Attempt to flush cache before reset | Adrian Hunter
CMD0 or hardware reset may invalidate the cache, so it needs to be flushed before reset. In the case of recovery, we can't expect flushing the cache to work always, but have a go and ignore errors. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
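A simplified sketch of the behavior (not the exact mmc core code):

    #include <linux/mmc/card.h>
    #include <linux/mmc/core.h>
    #include <linux/printk.h>

    /* Try to flush the card's cache before issuing a reset; in the
     * recovery path a flush failure is logged and ignored rather than
     * aborting the reset. */
    static void example_flush_before_reset(struct mmc_card *card)
    {
            int err = mmc_flush_cache(card);

            if (err)
                    pr_warn("cache flush before reset failed: %d (ignored)\n", err);
            /* proceed with CMD0 / hardware reset regardless */
    }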
2016-05-04 | irqchip: Add Layerscape SCFG MSI controller support | Minghuan Lian
Some kinds of Freescale Layerscape SoCs provide an MSI implementation which uses two SCFG registers, MSIIR and MSIR, to support 32 MSI interrupts for each PCIe controller. This patch adds support for it. Signed-off-by: Minghuan Lian <Minghuan.Lian@nxp.com> Tested-by: Alexander Stein <alexander.stein@systec-electronic.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-05-04 | dt/bindings: Add bindings for Layerscape SCFG MSI | Minghuan Lian
Some Layerscape SoCs use a simple MSI controller implementation. It contains only two SCFG registers to trigger and describe a group of 32 MSI interrupts. The patch adds bindings to describe the controller. Signed-off-by: Minghuan Lian <Minghuan.Lian@nxp.com> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2016-05-04 | ima: fix the string representation of the LSM/IMA hook enumeration ordering | Mimi Zohar
This patch fixes the string representation of the LSM/IMA hook enumeration ordering used for displaying the IMA policy. Fixes: d9ddf077bb85 ("ima: support for kexec image and initramfs") Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com> Tested-by: Eric Richter <erichte@linux.vnet.ibm.com> Signed-off-by: James Morris <james.l.morris@oracle.com>
2016-05-04 | iio: imu: mpu6050: Fix name/chip_id when using ACPI | Daniel Baluta
When using ACPI, id is NULL and the current code automatically defaults name to NULL and chip id to 0. We should instead use the data provided in the ACPI device table. Fixes: c816d9e7a57b ("iio: imu: mpu6050: fix possible NULL dereferences") Signed-off-by: Daniel Baluta <daniel.baluta@intel.com> Reviewed-By: Matt Ranostay <matt.ranostay@intel.com> Signed-off-by: Jonathan Cameron <jic23@kernel.org>
2016-05-04 | iio: imu: mpu6050: fix possible NULL dereferences | Matt Ranostay
Fix possible null dereferencing of i2c and spi driver data. Signed-off-by: Matt Ranostay <matt.ranostay@intel.com> Signed-off-by: Jonathan Cameron <jic23@kernel.org>
2016-05-04 | mmc: sh_mobile_sdhi: check return value when changing clk | Wolfram Sang
And return the old clock rate if something went wrong. Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Tested-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2016-05-04 | mmc: sh_mobile_sdhi: only change the clock on RCar Gen2+ | Wolfram Sang
We had a regression on r8a7740 where the SDHI clock was a generic peripheral clock, so changing its rate was not desired. This should be fixed in the clock driver. However, it also shows that the new clock calculation should only be used on tested systems. Add a check for that. Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Tested-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2016-05-04 | mmc: tmio/sdhi: introduce flag for RCar 2+ specific features | Wolfram Sang
RCar Gen2 and later implementations of TMIO/SDHI have their own set of features and additions. FAST_CLK_CHG is just one of them and I see a few others being added soon. Some may work on older chipsets but this needs to be tested case by case. Instead of adding a bunch of flags for each feature, add a global RCar2+ one for now. We can still break out features if the need arises. Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Tested-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2016-05-04 | mmc: sh_mobile_sdhi: make clk_update function more compact | Wolfram Sang
Save a few lines, the codebase is large enough. Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Tested-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2016-05-04 | mmc: omap_hsmmc: Use dma_request_chan() for requesting DMA channel | Peter Ujfalusi
With the new dma_request_chan() the client driver does not need to look for the DMA resource and it does not need to pass filter_fn anymore. By switching to the new API the driver can now support deferred probing against DMA. Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2016-05-04 | mmc: sdhci-of-at91: add presets setup | Ludovic Desroches
The controller claims to support SDR104. In fact, it only supports a degraded SDR104 since the maximum frequency of the SD clock is 120 MHz instead of 208 MHz. The sdhci core is unaware of it and will compute a wrong clock divider. We can deal with this specific case by using presets. Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2016-05-04 | ixgbevf: Remove unused parameter | Tony Nguyen
ixgbevf_update_xcast_mode() is not using the netdev parameter; removing it since it's unnecessary. Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-05-04 | ixgbe: Disable DCB and FCoE for X550EM_x and x550em_a | Usha Ketineni
This patch adds the IXGBE_FLAG_DCB_CAPABLE flag that is set for all MACs other than X550EM_x and x550em_a. DCB and FCoE are disabled for these MACs. DCB initialization code is moved to a separate function. Signed-off-by: Usha Ketineni <usha.k.ketineni@intel.com> Tested-by: Ronald Bynoe <ronald.j.bynoe@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>