2013-09-20  declance: Remove `incompatible pointer type' warnings  (Maciej W. Rozycki)
Revert damage caused by 43d620c82985b19008d87a437b4cf83f356264f7:

    .../declance.c: In function 'cp_to_buf':
    .../declance.c:347: warning: assignment from incompatible pointer type
    .../declance.c:348: warning: assignment from incompatible pointer type
    .../declance.c: In function 'cp_from_buf':
    .../declance.c:406: warning: assignment from incompatible pointer type
    .../declance.c:407: warning: assignment from incompatible pointer type

Also add a `const' qualifier where applicable and adjust formatting.

Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-20  phy/micrel: Add suspend/resume support to Micrel PHYs  (Patrice Vilchez)
All supported Micrel PHYs implement the standard "power down" bit 11 of BMCR, so this patch adds support using the generic genphy_{suspend,resume} functions. Signed-off-by: Patrice Vilchez <patrice.vilchez@atmel.com> [b.brezillon@overkiz.com: adapt to newer kernel and generalize to other phys] Signed-off-by: Boris BREZILLON <b.brezillon@overkiz.com> [nicolas.ferre@atmel.com: commit message modification] Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com> Cc: David J. Choi <david.choi@micrel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
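For illustration, a phy_driver entry wired up this way might look roughly as follows (a minimal sketch, not the actual driver table; PHY_ID_KS8051 is assumed to be the ID macro from micrel_phy.h):

    #include <linux/phy.h>
    #include <linux/micrel_phy.h>

    static struct phy_driver ks8051_driver = {
            .phy_id         = PHY_ID_KS8051,        /* assumed ID macro */
            .phy_id_mask    = 0x00fffff0,
            .name           = "Micrel KS8051",
            .features       = PHY_BASIC_FEATURES,
            .suspend        = genphy_suspend,       /* sets the BMCR power-down bit (bit 11) */
            .resume         = genphy_resume,        /* clears the BMCR power-down bit */
    };

Reusing genphy_{suspend,resume} avoids duplicating the BMCR bit handling in every Micrel driver entry.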
2013-09-20  net_sched: htb: support of 64bit rates  (Eric Dumazet)
HTB can already deal with 64bit rates; we only have to add two new attributes so that tc can use them to break the current 32bit ABI barrier:

    TCA_HTB_RATE64 : class rate (in bytes per second)
    TCA_HTB_CEIL64 : class ceil (in bytes per second)

This allows us to set up HTB on 40Gbps links, as the 32bit limit is actually ~34Gbps.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
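The ~34Gbps figure follows directly from the old ABI carrying the rate as a 32-bit byte count per second; a tiny standalone C check of the arithmetic:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            uint64_t max_bytes_per_sec = UINT32_MAX;                /* 4,294,967,295 B/s */
            uint64_t max_bits_per_sec  = max_bytes_per_sec * 8ULL;  /* ~34.36e9 bit/s */

            printf("32-bit rate ceiling: %.2f Gbit/s\n", max_bits_per_sec / 1e9);
            return 0;
    }

This prints roughly 34.36 Gbit/s, which is why 40Gbps links need the new 64-bit attributes.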
2013-09-20  net_sched: add u64 rate to psched_ratecfg_precompute()  (Eric Dumazet)
Add an extra u64 rate parameter to psched_ratecfg_precompute() so that qdiscs can opt in to 64bit rates in the future, to overcome the ~34Gbps limit. psched_ratecfg_getrate() reports a legacy structure to the tc utility, so if the actual rate is above what the 32bit rate field can hold, cap the reported value at that limit. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
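A kernel-style sketch of the reporting side described above (the helper name is made up; the real logic lives in psched_ratecfg_getrate()):

    #include <linux/kernel.h>
    #include <linux/types.h>
    #include <linux/pkt_sched.h>

    /* The legacy struct tc_ratespec exposed to tc only has a 32-bit rate
     * field, so a 64-bit byte rate has to be capped when reported. */
    static void report_legacy_rate(struct tc_ratespec *res, u64 rate_bytes_ps)
    {
            res->rate = min_t(u64, rate_bytes_ps, ~0U);
    }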
2013-09-20  net: ethernet: eth.c: removed checkpatch warnings and errors  (Avinash Kumar)
Removed these checkpatch.pl warnings:

    net/ethernet/eth.c:61: WARNING: Use #include <linux/uaccess.h> instead of <asm/uaccess.h>
    net/ethernet/eth.c:136: WARNING: Prefer netdev_dbg(netdev, ... then dev_dbg(dev, ... then pr_debug(... to printk(KERN_DEBUG ...
    net/ethernet/eth.c:181: ERROR: space prohibited before that close parenthesis ')'

Signed-off-by: Avinash Kumar <avi.kp.137@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-20  net: cdc-phonet: Staticize usbpn_probe  (Sachin Kamat)
'usbpn_probe' is referenced only in this file. Make it static. Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org> Acked-by: Rémi Denis-Courmont <remi@remlab.net> Cc: Rémi Denis-Courmont <remi@remlab.net> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-20  net: cxgb4vf: Staticize local symbols  (Sachin Kamat)
Local symbols used only in this file are made static. Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org> Cc: Casey Leedom <leedom@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-20  net: bnx2x: Staticize local symbols  (Sachin Kamat)
Local symbols used only in this file are made static. Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org> Cc: Eilon Greenstein <eilong@broadcom.com> Cc: Ariel Elior <ariele@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-20  net: emaclite: Code cleanup  (Michal Simek)
No functional changes; just whitespace cleanup (s/\ \t/\t/g). Signed-off-by: Michal Simek <michal.simek@xilinx.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-20  net: emaclite: Not necessary to call devm_iounmap  (Michal Simek)
The devm-managed mapping is unmapped automatically when the device is released, so there is no need to call devm_iounmap() explicitly. Signed-off-by: Michal Simek <michal.simek@xilinx.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-09-20  sfc: Support ARFS for IPv6 flows  (Ben Hutchings)
Extend efx_filter_rfs() to map TCP/IPv6 and UDP/IPv6 flows into efx_filter_spec. These are only supported on EF10; on Falcon and Siena they will be rejected by efx_farch_filter_from_gen_spec(). Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-09-20  sfc: Use TX PIO for sufficiently small packets  (Jon Cooper)
Sufficiently small linear packets can be copied into the PIO buffer with a single call to memcpy_toio(). Non-linear packets require an intermediate cache-line-sized buffer. [bwh: I wrote the first version of this, but Jon did the hard work to handle non-linear packets.] Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
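A rough sketch of the fast-path decision (names assumed; the real driver also handles the cache-line-sized bounce buffer for non-linear packets):

    #include <linux/skbuff.h>
    #include <linux/io.h>

    static bool try_pio_copy(void __iomem *piobuf, struct sk_buff *skb,
                             unsigned int pio_threshold)
    {
            /* Only small, linear packets can go straight into the PIO aperture. */
            if (skb_shinfo(skb)->nr_frags || skb->len > pio_threshold)
                    return false;

            memcpy_toio(piobuf, skb->data, skb->len);
            return true;
    }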
2013-09-20  sfc: Introduce inline functions to simplify TX insertion  (Ben Hutchings)
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-09-20  sfc: Separate out queue-empty check from efx_nic_may_push_tx_desc()  (Ben Hutchings)
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-09-20  sfc: Allocate and link PIO buffers; map them with write-combining  (Ben Hutchings)
Try to allocate a segment of PIO buffer to each TX channel. If allocation fails, log an error but continue. PIO buffers must be mapped separately from the NIC registers, with write-combining enabled. Where the host page size is 4K, we could potentially map each VI's registers and PIO buffer separately. However, this would add significant complexity, and we also need to support architectures such as POWER which have a greater page size. So make a single contiguous write-combining mapping after the uncacheable mapping, aligned to the host page size, and link PIO buffers there. Where necessary, allocate additional VIs within the write-combining mapping purely for access to PIO buffers. Link all TX buffers to TX queues and the additional VIs in efx_ef10_dimension_resources() and in efx_ef10_init_nic() after an MC reboot. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
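The mapping split described above can be pictured with a small sketch (names and layout assumed, not the driver's actual code): the register BAR keeps its uncacheable mapping, and the PIO aperture behind it gets a separate write-combined mapping aligned to the host page size:

    #include <linux/io.h>
    #include <linux/mm.h>

    static void __iomem *map_pio_aperture(phys_addr_t bar_base,
                                          unsigned long reg_len,
                                          unsigned long pio_len)
    {
            /* the registers themselves stay in an uncacheable mapping elsewhere */
            return ioremap_wc(bar_base + PAGE_ALIGN(reg_len), pio_len);
    }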
2013-09-20  sfc: Implement firmware-assisted TSO for EF10  (Ben Hutchings)
Segmentation remains in the driver, but we generate option descriptors describing the required packet editing rather than making our own copies. Reduce tso_state::ipv4_id to 16 bits, so it doesn't overflow into the TCP_FLAGS field of the option descriptor. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-09-20  sfc: Fold tso_get_head_fragment() into tso_start()  (Ben Hutchings)
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-09-20  sfc: Add EF10 registers to register dump  (Ben Hutchings)
There are very few readable registers, but we may as well report them. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-09-20  sfc: efx_ef10_filter_update_rx_scatter() can be static  (Fengguang Wu)
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-09-20  sfc: efx_ethtool_get_ts_info() can be static  (Fengguang Wu)
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-09-20  drm/radeon/cik: Fix printing of client name on VM protection fault  (Michel Dänzer)
The string is encoded from the MSB to the LSB of the register. Cc: stable@vger.kernel.org Signed-off-by: Michel Dänzer <michel.daenzer@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2013-09-20  drm/radeon: additional gcc fixes for radeon_atombios.c  (Alex Deucher)
Newer versions of gcc seem to wander off into the weeds when dealing with variable-size arrays in structs. Rather than indexing the arrays, use pointer arithmetic. Fix up the spread spectrum tables. See bugs:

    https://bugs.freedesktop.org/show_bug.cgi?id=66932
    https://bugs.freedesktop.org/show_bug.cgi?id=66972
    https://bugs.freedesktop.org/show_bug.cgi?id=66945

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2013-09-20  drm/radeon: avoid UVD corruption on AGP cards using GPU gart  (Alex Deucher)
If the user has forced the driver to use the internal GPU gart rather than AGP on an AGP card, force the buffers to vram as well. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de> Cc: stable@vger.kernel.org
2013-09-20  sfc: Increase MCDI status timeout to 250ms  (Daniel Pieczko)
The SFC9120 MC firmware often takes longer than 20ms to reboot and update the warm boot count in BIU_MC_SFT_STATUS_REG. A timeout of 250ms is very generous for an MC reboot. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-09-20  sfc: Wait for MC reboot to complete before scheduling driver reset  (Daniel Pieczko)
Scheduling a reset following an MC reboot event before waiting for reboot to complete results in a race that can lead to a state where must_realloc_vis is false in efx_ef10_fini_dmaq() but the VIs have been destroyed during the MC reboot. To avoid MC errors when trying to remove VIs that do not exist, wait for the MC reboot to complete before scheduling the reset. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-09-20  lockref: use cmpxchg64 explicitly for lockless updates  (Will Deacon)
The cmpxchg() function tends not to support 64-bit arguments on 32-bit architectures. This could be either due to use of unsigned long arguments (like on ARM) or lack of instruction support (cmpxchgq on x86). However, these architectures may implement a specific cmpxchg64() function to provide 64-bit cmpxchg support instead. Since the lockref code requires a 64-bit cmpxchg and relies on the architecture selecting ARCH_USE_CMPXCHG_LOCKREF, move to using cmpxchg64 instead of cmpxchg and allow 32-bit architectures to make use of the lockless lockref implementation. Cc: Waiman Long <Waiman.Long@hp.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
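To illustrate why a 64-bit cmpxchg is required, here is a minimal kernel-style sketch of the lockless fast path (not the real lockref code; the word layout is an assumption for the example):

    #include <linux/types.h>
    #include <linux/atomic.h>

    struct fake_lockref {
            u64 lock_count;         /* assumed: lock in the low half, count in the high half */
    };

    static bool fake_lockref_get_if_unlocked(struct fake_lockref *lr)
    {
            u64 old = lr->lock_count;

            if (old & 0xffffffffULL)        /* low half non-zero: lock is held */
                    return false;

            /* cmpxchg64() is the portable 64-bit CAS on 32-bit architectures
             * where plain cmpxchg() only handles unsigned long. */
            return cmpxchg64(&lr->lock_count, old, old + (1ULL << 32)) == old;
    }

The point of the patch is exactly this call: the count and the spinlock share one 64-bit word, so the whole update must be a single 64-bit compare-and-swap.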
2013-09-20  dm mpath: disable WRITE SAME if it fails  (Mike Snitzer)
Work around the SCSI layer's problematic WRITE SAME heuristics by disabling WRITE SAME in the DM multipath device's queue_limits if an underlying device disabled it. The WRITE SAME heuristics, with both the original commit 5db44863b6eb ("[SCSI] sd: Implement support for WRITE SAME") and the updated commit 66c28f971 ("[SCSI] sd: Update WRITE SAME heuristics"), default to enabling WRITE SAME(10) even without successfully determining it is supported. After the first failed WRITE SAME the SCSI layer will disable WRITE SAME for the device (by setting sdkp->device->no_write_same, which results in 'max_write_same_sectors' in the device's queue_limits being set to 0).

When a device is stacked on top of such a SCSI device, any changes to that SCSI device's queue_limits do not automatically propagate up the stack. As such, a DM multipath device will not have its WRITE SAME support disabled. This causes the block layer to continue to issue WRITE SAME requests to the mpath device, which causes paths to fail and (if mpath IO isn't configured to queue when no paths are available) results in actual IO errors to the upper layers.

This fix doesn't help configurations that have additional devices stacked on top of the mpath device (e.g. LVM-created linear DM devices on top). A proper fix that restacks all the queue_limits from the bottom of the device stack up will need to be explored if SCSI continues to use this model of optimistically allowing op codes and then disabling them after they fail for the first time.

Before this patch:

    EXT4-fs (dm-6): mounted filesystem with ordered data mode. Opts: (null)
    device-mapper: multipath: XXX snitm debugging: got -EREMOTEIO (-121)
    device-mapper: multipath: XXX snitm debugging: failing WRITE SAME IO with error=-121
    end_request: critical target error, dev dm-6, sector 528
    dm-6: WRITE SAME failed. Manually zeroing.
    device-mapper: multipath: Failing path 8:112.
    end_request: I/O error, dev dm-6, sector 4616
    dm-6: WRITE SAME failed. Manually zeroing.
    end_request: I/O error, dev dm-6, sector 4616
    end_request: I/O error, dev dm-6, sector 5640
    end_request: I/O error, dev dm-6, sector 6664
    end_request: I/O error, dev dm-6, sector 7688
    end_request: I/O error, dev dm-6, sector 524288
    Buffer I/O error on device dm-6, logical block 65536
    lost page write due to I/O error on dm-6
    JBD2: Error -5 detected when updating journal superblock for dm-6-8.
    end_request: I/O error, dev dm-6, sector 524296
    Aborting journal on device dm-6-8.
    end_request: I/O error, dev dm-6, sector 524288
    Buffer I/O error on device dm-6, logical block 65536
    lost page write due to I/O error on dm-6
    JBD2: Error -5 detected when updating journal superblock for dm-6-8.

    # cat /sys/block/sdh/queue/write_same_max_bytes
    0
    # cat /sys/block/dm-6/queue/write_same_max_bytes
    33553920

After this patch:

    EXT4-fs (dm-6): mounted filesystem with ordered data mode. Opts: (null)
    device-mapper: multipath: XXX snitm debugging: got -EREMOTEIO (-121)
    device-mapper: multipath: XXX snitm debugging: WRITE SAME I/O failed with error=-121
    end_request: critical target error, dev dm-6, sector 528
    dm-6: WRITE SAME failed. Manually zeroing.

    # cat /sys/block/sdh/queue/write_same_max_bytes
    0
    # cat /sys/block/dm-6/queue/write_same_max_bytes
    0

It should be noted that WRITE SAME support wasn't enabled in DM multipath until v3.10.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: stable@vger.kernel.org # 3.10+
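The shape of the workaround, as a minimal sketch (the function name is illustrative; the real change hooks into multipath's end_io handling):

    #include <linux/blkdev.h>

    /* Once a path reports that WRITE SAME is not really supported, clear the
     * capability on the multipath device's own queue so the block layer
     * stops issuing WRITE SAME requests to it. */
    static void mpath_disable_write_same(struct request_queue *q)
    {
            blk_queue_max_write_same_sectors(q, 0);
    }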
2013-09-20  dm-snapshot: fix performance degradation due to small hash size  (Mikulas Patocka)
LVM2, since version 2.02.96, creates origin with zero size, then loads the snapshot driver and then loads the origin. Consequently, the snapshot driver sees the origin size zero and sets the hash size to the lower bound 64. Such small hash table causes performance degradation. This patch changes it so that the hash size is determined by the size of snapshot volume, not minimum of origin and snapshot size. It doesn't make sense to set the snapshot size significantly larger than the origin size, so we do not need to take origin size into account when calculating the hash size. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Cc: stable@vger.kernel.org
2013-09-20  dm snapshot: workaround for a false positive lockdep warning  (Mikulas Patocka)
The kernel reports a lockdep warning if a snapshot is invalidated because it runs out of space. The lockdep warning was triggered by commit 0976dfc1d0cd80a4e9dfaf87bd87 ("workqueue: Catch more locking problems with flush_work()") in v3.5. The warning is a false positive. The real cause of the warning is that the lockdep engine treats different instances of md->lock as a single lock. This patch is a workaround: we use flush_workqueue() instead of flush_work(). This code path is not performance sensitive (it is called only on initialization or invalidation), so it doesn't matter that we flush the whole workqueue. The real fix for the problem would be to teach the lockdep engine to treat different instances of md->lock as separate locks. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Acked-by: Alasdair G Kergon <agk@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Cc: stable@vger.kernel.org # 3.5+
2013-09-20  Merge branch 'pm-cpufreq'  (Rafael J. Wysocki)
* pm-cpufreq:
    cpufreq: return EEXIST instead of EBUSY for second registering
    ARM: shmobile: change dev_id to cpu0 while registering cpu clock
    ARM: i.MX: change dev_id to cpu0 while registering cpu clock
    cpufreq: imx6q-cpufreq: assign cpu_dev correctly to cpu0 device
    cpufreq: cpufreq-cpu0: assign cpu_dev correctly to cpu0 device
    cpufreq: unlock correct rwsem while updating policy->cpu
    cpufreq: Clear policy->cpus bits in __cpufreq_remove_dev_finish()
2013-09-20  Merge branch 'acpi-pci'  (Rafael J. Wysocki)
* acpi-pci:
    PCI / ACPI / PM: Clear pme_poll for devices in D3cold on wakeup
2013-09-20  Merge tag 'arm64-stable' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64  (Linus Torvalds)
Pull ARM64 fixes from Catalin Marinas:

    - Compat register fault reporting fix
    - Documentation clarification on tagged pointers
    - hwcap widened to 64-bit (user space already reading it as 64-bit)

* tag 'arm64-stable' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64:
    arm64: Widen hwcap to be 64 bit
    arm64: Correctly report LR and SP for compat tasks
    arm64: documentation: tighten up tagged pointer documentation
    arm64: Make do_bad_area() function static
2013-09-20  sched/balancing: Fix cfs_rq->task_h_load calculation  (Vladimir Davydov)
Patch a003a2 (sched: Consider runnable load average in move_tasks()) sets all top-level cfs_rqs' h_load to rq->avg.load_avg_contrib, which is always 0. This mistake leads to all tasks having weight 0 when load balancing in a cpu-cgroup enabled setup. It should obviously be the sum of the weights of all runnable tasks there instead. Fix it. Signed-off-by: Vladimir Davydov <vdavydov@parallels.com> Reviewed-by: Paul Turner <pjt@google.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1379173186-11944-1-git-send-email-vdavydov@parallels.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-09-20  sched/balancing: Fix 'local->avg_load > busiest->avg_load' case in fix_small_imbalance()  (Vladimir Davydov)
In the busiest->group_imb case we can come to fix_small_imbalance() with local->avg_load > busiest->avg_load. This can result in a wrong imbalance fix-up, because of the following check there, where all the members are unsigned:

    if (busiest->avg_load - local->avg_load + scaled_busy_load_per_task >=
        (scaled_busy_load_per_task * imbn)) {
            env->imbalance = busiest->load_per_task;
            return;
    }

As a result we can end up constantly bouncing tasks from one cpu to another if there are pinned tasks. Fix it by substituting the subtraction with an equivalent addition in the check. [ The bug can be caught by running 2*N cpuhogs pinned to two logical cpus belonging to different cores on an HT-enabled machine with N logical cpus: just look at se.nr_migrations growth. ]

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/ef167822e5c5b2d96cf5b0e3e4f4bdff3f0414a2.1379252740.git.vdavydov@parallels.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
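A standalone demonstration of the underflow and of the rearranged check (made-up numbers, plain userspace C):

    #include <stdio.h>

    int main(void)
    {
            unsigned long busiest_avg = 100, local_avg = 300;
            unsigned long busy_per_task = 50, imbn = 2;

            /* old check: the subtraction wraps to a huge value, so this is
             * (wrongly) true whenever local_avg > busiest_avg */
            int old_check = busiest_avg - local_avg + busy_per_task >=
                            busy_per_task * imbn;

            /* new check: the same inequality with the subtraction moved to
             * the other side, so nothing can wrap */
            int new_check = busiest_avg + busy_per_task >=
                            local_avg + busy_per_task * imbn;

            printf("old: %d (bogus), new: %d\n", old_check, new_check);
            return 0;
    }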
2013-09-20  sched/balancing: Fix 'local->avg_load > sds->avg_load' case in calculate_imbalance()  (Vladimir Davydov)
In the busiest->group_imb case we can come to calculate_imbalance() with local->avg_load >= busiest->avg_load >= sds->avg_load. This can result in an imbalance overflow, because it is calculated as follows:

    env->imbalance = min(max_pull * busiest->group_power,
                         (sds->avg_load - local->avg_load) * local->group_power)
                     / SCHED_POWER_SCALE;

As a result we can end up constantly bouncing tasks from one cpu to another if there are pinned tasks. Fix this by skipping the assignment and assuming imbalance=0 in case local->avg_load > sds->avg_load. [ The bug can be caught by running 2*N cpuhogs pinned to two logical cpus belonging to different cores on an HT-enabled machine with N logical cpus: just look at se.nr_migrations growth. ]

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/8f596cc6bc0e5e655119dc892c9bfcad26e971f4.1379252740.git.vdavydov@parallels.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-09-20  arm64: Widen hwcap to be 64 bit  (Steve Capper)
Under arm64 elf_hwcap is a 32 bit quantity, but it is stored in a 64 bit auxiliary ELF field and glibc reads hwcap as 64 bit. This patch widens elf_hwcap to be 64 bit. Signed-off-by: Steve Capper <steve.capper@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-09-20  arm64: Correctly report LR and SP for compat tasks  (Catalin Marinas)
When a task crashes and we print debugging information, ensure that compat tasks show the actual AArch32 LR and SP registers rather than the AArch64 ones. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-09-20  arm64: documentation: tighten up tagged pointer documentation  (Will Deacon)
Commit d50240a5f6ce ("arm64: mm: permit use of tagged pointers at EL0") added support for tagged pointers in userspace, but the corresponding update to Documentation/ contained some imprecise statements. This patch fixes up some minor ambiguities in the text, hopefully making it more clear about exactly what the kernel expects from user virtual addresses. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-09-20  arm64: Make do_bad_area() function static  (Catalin Marinas)
This function is only called from arch/arm64/mm/fault.c. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-09-20  Merge tag 'efi-urgent' of git://git.kernel.org/pub/scm/linux/kernel/git/mfleming/efi into x86/urgent  (Ingo Molnar)
Pull EFI fix from Matt Fleming:

    " * Fix WARNING on i386 by only enabling a workaround for x86-64 since
        we've never encountered the bug on i386 - Josh Boyer "

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-09-20  perf: Fix capabilities bitfield compatibility in 'struct perf_event_mmap_page'  (Peter Zijlstra)
Solve the problems around the broken definition of the perf_event_mmap_page::cap_usr_time and cap_usr_rdpmc fields, which used to overlap, partially fixed by: 860f085b74e9 ("perf: Fix broken union in 'struct perf_event_mmap_page'")

The problem with the fix (merged in v3.12-rc1 and not yet released officially), noticed by Vince Weaver, is that the new behavior is not detectable by new user-space, and that due to the reuse of the field names it's easy to mis-compile a binary if old headers are used on a new kernel or new headers are used on an old kernel.

To solve all that, make this change explicit, detectable and self-contained, by iterating the ABI the following way:

    - Always clear bit 0, and rename it to usrpage->cap_bit0, to at least not
      confuse old user-space binaries. RDPMC will be marked as unavailable to
      old binaries, but that's within the ABI; this is a capability bit.

    - Rename bit 1 to ->cap_bit0_is_deprecated and always set it to 1, so new
      libraries can reliably detect that bit 0 is deprecated and perma-zero
      without having to check the kernel version.

    - Use bits 2, 3, 4 for the newly defined, correct functionality:

        cap_user_rdpmc     : 1, /* The RDPMC instruction can be used to read counts */
        cap_user_time      : 1, /* The time_* fields are used */
        cap_user_time_zero : 1, /* The time_zero field is used */

    - Rename all the bitfield names in perf_event.h to be different from the
      old names, to make sure it's not possible to mis-compile it accidentally
      with old assumptions.

The 'size' field can then be used in the future to add new fields, and it will act as a natural ABI version indicator as well. Also adjust tools/perf/ userspace for the new definitions, noticed by Adrian Hunter.

Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Also-Fixed-by: Adrian Hunter <adrian.hunter@intel.com>
Link: http://lkml.kernel.org/n/tip-zr03yxjrpXesOzzupszqglbv@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
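For reference, the resulting user-visible layout looks roughly like this (a sketch of the bitfield only; the authoritative definition lives inside struct perf_event_mmap_page in include/uapi/linux/perf_event.h):

    #include <linux/types.h>

    union perf_mmap_page_caps {                        /* illustrative name */
            __u64 capabilities;
            struct {
                    __u64 cap_bit0               : 1,  /* always 0 */
                          cap_bit0_is_deprecated : 1,  /* always 1 */
                          cap_user_rdpmc         : 1,  /* RDPMC can read counts */
                          cap_user_time          : 1,  /* time_* fields are used */
                          cap_user_time_zero     : 1,  /* time_zero field is used */
                          cap_reserved           : 59; /* remaining bits unused */
            };
    };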
2013-09-20  ath10k: use msdu headroom to store txfrag  (Michal Kazior)
Instead of allocating an sk_buff for a mere 16-byte tx fragment list buffer, use the headroom of the original msdu sk_buff. This decreases CPU cache pressure and improves performance. The measured improvement on AP135 is 560Mbps -> 590Mbps of UDP TX bridging traffic. Signed-off-by: Michal Kazior <michal.kazior@tieto.com> Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
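The idea can be sketched like this (assumed names and size; not the driver's actual helper):

    #include <linux/skbuff.h>

    #define TXFRAG_LIST_LEN 16      /* assumed size of the fragment list */

    /* Place the fragment list in the msdu's own headroom, just in front of
     * the frame, instead of allocating a separate sk_buff for it. */
    static void *txfrag_from_headroom(struct sk_buff *msdu)
    {
            if (skb_headroom(msdu) < TXFRAG_LIST_LEN)
                    return NULL;    /* caller would fall back to a separate buffer */

            return msdu->data - TXFRAG_LIST_LEN;
    }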
2013-09-20  ath10k: cleanup HTT TX functions  (Michal Kazior)
Use a saner goto scheme for failure handling. Also group operations more sensibly. Signed-off-by: Michal Kazior <michal.kazior@tieto.com> Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
2013-09-20  ath10k: decouple HTT TX completions  (Michal Kazior)
Until now, all MSDU transfer related structures were freed only once all resources were unreferenced. Now the HTC transfer and the HTT transfer are each freed independently. This yields a much simpler ath10k_skb_cb and may eventually enable parallel pipe processing (which is currently serialized in the ath10k_pci_process_ce routine). Signed-off-by: Michal Kazior <michal.kazior@tieto.com> Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
2013-09-20  ath10k: avoid needless memset on TX path  (Michal Kazior)
This reduces number of memory accesses and hopefully contributes to better performance in the future. Signed-off-by: Michal Kazior <michal.kazior@tieto.com> Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
2013-09-20  ath10k: use num_pending_tx instead of msdu id bitmap  (Michal Kazior)
It's more efficient to simply check num_pending_tx value instead of traversing whole bitmap of msdu ids. Signed-off-by: Michal Kazior <michal.kazior@tieto.com> Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
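The check itself becomes a constant-time counter comparison, along the lines of this sketch (illustrative names; the real driver keeps the counter under its own locking):

    #include <linux/atomic.h>

    static bool htt_tx_is_full(atomic_t *num_pending_tx, int max_num_pending_tx)
    {
            /* no need to walk the msdu-id bitmap just to see if we are full */
            return atomic_read(num_pending_tx) >= max_num_pending_tx;
    }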
2013-09-20  ath10k: fix num_sends_allowed replenishing  (Michal Kazior)
Commit e9bb0aa39 ("ath10k: delete struct ce_sendlist") broke num_sends_allowed incrementing. num_sends_allowed exceeded initial values and could overflow. This code was supposed to replenish num_sends_allowed for partial sendlist items (i.e. before final sendlist item from a sendlist was completed and could be processed by completion handlers). Fortunately it seems it did not cause any major breakage, yet. Signed-off-by: Michal Kazior <michal.kazior@tieto.com> Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
2013-09-20  ath10k: fix tracing build for ath10k_wmi_cmd  (Michal Kazior)
Commit be8b394390 ("ath10k: make WMI commands block by design") broke the build if CONFIG_ATH10K_TRACING was enabled. Reported-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: Michal Kazior <michal.kazior@tieto.com> Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
2013-09-20  perf/x86/intel/uncore: Don't use smp_processor_id() in validate_group()  (Yan, Zheng)
uncore_validate_group() can't call smp_processor_id() because it is in preemptible context. Pass NUMA_NO_NODE to the allocator instead. Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1379400493-11505-1-git-send-email-zheng.z.yan@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
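The pattern, in a minimal kernel-style sketch (the helper name is made up):

    #include <linux/slab.h>
    #include <linux/numa.h>

    static void *alloc_for_validation(size_t size)
    {
            /* was: kzalloc_node(size, GFP_KERNEL, cpu_to_node(smp_processor_id()));
             * but smp_processor_id() is not valid in preemptible context */
            return kzalloc_node(size, GFP_KERNEL, NUMA_NO_NODE);
    }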
2013-09-20  perf: Update ABI comment  (Peter Zijlstra)
For some mysterious reason the sample_id field of PERF_RECORD_MMAP went AWOL. Reported-by: Vince Weaver <vince@deater.net> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org>