2013-10-09  drm/radeon: improve soft reset on CIK  (Alex Deucher)
Disable CG/PG before resetting. Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2013-10-09  drm/radeon: improve soft reset on SI  (Alex Deucher)
Disable CG/PG and stop the rlc before resetting. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
2013-10-09  drm/radeon/dpm: off by one in si_set_mc_special_registers()  (Dan Carpenter)
These checks should be ">=" instead of ">". j is used as an offset into the table->mc_reg_address[] array and that has SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE (16) elements. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
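For illustration, a minimal standalone sketch of the bounds check being fixed here; the array size mirrors SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE, but the function and variable names are made up for this example and are not the driver's:

#include <stdio.h>

#define MC_REGISTER_ARRAY_SIZE 16	/* stands in for SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE */

static int set_mc_special_register(unsigned int j, unsigned int mc_reg_address[])
{
	/* The buggy form, "if (j > MC_REGISTER_ARRAY_SIZE)", still lets j == 16 through,
	 * which is one element past the end of a 16-entry array. */
	if (j >= MC_REGISTER_ARRAY_SIZE)
		return -1;			/* -EINVAL in kernel terms */

	mc_reg_address[j] = 0xcafe;		/* safe: j is now guaranteed to be 0..15 */
	return 0;
}

int main(void)
{
	unsigned int mc_reg_address[MC_REGISTER_ARRAY_SIZE];

	printf("j=15: %d\n", set_mc_special_register(15, mc_reg_address));	/* accepted */
	printf("j=16: %d\n", set_mc_special_register(16, mc_reg_address));	/* rejected */
	return 0;
}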
2013-10-09  drm/radeon/dpm/btc: off by one in btc_set_mc_special_registers()  (Dan Carpenter)
It should be ">=" instead of ">" here. The table->mc_reg_address[] array has SMC_EVERGREEN_MC_REGISTER_ARRAY_SIZE (16) elements. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
2013-10-09  drm/radeon: forever loop on error in radeon_do_test_moves()  (Dan Carpenter)
The error path does this: for (--i; i >= 0; --i) { which is a forever loop because "i" is unsigned. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
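A standalone sketch of the unsigned wrap-around described above and one common way to unwind safely; the object names are illustrative, not the radeon test code:

#include <stdio.h>

int main(void)
{
	unsigned int i = 3;	/* pretend allocation number 3 failed */

	/*
	 * Buggy cleanup: "for (--i; i >= 0; --i)" never terminates, because
	 * decrementing an unsigned 0 wraps to UINT_MAX and "i >= 0" is always true.
	 *
	 * One common fix is to let the post-decrement do both the test and the step:
	 */
	while (i--)
		printf("freeing object %u\n", i);	/* frees 2, 1, 0 and stops */

	return 0;
}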
2013-10-09  drm/radeon: fix hw contexts for SUMO2 asics  (wojciech kapuscinski)
They have 4 rather than 8. Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=63599 Signed-off-by: wojciech kapuscinski <wojtask9@wp.pl> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
2013-10-09  drm/radeon: fix typo in CP DMA register headers  (Alex Deucher)
Wrong bit offset for SRC endian swapping. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
2013-10-09  drm/radeon/dpm: disable multiple UVD states  (Alex Deucher)
Always use the regular UVD state for now. This fixes a performance regression with UVD playback on certain APUs. Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2013-10-09  drm/radeon: use hw generated CTS/N values for audio  (Alex Deucher)
Use the hw generated values rather than calculating them in the driver. There may be some older r6xx asics where this doesn't work correctly. This remains to be seen. See bug: https://bugs.freedesktop.org/show_bug.cgi?id=69675 Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2013-10-09  drm/radeon: fix N/CTS clock matching for audio  (Alex Deucher)
The drm code that calculates the 1001 clocks rounds up rather than truncating. This allows the table to match properly on those modes. See bug: https://bugs.freedesktop.org/show_bug.cgi?id=69675 Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
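A rough worked example of the rounding difference, assuming the N/CTS table is keyed on the rounded kHz value as the commit implies (the exact table contents are the driver's and are not reproduced here):

    148500 kHz / 1.001 = 148351.648... kHz
    truncating   -> 148351 kHz, which misses the 148352 kHz table entry
    rounding up  -> 148352 kHz, which matches and selects the intended N/CTS pair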
2013-10-09  drm/radeon: use 64-bit math to calculate CTS values for audio (v2)  (Alex Deucher)
Avoid losing precision. See bug: https://bugs.freedesktop.org/show_bug.cgi?id=69675 v2: fix math as per Anssi's comments. Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
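A hedged sketch of why the intermediate math needs 64 bits, using the HDMI audio clock regeneration relation 128 * fs = f_TMDS * N / CTS; the radeon code's exact expression may differ:

#include <stdint.h>
#include <stdio.h>

/* CTS = f_TMDS * N / (128 * fs); the intermediate product alone can exceed 32 bits. */
static uint32_t compute_cts(uint32_t tmds_hz, uint32_t n, uint32_t fs_hz)
{
	/* cast first so the multiplication is performed in 64-bit arithmetic */
	return (uint32_t)(((uint64_t)tmds_hz * n) / (128u * fs_hz));
}

int main(void)
{
	/* 148.5 MHz TMDS, N = 6144 for 48 kHz audio:
	 * 148500000 * 6144 = 912384000000, far above the 32-bit limit of ~4.29e9. */
	printf("CTS = %u\n", compute_cts(148500000u, 6144u, 48000u));	/* prints 148500 */
	return 0;
}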
2013-10-09  drm/edid: catch kmalloc failure in drm_edid_to_speaker_allocation  (Alex Deucher)
Return -ENOMEM if the allocation fails. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Jani Nikula <jani.nikula@intel.com>
2013-10-10  Revert "drm/fb-helper: don't sleep for screen unblank when an oops is in progress"  (Dave Airlie)
This reverts commit 928c2f0c006bf7f381f58af2b2786d2a858ae311. The patch was applied twice, giving two checks for the price of one. Signed-off-by: Dave Airlie <airlied@redhat.com>
2013-10-09  can: at91-can: fix device to driver data mapping for platform devices  (Marc Kleine-Budde)
In commit 3078cde7 ("can: at91_can: add dt support") device tree support was added to the at91_can driver. In that commit the mapping of device to driver data was mixed up. This results in the sam9x5 parameters being used for the sam9263, and the workaround for the broken mailbox 0 on the sam9263 not being activated. This patch fixes the broken platform_device_id table. Cc: linux-stable <stable@vger.kernel.org> Cc: Ludovic Desroches <ludovic.desroches@atmel.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2013-10-09  can: flexcan: fix mx28 detection by rearranging OF match table  (Marc Kleine-Budde)
The current implementation of of_match_device() relies on the of_device_id table in the driver being sorted from most specific to least specific compatible. Without this patch the mx28 is detected as the less specific p1010, which leads to a p1010-specific workaround being activated on the mx28 even though it is not needed. Cc: linux-stable <stable@vger.kernel.org> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
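A hedged sketch of the ordering this relies on; the compatible strings are the usual flexcan ones, but the table shown is illustrative rather than a copy of the driver's:

/* of_match_device() here picks the first entry that matches, so the table must
 * be ordered from most specific to least specific compatible. */
static const struct of_device_id flexcan_of_match[] = {
	{ .compatible = "fsl,imx6q-flexcan", .data = &fsl_imx6q_devtype_data },
	{ .compatible = "fsl,imx28-flexcan", .data = &fsl_imx28_devtype_data },
	{ .compatible = "fsl,p1010-flexcan", .data = &fsl_p1010_devtype_data },	/* least specific last */
	{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, flexcan_of_match);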
2013-10-09  can: flexcan: flexcan_chip_start: fix regression, mark one MB for TX and abort pending TX  (Marc Kleine-Budde)
In patch 0d1862e ("can: flexcan: fix flexcan_chip_start() on imx6") the loop in flexcan_chip_start() that iterates over all mailboxes after the soft reset of the CAN core was removed. This loop put all mailboxes (even the ones marked as reserved, 1...7) into EMPTY/INACTIVE mode. On mailboxes 8...63, this aborts any pending TX messages. After a cold boot there is random garbage in the mailboxes, which leads to spontaneous transmission of CAN frames during first activation. Further, if the interface was disabled with a pending message (usually due to an error condition on the CAN bus), this message is retransmitted after enabling the interface again. This patch fixes the regression by: 1) Limiting the maximum number of used mailboxes to 8; 0...7 are used by the RX FIFO, 8 is used for TX. 2) Marking the TX mailbox as EMPTY/INACTIVE, so that any pending TX of that mailbox is aborted. Cc: linux-stable <stable@vger.kernel.org> Cc: Lothar Waßmann <LW@KARO-electronics.de> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2013-10-09  Merge branch 'for-davem' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless  (David S. Miller)
John W. Linville says: ==================== Please pull this batch of fixes intended for 3.12... Most of the bits are for iwlwifi -- Johannes says: "I have a fix for WoWLAN/D3, a PCIe device fix, we're removing a warning, there's a fix for RF-kill while scanning (which goes together with a mac80211 fix) and last but not least we have many new PCI IDs." Also for iwlwifi is a patch from Johannes to correct some merge damage that crept into the tree before the last merge window. On top of that, Felix Fietkau sends an ath9k patch to avoid a Tx scheduling hang when changing channels to do a scan. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-09  Merge branch 'gianfar'  (David S. Miller)
Merge in gianfar driver bug fixes from Claudiu Manoil. Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-09  gianfar: Enable eTSEC-20 erratum w/a for P2020 Rev1  (Claudiu Manoil)
Enable the workaround for P2020/P2010 erratum eTSEC 20, "Excess delays when transmitting TOE=1 large frames". The impact is that frames larger than 2500 bytes for which TOE (i.e. TCP/IP hw acceleration like Tx csum) is enabled may see excess delay before start of transmission. This erratum was fixed in Rev 2.0. Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-09  gianfar: Use mpc85xx support for errata detection  (Claudiu Manoil)
Use the macros and defines from mpc85xx.h to simplify and prevent errors in identifying an mpc85xx-based SoC for errata detection. This should help enabling (and identifying) workarounds for various mpc85xx based chips and revisions. For instance, express MPC8548 Rev.2 as: (SVR_SOC_VER(svr) == SVR_8548) && (SVR_REV(svr) == 0x20) instead of: (pvr == 0x80210020 && mod == 0x8030 && rev == 0x0020) Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
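A hedged sketch of the kind of check this enables; the SVR expression is quoted from the commit message, while the surrounding helper and header usage are illustrative:

#include <asm/mpc85xx.h>	/* SVR_SOC_VER(), SVR_REV(), SVR_8548, ... */

static bool gfar_is_mpc8548_rev2(void)
{
	unsigned int svr = mfspr(SPRN_SVR);	/* System Version Register */

	/* decoded by the shared macros instead of raw magic numbers such as
	 * (pvr == 0x80210020 && mod == 0x8030 && rev == 0x0020) */
	return (SVR_SOC_VER(svr) == SVR_8548) && (SVR_REV(svr) == 0x20);
}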
2013-10-09  gianfar: Enable eTSEC-A002 erratum w/a for all parts  (Claudiu Manoil)
A002 is still in "no plans to fix" state, and applies to all the current P1/P2 parts as well, so it's reasonable to enable its workaround by default, for all the SoCs with eTSEC. The impact of not enabling this workaround for affected parts is that under certain conditions (runt frames or even frames with RX error detected at PHY level) during controller reset, the controller might fail to indicate Rx reset (GRS) completion. Signed-off-by: Claudiu Manoil <claudiu.manoil@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-09  Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec  (David S. Miller)
Steffen Klassert says:
====================
1) We used the wrong netlink attribute to verify the length of the replay window on async events. Fix this by using the right netlink attribute.
2) Policy lookups can not match the output interface on forwarding. Add the needed information to the flow.
3) We update the pmtu when we receive an ICMPV6_DEST_UNREACH message on IPsec with ipv6. This is wrong and leads to strange fragmented packets; only ICMPV6_PKT_TOOBIG messages should update the pmtu. Fix this by removing the ICMPV6_DEST_UNREACH check from the IPsec protocol error handlers.
4) The legacy IPsec anti-replay mechanism supports anti-replay windows up to 32 packets. If a user requests a bigger anti-replay window, we use 32 packets but pretend that we use the requested window size. Fix from Fan Du.
5) If asynchronous events are enabled and replay_maxdiff is set to zero, we generate an async event for every received packet instead of checking whether a timeout occurred. Fix from Thomas Egerer.
6) Policies need a refcount when the state resolution timer is armed. Otherwise the timer can fire after the policy is deleted.
7) We might dereference a NULL pointer if the hold_queue is empty; add a check to avoid this.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-10-09  net: secure_seq: Fix warning when CONFIG_IPV6 and CONFIG_INET are not selected  (Fabio Estevam)
net_secret() is only used when CONFIG_IPV6 or CONFIG_INET is selected. Building a defconfig with both of these symbols unselected (using the ARM at91sam9rl_defconfig, for example) leads to the following build warning:
$ make at91sam9rl_defconfig
#
# configuration written to .config
#
$ make net/core/secure_seq.o
scripts/kconfig/conf --silentoldconfig Kconfig
  CHK     include/config/kernel.release
  CHK     include/generated/uapi/linux/version.h
  CHK     include/generated/utsrelease.h
make[1]: `include/generated/mach-types.h' is up to date.
  CALL    scripts/checksyscalls.sh
  CC      net/core/secure_seq.o
net/core/secure_seq.c:17:13: warning: 'net_secret_init' defined but not used [-Wunused-function]
Fix this warning by protecting the definition of net_secret() with these symbols. Reported-by: Olof Johansson <olof@lixom.net> Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
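The shape of the fix is the standard "compile the helper only when it has users" guard; a hedged sketch, where the array size and seeding details are placeholders and the exact condition lives in the real commit:

#if IS_ENABLED(CONFIG_IPV6) || IS_ENABLED(CONFIG_INET)
static u32 net_secret[16] ____cacheline_aligned;	/* size illustrative */

static void net_secret_init(void)
{
	/* seed net_secret on first use; details elided in this sketch */
}
#endif
/* The CONFIG_IPV6/CONFIG_INET code that calls net_secret_init() sits inside
 * matching #ifdefs, so nothing is left defined-but-unused when both are off. */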
2013-10-09  hwmon: (applesmc) Always read until end of data  (Henrik Rydberg)
The crash reported and investigated in commit 5f4513 turned out to be caused by a change to the read interface on newer (2012) SMCs. Tests by Chris show that simply reading the data valid line is enough for the problem to go away. Additional tests show that the newer SMCs no longer wait for the number of requested bytes, but start sending data right away. Apparently the number of bytes to read is no longer specified as before, but instead found out by reading until end of data. Failure to read until end of data confuses the state machine, which eventually causes the crash. As a remedy, assuming bit0 is the read valid line, make sure there is nothing more to read before leaving the read function. Tested to resolve the original problem, and runtested on MBA3,1, MBP4,1, MBP8,2, MBP10,1, MBP10,2. The patch seems to have no effect on machines before 2012. Tested-by: Chris Murphy <chris@cmurf.com> Cc: stable@vger.kernel.org Signed-off-by: Henrik Rydberg <rydberg@euromail.se> Signed-off-by: Guenter Roeck <linux@roeck-us.net>
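A heavily hedged sketch of the "read until end of data" idea; the port constants resemble the driver's, but the status check and the loop are illustrative only (the commit merely assumes bit 0 is the read-valid line):

/* after the requested bytes have been copied out, keep draining while the
 * SMC still reports valid read data (assumed to be status bit 0x01) */
while (inb(APPLESMC_CMD_PORT) & 0x01) {		/* hypothetical "data valid" check */
	u8 discard = inb(APPLESMC_DATA_PORT);	/* read and throw away the extra byte */
	(void)discard;
}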
2013-10-09  cfg80211: don't add p2p device while in RFKILL  (Emmanuel Grumbach)
Since a P2P device doesn't have a netdev associated to it, we cannot prevent the user from starting it while in RFKILL. So refuse to even add it while in RFKILL. Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2013-10-09  mac80211: correctly close cancelled scans  (Emmanuel Grumbach)
__ieee80211_scan_completed is called from a worker. This means that the following flow is possible:
* driver calls ieee80211_scan_completed
* mac80211 cancels the scan (that is already complete)
* __ieee80211_scan_completed runs
When scan_work finally runs, it will see that the scan hasn't been aborted and might even trigger another scan on another band. This leads to a situation where cfg80211's scan is not done and no further scan can be issued. Fix this by setting a new flag when a HW scan is being cancelled so that no other scan will be triggered. Cc: stable@vger.kernel.org Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2013-10-09  Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless into for-davem  (John W. Linville)
2013-10-09  sched/numa: Reflow task_numa_group() to avoid a compiler warning  (Peter Zijlstra)
Reflow the function a bit because GCC gets confused: kernel/sched/fair.c: In function ‘task_numa_fault’: kernel/sched/fair.c:1448:3: warning: ‘my_grp’ may be used uninitialized in this function [-Wmaybe-uninitialized] kernel/sched/fair.c:1463:27: note: ‘my_grp’ was declared here Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/n/tip-6ebt6x7u64pbbonq1khqu2z9@git.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
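Not the scheduler code itself, just the general shape of a -Wmaybe-uninitialized warning and the kind of reflow that makes the initialization obvious to GCC on every path that uses the variable:

extern int lookup_group(void);
extern void use_group(int grp);

void before(int joined)
{
	int my_grp;			/* gcc: 'my_grp' may be used uninitialized */

	if (joined)
		my_grp = lookup_group();
	/* ... intervening work gcc has to reason across ... */
	if (joined)
		use_group(my_grp);
}

void after(int joined)
{
	if (!joined)
		return;			/* bail out early ... */

	use_group(lookup_group());	/* ... so assignment and use share one path */
}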
2013-10-09  sched/numa: Retry task_numa_migrate() periodically  (Rik van Riel)
Short spikes of CPU load can lead to a task being migrated away from its preferred node for temporary reasons. It is important that the task is migrated back to where it belongs, in order to avoid migrating too much memory to its new location, and generally disturbing a task's NUMA location. This patch fixes NUMA placement for 4 specjbb instances on a 4 node system. Without this patch, things take longer to converge, and processes are not always completely on their own node. Signed-off-by: Rik van Riel <riel@redhat.com> Signed-off-by: Mel Gorman <mgorman@suse.de> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1381141781-10992-64-git-send-email-mgorman@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-09  sched/numa: Use unsigned longs for numa group fault stats  (Mel Gorman)
As Peter says, "If you're going to hold locks you can also do away with all that atomic_long_*() nonsense". Lock acquisition is moved slightly to protect the updates. Signed-off-by: Mel Gorman <mgorman@suse.de> Reviewed-by: Rik van Riel <riel@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1381141781-10992-63-git-send-email-mgorman@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-09  sched/numa: Skip some page migrations after a shared fault  (Rik van Riel)
Shared faults can lead to lots of unnecessary page migrations, slowing down the system, and causing private faults to hit the per-pgdat migration ratelimit. This patch adds sysctl numa_balancing_migrate_deferred, which specifies how many shared page migrations to skip unconditionally, after each page migration that is skipped because it is a shared fault. This reduces the number of page migrations back and forth in shared fault situations. It also gives a strong preference to the tasks that are already running where most of the memory is, and to moving the other tasks to near the memory. Testing this with a much higher scan rate than the default still seems to result in fewer page migrations than before. Memory seems to be somewhat better consolidated than previously, with multi-instance specjbb runs on a 4 node system. Signed-off-by: Rik van Riel <riel@redhat.com> Signed-off-by: Mel Gorman <mgorman@suse.de> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1381141781-10992-62-git-send-email-mgorman@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
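A hedged sketch of the deferral bookkeeping described above; the sysctl name comes from the commit message, while the task field and the two helpers are hypothetical stand-ins for whatever the patch actually uses:

/* armed when a page migration is skipped because the fault was shared */
static void defer_shared_migrations(struct task_struct *p)
{
	p->numa_migrate_deferred = sysctl_numa_balancing_migrate_deferred;	/* hypothetical field */
}

/* later shared faults consult the counter and skip migration while it runs down */
static bool shared_migration_deferred(struct task_struct *p)
{
	if (!p->numa_migrate_deferred)
		return false;		/* window exhausted: migrate normally again */

	p->numa_migrate_deferred--;
	return true;			/* still deferring: skip this migration too */
}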
2013-10-09  mm: numa: Revert temporarily disabling of NUMA migration  (Rik van Riel)
With the scan rate code working (at least for multi-instance specjbb), the large hammer that is "sched: Do not migrate memory immediately after switching node" can be replaced with something smarter. Revert the temporary migration disabling and remove all traces of numa_migrate_seq. Signed-off-by: Rik van Riel <riel@redhat.com> Signed-off-by: Mel Gorman <mgorman@suse.de> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1381141781-10992-61-git-send-email-mgorman@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-09  sched/numa: Remove the numa_balancing_scan_period_reset sysctl  (Mel Gorman)
With scan rate adaptions based on whether the workload has properly converged or not there should be no need for the scan period reset hammer. Get rid of it. Signed-off-by: Mel Gorman <mgorman@suse.de> Reviewed-by: Rik van Riel <riel@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1381141781-10992-60-git-send-email-mgorman@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-09  sched/numa: Adjust scan rate in task_numa_placement  (Rik van Riel)
Adjust numa_scan_period in task_numa_placement, depending on how much useful work the numa code can do. The more local faults there are in a given scan window the longer the period (and hence the slower the scan rate) during the next window. If there are excessive shared faults then the scan period will decrease, with the amount of scaling depending on the ratio of shared to private faults. If the preferred node changes then the scan rate is reset to recheck if the task is properly placed. Signed-off-by: Rik van Riel <riel@redhat.com> Signed-off-by: Mel Gorman <mgorman@suse.de> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1381141781-10992-59-git-send-email-mgorman@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-09  sched/numa: Take false sharing into account when adapting scan rate  (Mel Gorman)
Scan rate is altered based on whether shared/private faults dominated. task_numa_group() may detect false sharing but that information is not taken into account when adapting the scan rate. Take it into account. Signed-off-by: Mel Gorman <mgorman@suse.de> Reviewed-by: Rik van Riel <riel@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1381141781-10992-58-git-send-email-mgorman@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-09  sched/numa: Be more careful about joining numa groups  (Rik van Riel)
Due to the way the pid is truncated, and tasks are moved between CPUs by the scheduler, it is possible for the current task_numa_fault to group together tasks that do not actually share memory. This patch adds a few easy sanity checks to task_numa_fault, joining tasks together if they share the same tsk->mm, or if the fault was on a page with an elevated mapcount, in a shared VMA. Signed-off-by: Rik van Riel <riel@redhat.com> Signed-off-by: Mel Gorman <mgorman@suse.de> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1381141781-10992-57-git-send-email-mgorman@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-09  sched/numa: Avoid migrating tasks that are placed on their preferred node  (Peter Zijlstra)
This patch classifies scheduler domains and runqueues into types depending on the number of tasks that care about their NUMA placement and the number that are currently running on their preferred node. The types are:
regular: There are tasks running that do not care about their NUMA placement.
remote: There are tasks running that care about their placement but are currently running on a node remote to their ideal placement.
all: No distinction.
To implement this the patch tracks the number of tasks that are optimally NUMA placed (rq->nr_preferred_running) and the number of tasks running that care about their placement (nr_numa_running). The load balancer uses this information to avoid migrating ideally placed NUMA tasks as long as better options for load balancing exist. For example, it will not consider balancing between a group whose tasks are all perfectly placed and a group with remote tasks. Signed-off-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Mel Gorman <mgorman@suse.de> Reviewed-by: Rik van Riel <riel@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Link: http://lkml.kernel.org/r/1381141781-10992-56-git-send-email-mgorman@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-09  sched/numa: Fix task or group comparison  (Rik van Riel)
This patch separately considers task and group affinities when searching for swap candidates during NUMA placement. If tasks are part of the same group, or no group at all, the task weights are considered. Some hysteresis is added to prevent tasks within one group from getting bounced between NUMA nodes due to tiny differences. If tasks are part of different groups, the code compares group weights, in order to favor grouping task groups together. The patch also changes the group weight multiplier to be the same as the task weight multiplier, since the two are no longer added up like before. Signed-off-by: Rik van Riel <riel@redhat.com> Signed-off-by: Mel Gorman <mgorman@suse.de> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1381141781-10992-55-git-send-email-mgorman@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-09  sched/numa: Decide whether to favour task or group weights based on swap candidate relationships  (Rik van Riel)
This patch separately considers task and group affinities when searching for swap candidates during task NUMA placement. If tasks are not part of a group or the same group then the task weights are considered. Otherwise the group weights are compared. Signed-off-by: Rik van Riel <riel@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Mel Gorman <mgorman@suse.de> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1381141781-10992-54-git-send-email-mgorman@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-09  sched/numa: Add debugging  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Rik van Riel <riel@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Mel Gorman <mgorman@suse.de> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: http://lkml.kernel.org/r/1381141781-10992-53-git-send-email-mgorman@suse.de
2013-10-09  sched/numa: Prevent parallel updates to group stats during placement  (Mel Gorman)
Having multiple tasks in a group go through task_numa_placement simultaneously can lead to a task picking a wrong node to run on, because the group stats may be in the middle of an update. This patch avoids parallel updates by holding the numa_group lock during placement decisions. Signed-off-by: Mel Gorman <mgorman@suse.de> Reviewed-by: Rik van Riel <riel@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1381141781-10992-52-git-send-email-mgorman@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-09  sched/numa: Call task_numa_free() from do_execve()  (Rik van Riel)
It is possible for a task in a numa group to call exec, and have the new (unrelated) executable inherit the numa group association from its former self. This has the potential to break numa grouping, and is trivial to fix. Signed-off-by: Rik van Riel <riel@redhat.com> Signed-off-by: Mel Gorman <mgorman@suse.de> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1381141781-10992-51-git-send-email-mgorman@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-09  sched/numa: Use group fault statistics in numa placement  (Mel Gorman)
This patch uses the fraction of faults on a particular node for both task and group, to figure out the best node to place a task. If the task and group statistics disagree on what the preferred node should be then a full rescan will select the node with the best combined weight. Signed-off-by: Mel Gorman <mgorman@suse.de> Reviewed-by: Rik van Riel <riel@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1381141781-10992-50-git-send-email-mgorman@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-09  sched/numa: Stay on the same node if CLONE_VM  (Rik van Riel)
A newly spawned thread inside a process should stay on the same NUMA node as its parent. This prevents processes from being "torn" across multiple NUMA nodes every time they spawn a new thread. Signed-off-by: Rik van Riel <riel@redhat.com> Signed-off-by: Mel Gorman <mgorman@suse.de> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1381141781-10992-49-git-send-email-mgorman@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-09  mm: numa: Do not batch handle PMD pages  (Mel Gorman)
With the THP migration races closed it is still possible to occasionally see corruption. The problem is related to handling PMD pages in batch. When a page fault is handled it can be assumed that the page being faulted will also be flushed from the TLB. The same flushing does not happen when handling PMD pages in batch. Fixing it is straightforward, but there are a number of reasons not to:
1. Multiple TLB flushes may have to be sent depending on what pages get migrated.
2. The handling of PMDs in batch means that faults get accounted to the task that is handling the fault. While care is taken to only mark PMDs where the last CPU and PID match, it can still have problems due to PID truncation when matching PIDs.
3. Batching on the PMD level may reduce faults but setting pmd_numa requires taking a heavy lock that can contend with THP migration, and handling the fault requires the release/acquisition of the PTL for every page migrated. It's still pretty heavy.
PMD batch handling is not something that people have ever been happy with. This patch removes it and later patches will deal with the additional fault overhead using more intelligent migrate rate adaption. Signed-off-by: Mel Gorman <mgorman@suse.de> Reviewed-by: Rik van Riel <riel@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1381141781-10992-48-git-send-email-mgorman@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-09  mm: numa: Do not group on RO pages  (Peter Zijlstra)
And here's a little something to make sure not the whole world ends up in a single group. While we don't migrate shared executable pages, we do scan/fault on them. And since everybody links to libc, everybody ends up in the same group. Suggested-by: Rik van Riel <riel@redhat.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Mel Gorman <mgorman@suse.de> Reviewed-by: Rik van Riel <riel@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Link: http://lkml.kernel.org/r/1381141781-10992-47-git-send-email-mgorman@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-09  mm: numa: Copy cpupid on page migration  (Rik van Riel)
After page migration, the new page has the nidpid unset. This makes every fault on a recently migrated page look like a first numa fault, leading to another page migration. Copying over the nidpid at page migration time should prevent erroneous migrations of recently migrated pages. Signed-off-by: Rik van Riel <riel@redhat.com> Signed-off-by: Mel Gorman <mgorman@suse.de> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1381141781-10992-46-git-send-email-mgorman@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-09  sched/numa: Report a NUMA task group ID  (Mel Gorman)
It is desirable to model from userspace how the scheduler groups tasks over time. This patch adds an ID to the numa_group and reports it via /proc/PID/status. Signed-off-by: Mel Gorman <mgorman@suse.de> Reviewed-by: Rik van Riel <riel@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1381141781-10992-45-git-send-email-mgorman@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-09  sched/numa: Use {cpu, pid} to create task groups for shared faults  (Peter Zijlstra)
While parallel applications tend to align their data on the cache boundary, they tend not to align on the page or THP boundary. Consequently tasks that partition their data can still "false-share" pages, presenting a problem for optimal NUMA placement. This patch uses NUMA hinting faults to chain tasks together into numa_groups. As well as storing the NID a task was running on when accessing a page, a truncated representation of the faulting PID is stored. If subsequent faults are from different PIDs it is reasonable to assume that those two tasks share a page and are candidates for being grouped together. Note that this patch makes no scheduling decisions based on the grouping information. Signed-off-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Mel Gorman <mgorman@suse.de> Reviewed-by: Rik van Riel <riel@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Link: http://lkml.kernel.org/r/1381141781-10992-44-git-send-email-mgorman@suse.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-10-09  mm: numa: Change page last {nid,pid} into {cpu,pid}  (Peter Zijlstra)
Change the per-page last fault tracking to use cpu,pid instead of nid,pid. This will allow us to try and look up the alternate task more easily. Note that even though it is the cpu that is stored in the page flags, the mpol_misplaced decision is still based on the node. Signed-off-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Mel Gorman <mgorman@suse.de> Reviewed-by: Rik van Riel <riel@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Link: http://lkml.kernel.org/r/1381141781-10992-43-git-send-email-mgorman@suse.de [ Fixed build failure on 32-bit systems. ] Signed-off-by: Ingo Molnar <mingo@kernel.org>
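A hedged sketch of the packing this implies; the bit widths and names below are illustrative, whereas the kernel derives the real widths from the configuration and stores the value in the page flags when it fits:

#define EX_PID_BITS	8				/* truncated pid, as described above */
#define EX_CPU_BITS	8				/* wide enough for this example only */
#define EX_PID_MASK	((1 << EX_PID_BITS) - 1)
#define EX_CPU_MASK	((1 << EX_CPU_BITS) - 1)

static inline int make_cpupid(int cpu, int pid)
{
	return ((cpu & EX_CPU_MASK) << EX_PID_BITS) | (pid & EX_PID_MASK);
}

static inline int example_cpupid_to_cpu(int cpupid)
{
	return (cpupid >> EX_PID_BITS) & EX_CPU_MASK;
}

static inline int example_cpupid_to_pid(int cpupid)
{
	return cpupid & EX_PID_MASK;
}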