|
When comparing two nodes at a distance of 'hoplimit', we should only
consider nodes up to (but not including) 'hoplimit'. Currently we also
include nodes at exactly 'hoplimit' distance, so two nodes at a distance of
'hoplimit' end up with the same groupweight. Fix this by skipping nodes at
'hoplimit'.
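As a standalone illustration of the off-by-one (the distance table and fault
numbers below are made up; this is not the kernel's scoring code), skipping
nodes at exactly 'hoplimit' looks like:

#include <stdio.h>

#define NR_NODES 4

/* made-up NUMA distance table for a 4-node machine */
static const int node_dist[NR_NODES][NR_NODES] = {
    { 10, 20, 30, 40 },
    { 20, 10, 20, 30 },
    { 30, 20, 10, 20 },
    { 40, 30, 20, 10 },
};

/* Sum the faults of nodes strictly closer than 'hoplimit' to 'nid'. */
static long nearby_score(int nid, const long faults[NR_NODES], int hoplimit)
{
    long score = 0;
    int node;

    for (node = 0; node < NR_NODES; node++) {
        if (node == nid)
            continue;
        /* '>=' is the point of the fix: '>' would also count
         * nodes that sit exactly at hoplimit. */
        if (node_dist[nid][node] >= hoplimit)
            continue;
        score += faults[node];
    }
    return score;
}

int main(void)
{
    static const long faults[NR_NODES] = { 100, 200, 300, 400 };

    printf("score(node 0, hoplimit 30) = %ld\n", nearby_score(0, faults, 30));
    return 0;
}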
Running SPECjbb2005 on a 4 node machine and comparing bops/JVM
JVMS LAST_PATCH WITH_PATCH %CHANGE
16 25375.3 25308.6 -0.26
1 72617 72964 0.477
Running SPECjbb2005 on a 16 node machine and comparing bops/JVM
JVMS LAST_PATCH WITH_PATCH %CHANGE
8 113372 108750 -4.07684
1 177403 183115 3.21979
(numbers from v1 based on v4.17-rc5)
Testcase Time: Min Max Avg StdDev
numa01.sh Real: 478.45 565.90 515.11 30.87
numa01.sh Sys: 207.79 271.04 232.94 21.33
numa01.sh User: 39763.93 47303.12 43210.73 2644.86
numa02.sh Real: 60.00 61.46 60.78 0.49
numa02.sh Sys: 15.71 25.31 20.69 3.42
numa02.sh User: 5175.92 5265.86 5235.97 32.82
numa03.sh Real: 776.42 834.85 806.01 23.22
numa03.sh Sys: 114.43 128.75 121.65 5.49
numa03.sh User: 60773.93 64855.25 62616.91 1576.39
numa04.sh Real: 456.93 511.95 482.91 20.88
numa04.sh Sys: 178.09 460.89 356.86 94.58
numa04.sh User: 36312.09 42553.24 39623.21 2247.96
numa05.sh Real: 393.98 493.48 436.61 35.59
numa05.sh Sys: 164.49 329.15 265.87 61.78
numa05.sh User: 33182.65 36654.53 35074.51 1187.71
Testcase Time: Min Max Avg StdDev %Change
numa01.sh Real: 414.64 819.20 556.08 147.70 -7.36%
numa01.sh Sys: 77.52 205.04 139.40 52.05 67.10%
numa01.sh User: 37043.24 61757.88 45517.48 9290.38 -5.06%
numa02.sh Real: 60.80 63.32 61.63 0.88 -1.37%
numa02.sh Sys: 17.35 39.37 25.71 7.33 -19.5%
numa02.sh User: 5213.79 5374.73 5268.90 55.09 -0.62%
numa03.sh Real: 780.09 948.64 831.43 63.02 -3.05%
numa03.sh Sys: 104.96 136.92 116.31 11.34 4.591%
numa03.sh User: 60465.42 73339.78 64368.03 4700.14 -2.72%
numa04.sh Real: 412.60 681.92 521.29 96.64 -7.36%
numa04.sh Sys: 210.32 314.10 251.77 37.71 41.74%
numa04.sh User: 34026.38 45581.20 38534.49 4198.53 2.825%
numa05.sh Real: 394.79 439.63 411.35 16.87 6.140%
numa05.sh Sys: 238.32 330.09 292.31 38.32 -9.04%
numa05.sh User: 33456.45 34876.07 34138.62 609.45 2.741%
While this change shows a regression, it is needed from a correctness
perspective. It also helps consolidation, as seen from the perf bench
output.
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Rik van Riel <riel@surriel.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1529514181-9842-8-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Fix the order in which the private and shared numa faults are getting
printed.
No functional changes.
Running SPECjbb2005 on a 4 node machine and comparing bops/JVM
JVMS LAST_PATCH WITH_PATCH %CHANGE
16 25215.7 25375.3 0.63
1 72107 72617 0.70
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Rik van Riel <riel@surriel.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1529514181-9842-7-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
When numa_group faults are available, task_numa_placement only uses
numa_group faults to evaluate the preferred node. However it still accounts
task faults and even evaluates a preferred node based on those task faults,
only to discard it in favour of the preferred node chosen on the basis of
numa_group faults.
Instead, use task faults only if numa_group is not set.
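A standalone illustration of the selection rule (not the kernel's
task_numa_placement(), which also decays and normalizes the fault counts):

/* Pick the node with the most faults, using per-task faults only when
 * the task is not part of a numa_group. */
static int pick_preferred_node(int nr_nodes, const long task_faults[],
                               const long group_faults[], int has_numa_group)
{
    long max_faults = 0;
    int nid, max_nid = -1;

    for (nid = 0; nid < nr_nodes; nid++) {
        long faults = has_numa_group ? group_faults[nid] : task_faults[nid];

        if (faults > max_faults) {
            max_faults = faults;
            max_nid = nid;
        }
    }
    return max_nid;
}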
Running SPECjbb2005 on a 4 node machine and comparing bops/JVM
JVMS LAST_PATCH WITH_PATCH %CHANGE
16 25549.6 25215.7 -1.30
1 73190 72107 -1.47
Running SPECjbb2005 on a 16 node machine and comparing bops/JVM
JVMS LAST_PATCH WITH_PATCH %CHANGE
8 113437 113372 -0.05
1 196130 177403 -9.54
(numbers from v1 based on v4.17-rc5)
Testcase Time: Min Max Avg StdDev
numa01.sh Real: 506.35 794.46 599.06 104.26
numa01.sh Sys: 150.37 223.56 195.99 24.94
numa01.sh User: 43450.69 61752.04 49281.50 6635.33
numa02.sh Real: 60.33 62.40 61.31 0.90
numa02.sh Sys: 18.12 31.66 24.28 5.89
numa02.sh User: 5203.91 5325.32 5260.29 49.98
numa03.sh Real: 696.47 853.62 745.80 57.28
numa03.sh Sys: 85.68 123.71 97.89 13.48
numa03.sh User: 55978.45 66418.63 59254.94 3737.97
numa04.sh Real: 444.05 514.83 497.06 26.85
numa04.sh Sys: 230.39 375.79 316.23 48.58
numa04.sh User: 35403.12 41004.10 39720.80 2163.08
numa05.sh Real: 423.09 460.41 439.57 13.92
numa05.sh Sys: 287.38 480.15 369.37 68.52
numa05.sh User: 34732.12 38016.80 36255.85 1070.51
Testcase Time: Min Max Avg StdDev %Change
numa01.sh Real: 478.45 565.90 515.11 30.87 16.29%
numa01.sh Sys: 207.79 271.04 232.94 21.33 -15.8%
numa01.sh User: 39763.93 47303.12 43210.73 2644.86 14.04%
numa02.sh Real: 60.00 61.46 60.78 0.49 0.871%
numa02.sh Sys: 15.71 25.31 20.69 3.42 17.35%
numa02.sh User: 5175.92 5265.86 5235.97 32.82 0.464%
numa03.sh Real: 776.42 834.85 806.01 23.22 -7.47%
numa03.sh Sys: 114.43 128.75 121.65 5.49 -19.5%
numa03.sh User: 60773.93 64855.25 62616.91 1576.39 -5.36%
numa04.sh Real: 456.93 511.95 482.91 20.88 2.930%
numa04.sh Sys: 178.09 460.89 356.86 94.58 -11.3%
numa04.sh User: 36312.09 42553.24 39623.21 2247.96 0.246%
numa05.sh Real: 393.98 493.48 436.61 35.59 0.677%
numa05.sh Sys: 164.49 329.15 265.87 61.78 38.92%
numa05.sh User: 33182.65 36654.53 35074.51 1187.71 3.368%
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1529514181-9842-6-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Currently the preferred node is set to dst_nid, which is the last node in the
iteration whose group weight or task weight is greater than the current
node's. However it doesn't guarantee that dst_nid has the NUMA capacity
to accept the task. It also doesn't guarantee that dst_nid holds best_cpu,
the CPU/node that is ideal for the migration.
Let's consider faults on a 4 node system with group weight numbers
in different nodes being in 0 < 1 < 2 < 3 proportion. Consider the task
is running on node 3 and node 0 is its preferred node, but node 0's capacity
is full while nodes 1, 2 and 3 have capacity. Then the task should be
migrated to node 1. Currently the task gets moved to node 2, because
env.dst_nid points to the last node whose faults were greater than the
current node's.
Modify the code to set the preferred node based on best_cpu. Earlier, setting
the preferred node was skipped if nr_active_nodes is 1. This could result in
the task being moved out of the preferred node to a random node during
regular load balancing.
Also, while modifying task_numa_migrate(), use sched_setnuma() to set the
preferred node. This ensures our NUMA accounting is correct.
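Roughly, the intended end state in task_numa_migrate() is along these lines
(an illustrative sketch, not the exact upstream diff):

    /* Prefer the node of best_cpu, which is known to have capacity,
     * and record it via sched_setnuma() so the NUMA accounting stays
     * consistent. */
    if (env.best_cpu >= 0) {
        int nid = cpu_to_node(env.best_cpu);

        if (nid != p->numa_preferred_nid)
            sched_setnuma(p, nid);
    }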
Running SPECjbb2005 on a 4 node machine and comparing bops/JVM
JVMS LAST_PATCH WITH_PATCH %CHANGE
16 25122.9 25549.6 1.698
1 73850 73190 -0.89
Running SPECjbb2005 on a 16 node machine and comparing bops/JVM
JVMS LAST_PATCH WITH_PATCH %CHANGE
8 105930 113437 7.08676
1 178624 196130 9.80047
(numbers from v1 based on v4.17-rc5)
Testcase Time: Min Max Avg StdDev
numa01.sh Real: 435.78 653.81 534.58 83.20
numa01.sh Sys: 121.93 187.18 145.90 23.47
numa01.sh User: 37082.81 51402.80 43647.60 5409.75
numa02.sh Real: 60.64 61.63 61.19 0.40
numa02.sh Sys: 14.72 25.68 19.06 4.03
numa02.sh User: 5210.95 5266.69 5233.30 20.82
numa03.sh Real: 746.51 808.24 780.36 23.88
numa03.sh Sys: 97.26 108.48 105.07 4.28
numa03.sh User: 58956.30 61397.05 60162.95 1050.82
numa04.sh Real: 465.97 519.27 484.81 19.62
numa04.sh Sys: 304.43 359.08 334.68 20.64
numa04.sh User: 37544.16 41186.15 39262.44 1314.91
numa05.sh Real: 411.57 457.20 433.29 16.58
numa05.sh Sys: 230.05 435.48 339.95 67.58
numa05.sh User: 33325.54 36896.31 35637.84 1222.64
Testcase Time: Min Max Avg StdDev %Change
numa01.sh Real: 506.35 794.46 599.06 104.26 -10.76%
numa01.sh Sys: 150.37 223.56 195.99 24.94 -25.55%
numa01.sh User: 43450.69 61752.04 49281.50 6635.33 -11.43%
numa02.sh Real: 60.33 62.40 61.31 0.90 -0.195%
numa02.sh Sys: 18.12 31.66 24.28 5.89 -21.49%
numa02.sh User: 5203.91 5325.32 5260.29 49.98 -0.513%
numa03.sh Real: 696.47 853.62 745.80 57.28 4.6339%
numa03.sh Sys: 85.68 123.71 97.89 13.48 7.3347%
numa03.sh User: 55978.45 66418.63 59254.94 3737.97 1.5323%
numa04.sh Real: 444.05 514.83 497.06 26.85 -2.464%
numa04.sh Sys: 230.39 375.79 316.23 48.58 5.8343%
numa04.sh User: 35403.12 41004.10 39720.80 2163.08 -1.153%
numa05.sh Real: 423.09 460.41 439.57 13.92 -1.428%
numa05.sh Sys: 287.38 480.15 369.37 68.52 -7.964%
numa05.sh User: 34732.12 38016.80 36255.85 1070.51 -1.704%
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1529514181-9842-5-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Currently load_too_imbalanced() cares about the slope of the imbalance.
It doesn't care about the direction of the imbalance.
However this may not work if the nodes that are being compared have
dissimilar capacities. Some nodes might have more cores than other nodes
in the system. Also, unlike traditional load balancing at a NUMA sched
domain, multiple requests to migrate from the same source node to the same
destination node may run in parallel. This can cause a huge load
imbalance. This is especially true on larger machines with either many
cores per node or a larger number of nodes in the system. Hence allow a
move/swap only if the imbalance is going to reduce.
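The direction check boils down to comparing the absolute imbalance before
and after the proposed move; a minimal standalone illustration (the kernel's
load_too_imbalanced() also takes the node capacities into account):

#include <stdlib.h>

/* Would moving a task of weight 'load' from src to dst shrink the gap? */
static int imbalance_reduces(long src_load, long dst_load, long load)
{
    long old_imb = labs(src_load - dst_load);
    long new_imb = labs((src_load - load) - (dst_load + load));

    return new_imb < old_imb;
}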
Running SPECjbb2005 on a 4 node machine and comparing bops/JVM
JVMS LAST_PATCH WITH_PATCH %CHANGE
16 25058.2 25122.9 0.25
1 72950 73850 1.23
(numbers from v1 based on v4.17-rc5)
Testcase Time: Min Max Avg StdDev
numa01.sh Real: 516.14 892.41 739.84 151.32
numa01.sh Sys: 153.16 192.99 177.70 14.58
numa01.sh User: 39821.04 69528.92 57193.87 10989.48
numa02.sh Real: 60.91 62.35 61.58 0.63
numa02.sh Sys: 16.47 26.16 21.20 3.85
numa02.sh User: 5227.58 5309.61 5265.17 31.04
numa03.sh Real: 739.07 917.73 795.75 64.45
numa03.sh Sys: 94.46 136.08 109.48 14.58
numa03.sh User: 57478.56 72014.09 61764.48 5343.69
numa04.sh Real: 442.61 715.43 530.31 96.12
numa04.sh Sys: 224.90 348.63 285.61 48.83
numa04.sh User: 35836.84 47522.47 40235.41 3985.26
numa05.sh Real: 386.13 489.17 434.94 43.59
numa05.sh Sys: 144.29 438.56 278.80 105.78
numa05.sh User: 33255.86 36890.82 34879.31 1641.98
Testcase Time: Min Max Avg StdDev %Change
numa01.sh Real: 435.78 653.81 534.58 83.20 38.39%
numa01.sh Sys: 121.93 187.18 145.90 23.47 21.79%
numa01.sh User: 37082.81 51402.80 43647.60 5409.75 31.03%
numa02.sh Real: 60.64 61.63 61.19 0.40 0.637%
numa02.sh Sys: 14.72 25.68 19.06 4.03 11.22%
numa02.sh User: 5210.95 5266.69 5233.30 20.82 0.608%
numa03.sh Real: 746.51 808.24 780.36 23.88 1.972%
numa03.sh Sys: 97.26 108.48 105.07 4.28 4.197%
numa03.sh User: 58956.30 61397.05 60162.95 1050.82 2.661%
numa04.sh Real: 465.97 519.27 484.81 19.62 9.385%
numa04.sh Sys: 304.43 359.08 334.68 20.64 -14.6%
numa04.sh User: 37544.16 41186.15 39262.44 1314.91 2.478%
numa05.sh Real: 411.57 457.20 433.29 16.58 0.380%
numa05.sh Sys: 230.05 435.48 339.95 67.58 -17.9%
numa05.sh User: 33325.54 36896.31 35637.84 1222.64 -2.12%
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Rik van Riel <riel@surriel.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1529514181-9842-4-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
task_numa_compare() helps choose the best CPU to move or swap the
selected task to. To achieve this, task_numa_compare() is called for every
CPU in the node. Currently it evaluates whether the task can be moved/swapped
for each of the CPUs. However the move evaluation is mostly independent
of the CPU. Evaluating the move logic once per node provides scope for
simplifying task_numa_compare().
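A sketch of the resulting shape of the per-node loop (illustrative; the real
task_numa_find_cpu() carries more state in the env structure):

    /* The move check depends only on node-wide load, so evaluate it
     * once per node and hand the result to the per-CPU comparison. */
    bool maymove = !load_too_imbalanced(src_load, dst_load, env);
    int cpu;

    for_each_cpu(cpu, cpumask_of_node(env->dst_nid)) {
        env->dst_cpu = cpu;
        task_numa_compare(env, taskimp, groupimp, maymove);
    }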
Running SPECjbb2005 on a 4 node machine and comparing bops/JVM
JVMS LAST_PATCH WITH_PATCH %CHANGE
16 25705.2 25058.2 -2.51
1 74433 72950 -1.99
Running SPECjbb2005 on a 16 node machine and comparing bops/JVM
JVMS LAST_PATCH WITH_PATCH %CHANGE
8 96589.6 105930 9.670
1 181830 178624 -1.76
(numbers from v1 based on v4.17-rc5)
Testcase Time: Min Max Avg StdDev
numa01.sh Real: 440.65 941.32 758.98 189.17
numa01.sh Sys: 183.48 320.07 258.42 50.09
numa01.sh User: 37384.65 71818.14 60302.51 13798.96
numa02.sh Real: 61.24 65.35 62.49 1.49
numa02.sh Sys: 16.83 24.18 21.40 2.60
numa02.sh User: 5219.59 5356.34 5264.03 49.07
numa03.sh Real: 822.04 912.40 873.55 37.35
numa03.sh Sys: 118.80 140.94 132.90 7.60
numa03.sh User: 62485.19 70025.01 67208.33 2967.10
numa04.sh Real: 690.66 872.12 778.49 65.44
numa04.sh Sys: 459.26 563.03 494.03 42.39
numa04.sh User: 51116.44 70527.20 58849.44 8461.28
numa05.sh Real: 418.37 562.28 525.77 54.27
numa05.sh Sys: 299.45 481.00 392.49 64.27
numa05.sh User: 34115.09 41324.02 39105.30 2627.68
Testcase Time: Min Max Avg StdDev %Change
numa01.sh Real: 516.14 892.41 739.84 151.32 2.587%
numa01.sh Sys: 153.16 192.99 177.70 14.58 45.42%
numa01.sh User: 39821.04 69528.92 57193.87 10989.48 5.435%
numa02.sh Real: 60.91 62.35 61.58 0.63 1.477%
numa02.sh Sys: 16.47 26.16 21.20 3.85 0.943%
numa02.sh User: 5227.58 5309.61 5265.17 31.04 -0.02%
numa03.sh Real: 739.07 917.73 795.75 64.45 9.776%
numa03.sh Sys: 94.46 136.08 109.48 14.58 21.39%
numa03.sh User: 57478.56 72014.09 61764.48 5343.69 8.813%
numa04.sh Real: 442.61 715.43 530.31 96.12 46.79%
numa04.sh Sys: 224.90 348.63 285.61 48.83 72.97%
numa04.sh User: 35836.84 47522.47 40235.41 3985.26 46.26%
numa05.sh Real: 386.13 489.17 434.94 43.59 20.88%
numa05.sh Sys: 144.29 438.56 278.80 105.78 40.77%
numa05.sh User: 33255.86 36890.82 34879.31 1641.98 12.11%
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1529514181-9842-3-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
'numa_entry' is a struct list_head defined in task_struct, but never used.
No functional change.
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Rik van Riel <riel@surriel.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1529514181-9842-2-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Although we can rely on cpuacct to present the CPU usage of task
groups, it is hard to tell how intense the competition between these
groups for CPU resources is.
Monitoring the wait time or sched_debug of each process could be very
expensive, and there is no good way to accurately represent the
conflict with that info; what we need is the wait time at the group level.
Thus we introduce the group's wait_sum to represent the resource
conflict between task groups, which is simply the sum of the wait time
of the group's cfs_rqs.
'cpu.stat' is modified to show the statistic, like:
nr_periods 0
nr_throttled 0
throttled_time 0
wait_sum 2035098795584
Now we can monitor the changes of wait_sum to tell how much a task
group is suffering in the fight for CPU resources.
For example:
(wait_sum - last_wait_sum) * 100 / (nr_cpu * period_ns) == X%
means the task group spent X percent of the period waiting for the CPU.
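For instance, a tiny standalone computation with made-up sample values (the
second wait_sum reading and the sampling period below are assumptions, not
taken from the commit):

#include <stdio.h>

int main(void)
{
    /* two cpu.stat wait_sum samples (ns), one sampling period apart */
    unsigned long long last_wait_sum = 2035098795584ULL;
    unsigned long long wait_sum      = 2035498795584ULL; /* assumed later sample */
    unsigned long long period_ns     = 1000000000ULL;    /* 1 s sampling period */
    unsigned int nr_cpu = 4;

    double pct = (double)(wait_sum - last_wait_sum) * 100.0 /
                 ((double)nr_cpu * (double)period_ns);

    printf("group waited for the CPU %.1f%% of the period\n", pct);
    return 0;
}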
Signed-off-by: Michael Wang <yun.wang@linux.alibaba.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/ff7dae3b-e5f9-7157-1caa-ff02c6b23dc1@linux.alibaba.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Reuse cpu_util_irq() that has been defined for schedutil and set irq util
to 0 when !CONFIG_IRQ_TIME_ACCOUNTING.
But the compiler is not able to optimize the sequence (at least with
aarch64 GCC 7.2.1):
free *= (max - irq);
free /= max;
when irq is fixed to 0.
Add a new inline function scale_irq_capacity() that will scale utilization
when irq time is accounted. Reuse this function in schedutil, which applies a
similar formula.
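A sketch of such a helper under those assumptions (illustrative, not
necessarily the exact upstream definition):

static inline unsigned long scale_irq_capacity(unsigned long util,
                                               unsigned long irq,
                                               unsigned long max)
{
    /* scale 'util' by the fraction of capacity left after irq time */
    util *= (max - irq);
    util /= max;

    return util;
}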
Suggested-by: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: rjw@rjwysocki.net
Link: http://lkml.kernel.org/r/1532001606-6689-1-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
The NO_RT_RUNTIME_SHARE feature is used to prevent a CPU from borrowing
runtime while it runs a spinning RT task.
However, if the RT_RUNTIME_SHARE feature is enabled and an rt_rq has borrowed
enough rt_runtime at the beginning, that rt_runtime can't be restored to
its initial bandwidth after we disable RT_RUNTIME_SHARE.
E.g. on my PC with 4 cores, the procedure to reproduce is:
1) Make sure RT_RUNTIME_SHARE is enabled
cat /sys/kernel/debug/sched_features
GENTLE_FAIR_SLEEPERS START_DEBIT NO_NEXT_BUDDY LAST_BUDDY
CACHE_HOT_BUDDY WAKEUP_PREEMPTION NO_HRTICK NO_DOUBLE_TICK
LB_BIAS NONTASK_CAPACITY TTWU_QUEUE NO_SIS_AVG_CPU SIS_PROP
NO_WARN_DOUBLE_CLOCK RT_PUSH_IPI RT_RUNTIME_SHARE NO_LB_MIN
ATTACH_AGE_LOAD WA_IDLE WA_WEIGHT WA_BIAS
2) Start a spin-rt-task
./loop_rr &
3) set affinity to the last cpu
taskset -p 8 $pid_of_loop_rr
4) Observe that the last CPU has borrowed enough runtime.
cat /proc/sched_debug | grep rt_runtime
.rt_runtime : 950.000000
.rt_runtime : 900.000000
.rt_runtime : 950.000000
.rt_runtime : 1000.000000
5) Disable RT_RUNTIME_SHARE
echo NO_RT_RUNTIME_SHARE > /sys/kernel/debug/sched_features
6) Observe that rt_runtime cannot be restored
cat /proc/sched_debug | grep rt_runtime
.rt_runtime : 950.000000
.rt_runtime : 900.000000
.rt_runtime : 950.000000
.rt_runtime : 1000.000000
This patch helps restore rt_runtime after we disable
RT_RUNTIME_SHARE.
Signed-off-by: Hailong Liu <liu.hailong6@zte.com.cn>
Signed-off-by: Jiang Biao <jiang.biao2@zte.com.cn>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: zhong.weidong@zte.com.cn
Link: http://lkml.kernel.org/r/1531874815-39357-1-git-send-email-liu.hailong6@zte.com.cn
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Daniel Casini got this warn while running a DL task here at RetisLab:
[ 461.137582] ------------[ cut here ]------------
[ 461.137583] rq->clock_update_flags < RQCF_ACT_SKIP
[ 461.137599] WARNING: CPU: 4 PID: 2354 at kernel/sched/sched.h:967 assert_clock_updated.isra.32.part.33+0x17/0x20
[a ton of modules]
[ 461.137646] CPU: 4 PID: 2354 Comm: label_image Not tainted 4.18.0-rc4+ #3
[ 461.137647] Hardware name: ASUS All Series/Z87-K, BIOS 0801 09/02/2013
[ 461.137649] RIP: 0010:assert_clock_updated.isra.32.part.33+0x17/0x20
[ 461.137649] Code: ff 48 89 83 08 09 00 00 eb c6 66 0f 1f 84 00 00 00 00 00 55 48 c7 c7 98 7a 6c a5 c6 05 bc 0d 54 01 01 48 89 e5 e8 a9 84 fb ff <0f> 0b 5d c3 0f 1f 44 00 00 0f 1f 44 00 00 83 7e 60 01 74 0a 48 3b
[ 461.137673] RSP: 0018:ffffa77e08cafc68 EFLAGS: 00010082
[ 461.137674] RAX: 0000000000000000 RBX: ffff8b3fc1702d80 RCX: 0000000000000006
[ 461.137674] RDX: 0000000000000007 RSI: 0000000000000096 RDI: ffff8b3fded164b0
[ 461.137675] RBP: ffffa77e08cafc68 R08: 0000000000000026 R09: 0000000000000339
[ 461.137676] R10: ffff8b3fd060d410 R11: 0000000000000026 R12: ffffffffa4e14e20
[ 461.137677] R13: ffff8b3fdec22940 R14: ffff8b3fc1702da0 R15: ffff8b3fdec22940
[ 461.137678] FS: 00007efe43ee5700(0000) GS:ffff8b3fded00000(0000) knlGS:0000000000000000
[ 461.137679] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 461.137680] CR2: 00007efe30000010 CR3: 0000000301744003 CR4: 00000000001606e0
[ 461.137680] Call Trace:
[ 461.137684] push_dl_task.part.46+0x3bc/0x460
[ 461.137686] task_woken_dl+0x60/0x80
[ 461.137689] ttwu_do_wakeup+0x4f/0x150
[ 461.137690] ttwu_do_activate+0x77/0x80
[ 461.137692] try_to_wake_up+0x1d6/0x4c0
[ 461.137693] wake_up_q+0x32/0x70
[ 461.137696] do_futex+0x7e7/0xb50
[ 461.137698] __x64_sys_futex+0x8b/0x180
[ 461.137701] do_syscall_64+0x5a/0x110
[ 461.137703] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 461.137705] RIP: 0033:0x7efe4918ca26
[ 461.137705] Code: 00 00 00 74 17 49 8b 48 20 44 8b 59 10 41 83 e3 30 41 83 fb 20 74 1e be 85 00 00 00 41 ba 01 00 00 00 41 b9 01 00 00 04 0f 05 <48> 3d 01 f0 ff ff 73 1f 31 c0 c3 be 8c 00 00 00 49 89 c8 4d 31 d2
[ 461.137738] RSP: 002b:00007efe43ee4928 EFLAGS: 00000283 ORIG_RAX: 00000000000000ca
[ 461.137739] RAX: ffffffffffffffda RBX: 0000000005094df0 RCX: 00007efe4918ca26
[ 461.137740] RDX: 0000000000000001 RSI: 0000000000000085 RDI: 0000000005094e24
[ 461.137741] RBP: 00007efe43ee49c0 R08: 0000000005094e20 R09: 0000000004000001
[ 461.137741] R10: 0000000000000001 R11: 0000000000000283 R12: 0000000000000000
[ 461.137742] R13: 0000000005094df8 R14: 0000000000000001 R15: 0000000000448a10
[ 461.137743] ---[ end trace 187df4cad2bf7649 ]---
This warning happened in push_dl_task(), because
__add_running_bw()->cpufreq_update_util() is getting the rq_clock of
the later_rq before its update, which takes place at activate_task().
The fix then is to update the rq_clock before calling add_running_bw().
To avoid a double rq_clock_update() call, we pass the ENQUEUE_NOCLOCK flag to
activate_task().
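A sketch of the intended ordering in push_dl_task() (illustrative, not the
exact upstream diff):

    deactivate_task(rq, next_task, 0);
    set_task_cpu(next_task, later_rq->cpu);

    /* Update the later_rq clock first: activate_task() ends up in
     * __add_running_bw() -> cpufreq_update_util(), which reads it.
     * ENQUEUE_NOCLOCK avoids updating the clock a second time. */
    update_rq_clock(later_rq);
    activate_task(later_rq, next_task, ENQUEUE_NOCLOCK);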
Reported-by: Daniel Casini <daniel.casini@santannapisa.it>
Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Juri Lelli <juri.lelli@redhat.com>
Cc: Clark Williams <williams@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Luca Abeni <luca.abeni@santannapisa.it>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tommaso Cucinotta <tommaso.cucinotta@santannapisa.it>
Fixes: e0367b12674b ("sched/deadline: Move CPU frequency selection triggering points")
Link: http://lkml.kernel.org/r/ca31d073a4788acf0684a8b255f14fea775ccf20.1532077269.git.bristot@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
This commit:
9fb8d5dc4b64 ("stop_machine, Disable preemption when waking two stopper threads")
does not fully address the race condition that can occur
as follows:
On one CPU, call it CPU 3, thread 1 invokes
cpu_stop_queue_two_works(2, 3,...), and the execution is such
that thread 1 queues the works for migration/2 and migration/3,
and is preempted after releasing the locks for migration/2 and
migration/3, but before waking the threads.
Then, on CPU 2, a kworker, call it thread 2, is running,
and it invokes cpu_stop_queue_two_works(1, 2,...), such that
thread 2 queues the works for migration/1 and migration/2.
Meanwhile, on CPU 3, thread 1 resumes execution, and wakes
migration/2 and migration/3. This means that when CPU 2
releases the locks for migration/1 and migration/2, but before
it wakes those threads, it can be preempted by migration/2.
If thread 2 is preempted by migration/2, then migration/2 will
execute the first work item successfully, since migration/3
was woken up by CPU 3, but when it goes to execute the second
work item, it disables preemption, calls multi_cpu_stop(),
and thus, CPU 2 will wait forever for migration/1, which should
have been woken up by thread 2. However, migration/1 cannot be
woken up by thread 2: thread 2 is a kworker, so it is affine to
CPU 2, but CPU 2 is running migration/2 with preemption
disabled, so thread 2 will never run.
Disable preemption after queueing works for stopper threads
to ensure that the operation of queueing the works and waking
the stopper threads is atomic.
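A sketch of the intended structure of cpu_stop_queue_two_works() after the
change (illustrative only; the real function also handles the lock ordering
and the retry path):

    DEFINE_WAKE_Q(wakeq);

    /* Queueing both works and waking both stopper threads must look
     * atomic to the stoppers, so keep preemption off across both. */
    preempt_disable();
    /* ... take the two stopper locks, queue both works onto wakeq,
     *     drop the locks ... */
    wake_up_q(&wakeq);
    preempt_enable();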
Co-Developed-by: Prasad Sodagudi <psodagud@codeaurora.org>
Co-Developed-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org>
Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org>
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bigeasy@linutronix.de
Cc: gregkh@linuxfoundation.org
Cc: matt@codeblueprint.co.uk
Fixes: 9fb8d5dc4b64 ("stop_machine, Disable preemption when waking two stopper threads")
Link: http://lkml.kernel.org/r/1531856129-9871-1-git-send-email-isaacm@codeaurora.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
The 'group' variable in sched_domain_debug_one() is not checked
when first used in cpumask_test_cpu(cpu, sched_group_span(group)),
but it might be NULL (it is only checked later, in the following while loop)
and may cause a NULL pointer dereference.
We need to check it before use to avoid the NULL dereference.
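A minimal sketch of the guard (illustrative; the real sched_domain_debug_one()
prints more context before bailing out):

    struct sched_group *group = sd->groups;

    if (!group) {
        printk(KERN_ERR "ERROR: domain->groups is NULL\n");
        return -1;
    }

    if (!cpumask_test_cpu(cpu, sched_group_span(group)))
        printk(KERN_ERR "ERROR: domain->span does not contain CPU%d\n", cpu);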
Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Jiang Biao <jiang.biao2@zte.com.cn>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: zhong.weidong@zte.com.cn
Link: http://lkml.kernel.org/r/1532319547-33335-1-git-send-email-wang.yi59@zte.com.cn
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
The LOCK_PREFIX macro should be used in the __raw_callee_save___pv_queued_spin_unlock()
assembly code, so that the lock prefix can be patched out on UP systems.
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Joe Mario <jmario@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/1531858560-21547-1-git-send-email-longman@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
If an i2c topology has instances of nested muxes, then a lockdep splat
is produced when i2c_parent_lock_bus() is called. Here is an
example:
============================================
WARNING: possible recursive locking detected
--------------------------------------------
insmod/68159 is trying to acquire lock:
(i2c_register_adapter#2){+.+.}, at: i2c_parent_lock_bus+0x32/0x50 [i2c_mux]
but task is already holding lock:
(i2c_register_adapter#2){+.+.}, at: i2c_parent_lock_bus+0x32/0x50 [i2c_mux]
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(i2c_register_adapter#2);
lock(i2c_register_adapter#2);
*** DEADLOCK ***
May be due to missing lock nesting notation
1 lock held by insmod/68159:
#0: (i2c_register_adapter#2){+.+.}, at: i2c_parent_lock_bus+0x32/0x50 [i2c_mux]
stack backtrace:
CPU: 13 PID: 68159 Comm: insmod Tainted: G O
Call Trace:
dump_stack+0x67/0x98
__lock_acquire+0x162e/0x1780
lock_acquire+0xba/0x200
rt_mutex_lock+0x44/0x60
i2c_parent_lock_bus+0x32/0x50 [i2c_mux]
i2c_parent_lock_bus+0x3e/0x50 [i2c_mux]
i2c_smbus_xfer+0xf0/0x700
i2c_smbus_read_byte+0x42/0x70
my2c_init+0xa2/0x1000 [my2c]
do_one_initcall+0x51/0x192
do_init_module+0x62/0x216
load_module+0x20f9/0x2b50
SYSC_init_module+0x19a/0x1c0
SyS_init_module+0xe/0x10
do_syscall_64+0x6c/0x1a0
entry_SYSCALL_64_after_hwframe+0x42/0xb7
Reported-by: John Sperbeck <jsperbeck@google.com>
Tested-by: John Sperbeck <jsperbeck@google.com>
Signed-off-by: Peter Rosin <peda@axentia.se>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Deepa Dinamani <deepadinamani@google.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Chang <dpf@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Philippe Ombredanne <pombredanne@nexb.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Wolfram Sang <wsa@the-dreams.de>
Link: http://lkml.kernel.org/r/20180720083914.1950-3-peda@axentia.se
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Needed for annotating rt_mutex locks.
Tested-by: John Sperbeck <jsperbeck@google.com>
Signed-off-by: Peter Rosin <peda@axentia.se>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Deepa Dinamani <deepadinamani@google.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Chang <dpf@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Philippe Ombredanne <pombredanne@nexb.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Wolfram Sang <wsa@the-dreams.de>
Link: http://lkml.kernel.org/r/20180720083914.1950-2-peda@axentia.se
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
NVRAM is designed to work with Broadcom's SDK Linux kernel, which fakes
PCI domain 0 for all internal MMIO devices. Since the official Linux kernel
uses platform devices for that purpose, there is a mismatch in the numbering
of PCI domains.
There used to be a fix for that problem, but it was accidentally dropped
during the last firmware loading rework. That resulted in brcmfmac not
being able to extract device-specific NVRAM content and all kinds of
calibration problems.
Reported-by: Aditya Xavier <adityaxavier@gmail.com>
Fixes: 2baa3aaee27f ("brcmfmac: introduce brcmf_fw_alloc_request() function")
Cc: stable@vger.kernel.org # v4.17+
Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
Acked-by: Arend van Spriel <arend.vanspriel@broadcom.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
|
|
Add new device IDs for the 9000 series.
Cc: stable@vger.kernel.org # 4.14
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
|
|
Now that the early boot rework is upstream we can enable the gcc plugins
again. See git commit 72f108b308707f21499e0ac05bf7370360cf06d8
"s390: disable gcc plugins" for reference.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
|
|
The s390 build currently fails with the latent entropy plugin:
arch/s390/kernel/als.o: In function `verify_facilities':
als.c:(.init.text+0x24): undefined reference to `latent_entropy'
als.c:(.init.text+0xae): undefined reference to `latent_entropy'
make[3]: *** [arch/s390/boot/compressed/vmlinux] Error 1
make[2]: *** [arch/s390/boot/compressed/vmlinux] Error 2
make[1]: *** [bzImage] Error 2
This will be fixed with the early boot rework from Vasily, which
is planned for the 4.19 merge window.
For 4.18 the simplest solution is to disable the gcc plugins and
reenable them after the early boot rework is upstream.
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
(cherry picked from commit 2fba3573f1cf876ad94992c256c5c410039e60b4)
|
|
Martin KaFai Lau says:
====================
The series allows the BPF loader to figure out the btf_key_id
and btf_value_id from a map's name by using BPF_ANNOTATE_KV_PAIR()
similarly as in iproute2 commit f823f36012fb ("bpf: implement
btf handling and map annotation").
It also removes the old 'typedef' way which requires two separate
typedefs (one for the key and one for the value).
By doing this, iproute2 and libbpf have one consistent way to
figure out the btf_key_type_id and btf_value_type_id for a map.
The first two patches are some prep/cleanup work. The last patch
introduces BPF_ANNOTATE_KV_PAIR.
v3:
- Replace some more *int*_t and u* usages with the
equivalent __[su]* in btf.c
v2:
- Fix the incorrect '&&' check on container_type
in bpf_map_find_btf_info().
- Expose the existing static btf_type_by_id() instead of
creating a new one.
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
|
|
This patch introduces BPF_ANNOTATE_KV_PAIR to signal the
bpf loader about the btf key_type and value_type of a bpf map.
Please refer to the changes in test_btf_haskv.c for its usage.
Both iproute2 and libbpf loader will then have the same
convention to find out the map's btf_key_type_id and
btf_value_type_id from a map's name.
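A usage sketch modeled on test_btf_haskv.c (the map definition below is made
up, and BPF_ANNOTATE_KV_PAIR() is assumed to come from the loader's helper
header):

#include <linux/bpf.h>
#include "bpf_helpers.h"

struct ipv_counts {
    unsigned int v4;
    unsigned int v6;
};

struct bpf_map_def SEC("maps") btf_map = {
    .type        = BPF_MAP_TYPE_ARRAY,
    .key_size    = sizeof(int),
    .value_size  = sizeof(struct ipv_counts),
    .max_entries = 4,
};

/* Emits a typed key/value pair the loader can match to this map by name,
 * replacing the old pair of separate typedefs. */
BPF_ANNOTATE_KV_PAIR(btf_map, int, struct ipv_counts);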
Fixes: 8a138aed4a80 ("bpf: btf: Add BTF support to libbpf")
Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
|
|
This patch replaces [u]int32_t and [u]int64_t usage with
__[su]32 and __[su]64. The same change goes for [u]int16_t
and [u]int8_t.
Fixes: 8a138aed4a80 ("bpf: btf: Add BTF support to libbpf")
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
|
|
This patch syncs the uapi btf.h to tools/.
Fixes: 36fc3c8c282c ("bpf: btf: Clean up BTF_INT_BITS() in uapi btf.h")
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
|
|
The ar724x pci driver expects the PCIe controller to be brought out of
reset by the bootloader.
At least the AVM Fritz 300E bootloader doesn't take care of releasing
the different PCIe controller related resets which causes an endless
hang as soon as either the PCIE Reset register (0x180f0018) or the PCI
Application Control register (0x180f0000) is read from.
Do the full "PCIE Root Complex Initialization Sequence" if the PCIe
host controller is still in reset during probing.
The QCA u-boot sleeps 10ms after the PCIE Application Control bit is
set to ready. It has been shown that 10ms might not be enough time if
PCIe should be used right after setting the bit. During my tests it
took up to 20ms till the link was up. Giving the link up to 100ms
should work for all cases.
Signed-off-by: Mathias Kresin <dev@kresin.me>
Signed-off-by: John Crispin <john@phrozen.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19916/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
|
|
This patch ensures that the pinmux register is properly set up for the
boot console UART when early_printk is enabled.
[paul.burton@mips.com:
- s/poinmux/pinmux/
- s/uart/UART/
- Drop extraneous parentheses.]
Signed-off-by: Gabor Juhos <juhosg@openwrt.org>
Signed-off-by: John Crispin <john@phrozen.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
|
|
This patch adds a few additional cpu feature overrides so that they do not
need to be probed at runtime.
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: John Crispin <john@phrozen.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19914/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
|
|
This patch disables irq on reboot to fix hang issues that were observed
due to pending interrupts.
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: John Crispin <john@phrozen.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19913/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
|
|
The pinmux on QCA SoCs is controlled by a single register. The
"pinctrl-single" driver can be used but requires the target
to select PINCTRL.
Signed-off-by: John Crispin <john@phrozen.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19909/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
|
|
This patch adds support for 2 new types of QCA silicon. TP9343 is
essentially the same as the QCA956X but is licensed by TPLink.
Signed-off-by: Weijie Gao <hackpascal@gmail.com>
Signed-off-by: Matthias Schiffer <mschiffer@universe-factory.net>
Signed-off-by: John Crispin <john@phrozen.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19911/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
|
|
This patch adds many new registers for various QCA MIPS SoCs. The patch is
an aggregate of many contributions made to OpenWrt.
Signed-off-by: Gabor Juhos <juhosg@openwrt.org>
Signed-off-by: Henryk Heisig <hyniu@o2.pl>
Signed-off-by: Matthias Schiffer <mschiffer@universe-factory.net>
Signed-off-by: Weijie Gao <hackpascal@gmail.com>
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Julien Dusser <julien.dusser@free.fr>
Signed-off-by: John Crispin <john@phrozen.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19910/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux
Pull MIPS fixes from Paul Burton:
"A couple more MIPS fixes for 4.18:
- Fix an off-by-one in reporting PCI resource sizes to userland which
regressed in v3.12.
- Fix writes to DDR controller registers used to flush write buffers,
which regressed with some refactoring in v4.2"
* tag 'mips_fixes_4.18_4' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
MIPS: ath79: fix register address in ath79_ddr_wb_flush()
MIPS: Fix off-by-one in pci_resource_to_user()
|
|
Ocelot now has a u-boot port, so allow building FIT images instead of relying
on the legacy detection and builtin DTB.
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Reviewed-by: James Hogan <jhogan@kernel.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19632/
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
|
|
Get rid of extern declarations in .c files and include
the necessary header file. Also remove unused UART declarations.
Signed-off-by: Steven J. Hill <steven.hill@cavium.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19477/
Cc: linux-mips@linux-mips.org
|
|
Pull networking fixes from David Miller:
1) Handle stations tied to AP_VLANs properly during mac80211 hw
reconfig. From Manikanta Pubbisetty.
2) Fix jump stack depth validation in nf_tables, from Taehee Yoo.
3) Fix quota handling in aRFS flow expiration of mlx5 driver, from Eran
Ben Elisha.
4) Exit path handling fix in powerpc64 BPF JIT, from Daniel Borkmann.
5) Use ptr_ring_consume_bh() in page pool code, from Tariq Toukan.
6) Fix cached netdev name leak in nf_tables, from Florian Westphal.
7) Fix memory leaks on chain rename, also from Florian Westphal.
8) Several fixes to DCTCP congestion control ACK handling, from Yuchung
Cheng.
9) Missing rcu_read_unlock() in CAIF protocol code, from Yue Haibing.
10) Fix link local address handling with VRF, from David Ahern.
11) Don't clobber 'err' on a successful call to __skb_linearize() in
skb_segment(). From Eric Dumazet.
12) Fix vxlan fdb notification races, from Roopa Prabhu.
13) Hash UDP fragments consistently, from Paolo Abeni.
14) If TCP receives lots of out of order tiny packets, we do really
silly stuff. Make the out-of-order queue ending more robust to this
kind of behavior, from Eric Dumazet.
15) Don't leak netlink dump state in nf_tables, from Florian Westphal.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (76 commits)
net: axienet: Fix double deregister of mdio
qmi_wwan: fix interface number for DW5821e production firmware
ip: in cmsg IP(V6)_ORIGDSTADDR call pskb_may_pull
bnx2x: Fix invalid memory access in rss hash config path.
net/mlx4_core: Save the qpn from the input modifier in RST2INIT wrapper
r8169: restore previous behavior to accept BIOS WoL settings
cfg80211: never ignore user regulatory hint
sock: fix sg page frag coalescing in sk_alloc_sg
netfilter: nf_tables: move dumper state allocation into ->start
tcp: add tcp_ooo_try_coalesce() helper
tcp: call tcp_drop() from tcp_data_queue_ofo()
tcp: detect malicious patterns in tcp_collapse_ofo_queue()
tcp: avoid collapses in tcp_prune_queue() if possible
tcp: free batches of packets in tcp_prune_ofo_queue()
ip: hash fragments consistently
ipv6: use fib6_info_hold_safe() when necessary
can: xilinx_can: fix power management handling
can: xilinx_can: fix incorrect clear of non-processed interrupts
can: xilinx_can: fix RX overflow interrupt not being enabled
can: xilinx_can: keep only 1-2 frames in TX FIFO to fix TX accounting
...
|
|
It is not immediately obvious what the expected inputs to these fault
handlers are and how they calculate the number of unset bytes. Having
stared deeply at this in order to fix some corner cases, add some
comments to assist those who follow.
Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19339/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: <linux-mips@linux-mips.org>
Cc: <linux-kernel@vger.kernel.org>
|
|
The __clear_user function is defined to return the number of bytes that
could not be cleared. From the underlying memset / bzero implementation
this means setting register a2 to that number on return. Currently if a
page fault is triggered within the MIPSr6 version of setting the initial
unaligned bytes, the value loaded into a2 on return is meaningless.
During the MIPSr6 version of the initial unaligned bytes block, register
a2 contains the number of bytes to be set beyond the initial unaligned
bytes. The t0 register is initially set to the number of unaligned bytes
- STORSIZE, effectively a negative version of the number of unaligned
bytes. This is then incremented before each byte is saved.
The label .Lbyte_fixup\@ is jumped to on page fault. Currently the value
in a2 is incorrectly replaced by 0 - t0 + 1, effectively the number of
unaligned bytes remaining. This leads to the failures being reported by
the following test code:
static int __init test_clear_user(void)
{
    int j, k;

    pr_info("\n\n\nTesting clear_user\n");
    for (j = 0; j < 512; j++) {
        if ((k = clear_user(NULL+3, j)) != j) {
            pr_err("clear_user (NULL %d) returned %d\n", j, k);
        }
    }
    return 0;
}
late_initcall(test_clear_user);
Which reports:
[ 3.965439] Testing clear_user
[ 3.973169] clear_user (NULL 8) returned 6
[ 3.976782] clear_user (NULL 9) returned 6
[ 3.980390] clear_user (NULL 10) returned 6
[ 3.984052] clear_user (NULL 11) returned 6
[ 3.987524] clear_user (NULL 12) returned 6
Fix this by subtracting t0 from a2 (rather than $0), effectively giving:
unset_bytes = (#bytes - (#unaligned bytes)) - (-#unaligned bytes remaining + 1) + 1
a2 = a2 - t0 + 1
This fixes the value returned from __clear_user when the number of bytes
to set is > LONGSIZE and the address is invalid and unaligned.
Unfortunately, this breaks the fixup handling for unaligned bytes after
the final long, where register a2 still contains the number of bytes
remaining to be set and the t0 register is set to 0 minus the number of
unaligned bytes remaining.
Because t0 is now subtracted from a2 rather than 0, the number of
bytes unset is reported incorrectly:
static int __init test_clear_user(void)
{
    char *test;
    int j, k;

    pr_info("\n\n\nTesting clear_user\n");
    test = vmalloc(PAGE_SIZE);
    for (j = 256; j < 512; j++) {
        if ((k = clear_user(test + PAGE_SIZE - 254, j)) != j - 254) {
            pr_err("clear_user (%px %d) returned %d\n",
                   test + PAGE_SIZE - 254, j, k);
        }
    }
    return 0;
}
late_initcall(test_clear_user);
[ 3.976775] clear_user (c00000000000df02 256) returned 4
[ 3.981957] clear_user (c00000000000df02 257) returned 6
[ 3.986425] clear_user (c00000000000df02 258) returned 8
[ 3.990850] clear_user (c00000000000df02 259) returned 10
[ 3.995332] clear_user (c00000000000df02 260) returned 12
[ 3.999815] clear_user (c00000000000df02 261) returned 14
Fix this by ensuring that a2 is set to 0 during the set of final
unaligned bytes.
Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes: 8c56208aff77 ("MIPS: lib: memset: Add MIPS R6 support")
Patchwork: https://patchwork.linux-mips.org/patch/19338/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org # v4.0+
|
|
If the registration fails then mdio_unregister is called.
However, at unbind the unregister is attempted again, resulting
in the crash below:
[ 73.544038] kernel BUG at drivers/net/phy/mdio_bus.c:415!
[ 73.549362] Internal error: Oops - BUG: 0 [#1] SMP
[ 73.554127] Modules linked in:
[ 73.557168] CPU: 0 PID: 2249 Comm: sh Not tainted 4.14.0 #183
[ 73.562895] Hardware name: xlnx,zynqmp (DT)
[ 73.567062] task: ffffffc879e41180 task.stack: ffffff800cbe0000
[ 73.572973] PC is at mdiobus_unregister+0x84/0x88
[ 73.577656] LR is at axienet_mdio_teardown+0x18/0x30
[ 73.582601] pc : [<ffffff80085fa4cc>] lr : [<ffffff8008616858>]
pstate: 20000145
[ 73.589981] sp : ffffff800cbe3c30
[ 73.593277] x29: ffffff800cbe3c30 x28: ffffffc879e41180
[ 73.598573] x27: ffffff8008a21000 x26: 0000000000000040
[ 73.603868] x25: 0000000000000124 x24: ffffffc879efe920
[ 73.609164] x23: 0000000000000060 x22: ffffffc879e02000
[ 73.614459] x21: ffffffc879e02800 x20: ffffffc87b0b8870
[ 73.619754] x19: ffffffc879e02800 x18: 000000000000025d
[ 73.625050] x17: 0000007f9a719ad0 x16: ffffff8008195bd8
[ 73.630345] x15: 0000007f9a6b3d00 x14: 0000000000000010
[ 73.635640] x13: 74656e7265687465 x12: 0000000000000030
[ 73.640935] x11: 0000000000000030 x10: 0101010101010101
[ 73.646231] x9 : 241f394f42533300 x8 : ffffffc8799f6e98
[ 73.651526] x7 : ffffffc8799f6f18 x6 : ffffffc87b0ba318
[ 73.656822] x5 : ffffffc87b0ba498 x4 : 0000000000000000
[ 73.662117] x3 : 0000000000000000 x2 : 0000000000000008
[ 73.667412] x1 : 0000000000000004 x0 : ffffffc8799f4000
[ 73.672708] Process sh (pid: 2249, stack limit = 0xffffff800cbe0000)
Fix this by setting the bus to NULL on unregister.
Signed-off-by: Shubhrajyoti Datta <shubhrajyoti.datta@xilinx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The original mapping for the DW5821e was done using a development
version of the firmware. Confirmed with the vendor that the final
USB layout ends up exposing the QMI control/data ports in USB
config #1, interface #0, not in interface #1 (which is now a HID
interface).
T: Bus=01 Lev=03 Prnt=04 Port=00 Cnt=01 Dev#= 16 Spd=480 MxCh= 0
D: Ver= 2.10 Cls=ef(misc ) Sub=02 Prot=01 MxPS=64 #Cfgs= 2
P: Vendor=413c ProdID=81d7 Rev=03.18
S: Manufacturer=DELL
S: Product=DW5821e Snapdragon X20 LTE
S: SerialNumber=0123456789ABCDEF
C: #Ifs= 6 Cfg#= 1 Atr=a0 MxPwr=500mA
I: If#= 0 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=ff Driver=qmi_wwan
I: If#= 1 Alt= 0 #EPs= 1 Cls=03(HID ) Sub=00 Prot=00 Driver=usbhid
I: If#= 2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I: If#= 3 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I: If#= 4 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
I: If#= 5 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option
Fixes: e7e197edd09c25 ("qmi_wwan: add support for the Dell Wireless 5821e module")
Signed-off-by: Aleksander Morgado <aleksander@aleksander.es>
Acked-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Syzbot reported a read beyond the end of the skb head when returning
IPV6_ORIGDSTADDR:
BUG: KMSAN: kernel-infoleak in put_cmsg+0x5ef/0x860 net/core/scm.c:242
CPU: 0 PID: 4501 Comm: syz-executor128 Not tainted 4.17.0+ #9
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x185/0x1d0 lib/dump_stack.c:113
kmsan_report+0x188/0x2a0 mm/kmsan/kmsan.c:1125
kmsan_internal_check_memory+0x138/0x1f0 mm/kmsan/kmsan.c:1219
kmsan_copy_to_user+0x7a/0x160 mm/kmsan/kmsan.c:1261
copy_to_user include/linux/uaccess.h:184 [inline]
put_cmsg+0x5ef/0x860 net/core/scm.c:242
ip6_datagram_recv_specific_ctl+0x1cf3/0x1eb0 net/ipv6/datagram.c:719
ip6_datagram_recv_ctl+0x41c/0x450 net/ipv6/datagram.c:733
rawv6_recvmsg+0x10fb/0x1460 net/ipv6/raw.c:521
[..]
This logic and its ipv4 counterpart read the destination port from
the packet at skb_transport_offset(skb) + 4.
With MSG_MORE and a local SOCK_RAW sender, syzbot was able to cook a
packet that stores headers exactly up to skb_transport_offset(skb) in
the head and the remainder in a frag.
Call pskb_may_pull() before accessing the pointer to ensure that it lies
in the skb head.
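A sketch of the guard on the IPv4 side (illustrative; names follow
ip_cmsg_recv_dstaddr(), but this is not the exact upstream diff):

    __be16 *ports;
    int end = skb_transport_offset(skb) + 4;

    /* The port may sit in a frag: make sure the first four bytes of the
     * transport header are in the linear area before dereferencing. */
    if (!pskb_may_pull(skb, end))
        return;

    ports = (__be16 *)skb_transport_header(skb);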
Link: http://lkml.kernel.org/r/CAF=yD-LEJwZj5a1-bAAj2Oy_hKmGygV6rsJ_WOrAYnv-fnayiQ@mail.gmail.com
Reported-by: syzbot+9adb4b567003cac781f0@syzkaller.appspotmail.com
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Rx hash/filter table configuration uses rss_conf_obj to configure filters
in the hardware. This object is initialized only when the interface is
brought up.
This patch adds driver changes to configure rss params only when the device
is in the opened state. In the port-disabled case, the config will be cached in
the driver structure and applied in the subsequent load path.
Please consider applying it to the 'net' branch.
Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Function mlx4_RST2INIT_QP_wrapper saved the qp number passed in the qp
context, rather than the one passed in the input modifier.
However, the qp number in the qp context is not defined as a
required parameter by the FW. Therefore, drivers may choose to not
specify the qp number in the qp context for the reset-to-init transition.
Thus, we must save the qp number passed in the command input modifier --
which is always present. (This saved qp number is used as the input
modifier for command 2RST_QP when a slave's qp's are destroyed).
Fixes: c82e9aa0a8bc ("mlx4_core: resource tracking for HCA resources used by guests")
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The commit cited below checked that the port numbers provided in the
primary and alt AVs are legal.
That is sufficient to prevent a kernel panic. However, it is not
sufficient for correct operation.
In Linux, AVs (both primary and alt) must be completely self-described.
We do not accept an AV from userspace without an embedded port number.
(This has been the case since kernel 3.14 commit dbf727de7440
("IB/core: Use GID table in AH creation and dmac resolution")).
For the primary AV, this embedded port number must match the port number
specified with IB_QP_PORT.
We also expect the port number embedded in the alt AV to match the
alt_port_num value passed by the userspace driver in the modify_qp command
base structure.
Add these checks to modify_qp.
Cc: <stable@vger.kernel.org> # 4.16
Fixes: 5d4c05c3ee36 ("RDMA/uverbs: Sanitize user entered port numbers prior to access it")
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|
|
Many architectural features have over time moved from being optional to
either be required or removed by newer architecture releases. This means
that in many cases we can know at compile time whether a feature will be
supported or not purely due to the knowledge we have about the ISA the
kernel build is targeting.
This patch introduces a bunch of utility macros for checking for
supported options, ASEs & combinations of those with ISA revisions. It
then makes use of these in the default definitions of cpu_has_* macros.
The result is that many of the macros become compile-time constant,
allowing more optimisation opportunities for the compiler - particularly
with kernels built for later ISA revisions.
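A simplified sketch of the idea (macro and flag names here are illustrative,
not the actual cpu-features.h helpers):

/* If the kernel is built for ISA revision >= 'isa', the feature is
 * architecturally required, so the test folds to a compile-time 1;
 * otherwise fall back to the runtime-probed option flag. */
#define __my_isa_ge_or_flag(isa, flag) \
    ((MIPS_ISA_REV >= (isa)) || !!(cpu_data[0].options & (flag)))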
To demonstrate the effect of this patch, the following table shows the
size in bytes of the kernel binary as reported by scripts/bloat-o-meter
for v4.12-rc4 maltasmvp_defconfig kernels with & without this patch. A
variant of maltasmvp_defconfig with CONFIG_CPU_MIPS32_R6 selected is
also shown, to demonstrate that MIPSr6 systems benefit more due to extra
features becoming required by that architecture revision. Builds of
pistachio_defconfig are also shown, as although this is a MIPSr2
platform it doesn't hardcode any features in a machine-specific
cpu-feature-overrides.h, which allows it to gain more from this patch
than the equivalent Malta r2 build.
Config | Before | After | Change
----------------|---------|---------|---------
maltasmvp | 7248316 | 7247714 | -602
maltasmvp + r6 | 6955595 | 6950777 | -4818
pistachio | 8650977 | 8363898 | -287079
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/16360/
Cc: Joshua Kinard <kumba@gentoo.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
|
|
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
|
Commit 7edf6d314cd0 tried to resolve an inconsistency (BIOS WoL
settings are accepted, but device isn't wakeup-enabled) resulting
from a previous broken-BIOS workaround by making disabled WoL the
default.
This however had some side effects: most likely due to a broken BIOS,
some systems don't properly resume from suspend when the MagicPacket
WoL bit isn't set in the chip, see
https://bugzilla.kernel.org/show_bug.cgi?id=200195
Therefore restore the WoL behavior from 4.16.
Reported-by: Albert Astals Cid <aacid@kde.org>
Fixes: 7edf6d314cd0 ("r8169: disable WOL per default")
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Make use of the spi-gpio driver to provide SPI support on the Ingenic
JZ4780 SoC using the pins that can be used with the SSI0 device as
GPIOs, until such time as we have support for the Ingenic SPI/SSI
controller.
[paul.burton@mips.com: Rewrite commit message.]
Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19489/
Cc: James Hogan <jhogan@kernel.org>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: devicetree@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
|
|
Enable CONFIG_SPI_GPIO in ci20_defconfig, in order to make use of the
spi-gpio driver in a further commit.
[paul.burton@mips.com: Rewrite commit message.]
Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19488/
Cc: James Hogan <jhogan@kernel.org>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: devicetree@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
|
|
The scsi block layer requires that requests claimed by error handling be
completed by the error handler. A previous commit allowed completions
to proceed for blk-mq, breaking that assumption.
This patch prevents completions that may race with the timeout handler
by marking the state to complete, restoring the previous behavior.
Fixes: 12f5b931 ("blk-mq: Remove generation seqeunce")
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|