Age  Commit message  Author
2018-07-10  sch_cake: Conditionally split GSO segments  (Toke Høiland-Jørgensen)
At lower bandwidths, the transmission time of a single GSO segment can add an unacceptable amount of latency due to HOL blocking. Furthermore, with a software shaper, any tuning mechanism employed by the kernel to control the maximum size of GSO segments is thrown off by the artificial limit on bandwidth. For this reason, we split GSO segments into their individual packets iff the shaper is active and configured to a bandwidth <= 1 Gbps. Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk> Signed-off-by: David S. Miller <davem@davemloft.net>
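As a rough illustration of why this matters (a standalone back-of-the-envelope sketch, not code from the patch), the serialization delay of a full-size GSO super-packet is negligible at 1 Gbps but dominates latency at typical last-mile rates:

#include <stdio.h>

/* Transmission (serialization) time of one burst at a given line rate.
 * A 64 KB GSO super-packet occupies a 10 Mbit/s link for ~52 ms of pure
 * HOL blocking, but only ~0.5 ms at 1 Gbit/s, which is why splitting is
 * only done when the shaper is configured to 1 Gbps or below.
 */
static double tx_time_ms(double bytes, double rate_bps)
{
    return bytes * 8.0 / rate_bps * 1000.0;
}

int main(void)
{
    printf("64 KB GSO @ 10 Mbit/s: %.1f ms\n", tx_time_ms(65536, 10e6));
    printf("64 KB GSO @ 1 Gbit/s:  %.2f ms\n", tx_time_ms(65536, 1e9));
    return 0;
}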
2018-07-10  sch_cake: Add overhead compensation support to the rate shaper  (Toke Høiland-Jørgensen)
This commit adds configurable overhead compensation support to the rate shaper. With this feature, userspace can configure the actual bottleneck link overhead and encapsulation mode used, which will be used by the shaper to calculate the precise duration of each packet on the wire. This feature is needed because CAKE is often deployed one or two hops upstream of the actual bottleneck (which can be, e.g., inside a DSL or cable modem). In this case, the link layer characteristics and overhead reported by the kernel do not match the actual bottleneck. Being able to set the actual values in use makes it possible to configure the shaper rate much closer to the actual bottleneck rate (our experience shows it is possible to get within 0.1% of the actual physical bottleneck rate), thus keeping latency low without sacrificing bandwidth. The overhead compensation has three tunables: a fixed per-packet overhead size (which, if set, will be accounted from the IP packet header), a minimum packet size (MPU) and a framing mode supporting either ATM or PTM framing. We include a set of common keywords in TC to help users configure the right parameters. If no overhead value is set, the value reported by the kernel is used. Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk> Signed-off-by: David S. Miller <davem@davemloft.net>
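As a minimal userspace sketch of the kind of wire-length calculation described above (the function name, the rounding details and the example overhead value are illustrative assumptions, not the qdisc's code):

#include <stdio.h>

/* Add the configured per-packet overhead, enforce the MPU, and for ATM
 * framing round up to whole 53-byte cells carrying 48 bytes of payload.
 */
static unsigned int wire_len(unsigned int ip_len, unsigned int overhead,
                             unsigned int mpu, int atm)
{
    unsigned int len = ip_len + overhead;

    if (len < mpu)
        len = mpu;
    if (atm)
        len = (len + 47) / 48 * 53;   /* 48 payload bytes per 53-byte cell */
    return len;
}

int main(void)
{
    /* e.g. an ADSL link with 10 bytes of encapsulation overhead */
    printf("1500-byte IP packet -> %u bytes on the wire\n",
           wire_len(1500, 10, 0, 1));
    return 0;
}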
2018-07-10  sch_cake: Add DiffServ handling  (Toke Høiland-Jørgensen)
This adds support for DiffServ-based priority queueing to CAKE. If the shaper is in use, each priority tier gets its own virtual clock, which limits that tier's rate to a fraction of the overall shaped rate, to discourage trying to game the priority mechanism. CAKE defaults to a simple, three-tier mode that interprets most code points as "best effort", but places CS1 traffic into a low-priority "bulk" tier which is assigned 1/16 of the total rate, and a few code points indicating latency-sensitive or control traffic (specifically TOS4, VA, EF, CS6, CS7) into a "latency sensitive" high-priority tier, which is assigned 1/4 rate. The other supported DiffServ modes are a 4-tier mode matching the 802.11e precedence rules, as well as two 8-tier modes, one of which implements strict precedence of the eight priority levels. This commit also adds an optional DiffServ 'wash' mode, which will zero out the DSCP fields of any packet passing through CAKE. While this can technically be done with other mechanisms in the kernel, having the feature available in CAKE significantly decreases configuration complexity; and the implementation cost is low on top of the other DiffServ-handling code. Filters and applications can set the skb->priority field to override the DSCP-based classification into tiers. If TC_H_MAJ(skb->priority) matches CAKE's qdisc handle, the minor number will be interpreted as a priority tier if it is less than or equal to the number of configured priority tiers. Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk> Signed-off-by: David S. Miller <davem@davemloft.net>
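As a rough sketch of the default three-tier classification described above (the DSCP numbers are the standard code point values; the mapping function itself is illustrative, not the driver source):

/* diffserv3 sketch: CS1 -> Bulk (1/16 rate), a few latency-sensitive
 * code points -> Voice (1/4 rate), everything else -> Best Effort.
 */
enum tier { TIER_BULK, TIER_BEST_EFFORT, TIER_VOICE };

static enum tier diffserv3_tier(unsigned int dscp)
{
    switch (dscp) {
    case 0x08:          /* CS1 */
        return TIER_BULK;
    case 0x04:          /* TOS4 (legacy "minimize delay") */
    case 0x2c:          /* VA  */
    case 0x2e:          /* EF  */
    case 0x30:          /* CS6 */
    case 0x38:          /* CS7 */
        return TIER_VOICE;
    default:
        return TIER_BEST_EFFORT;
    }
}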
2018-07-10  sch_cake: Add NAT awareness to packet classifier  (Toke Høiland-Jørgensen)
When CAKE is deployed on a gateway that also performs NAT (which is a common deployment mode), the host fairness mechanism cannot distinguish internal hosts from each other, and so fails to work correctly. To fix this, we add an optional NAT awareness mode, which will query the kernel conntrack mechanism to obtain the pre-NAT addresses for each packet and use that in the flow and host hashing. When the shaper is enabled and the host is already performing NAT, the cost of this lookup is negligible. However, in unlimited mode with no NAT being performed, there is a significant CPU cost at higher bandwidths. For this reason, the feature is turned off by default. Cc: netfilter-devel@vger.kernel.org Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-10  netfilter: Add nf_ct_get_tuple_skb global lookup function  (Toke Høiland-Jørgensen)
This adds a global netfilter function to extract a conntrack tuple from an skb. The function uses a new function added to nf_ct_hook, which will try to get the tuple from skb->_nfct, and do a full lookup if that fails. This makes it possible to use the lookup function before the skb has passed through the conntrack init hooks (e.g., in an ingress qdisc). The tuple is copied to the caller to avoid issues with reference counting. The function returns false if conntrack is not loaded, allowing it to be used without incurring a module dependency on conntrack. This is used by the NAT mode in sch_cake. Cc: netfilter-devel@vger.kernel.org Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-10  sch_cake: Add optional ACK filter  (Toke Høiland-Jørgensen)
The ACK filter is an optional feature of CAKE which is designed to improve performance on links with very asymmetrical rate limits. On such links (which are unfortunately quite prevalent, especially for DSL and cable subscribers), the downstream throughput can be limited by the number of ACKs capable of being transmitted in the *upstream* direction. Filtering ACKs can, in general, have adverse effects on TCP performance because it interferes with ACK clocking (especially in slow start), and it reduces the flow's resiliency to ACKs being dropped further along the path. To alleviate these drawbacks, the ACK filter in CAKE tries its best to always keep enough ACKs queued to ensure forward progress in the TCP flow being filtered. It does this by only filtering redundant ACKs. In its default 'conservative' mode, the filter will always keep at least two redundant ACKs in the queue, while in 'aggressive' mode, it will filter down to a single ACK. The ACK filter works by inspecting the per-flow queue on every packet enqueue. Starting at the head of the queue, the filter looks for another eligible packet to drop (so the ACK being dropped is always closer to the head of the queue than the packet being enqueued). An ACK is eligible only if it ACKs *fewer* bytes than the new packet being enqueued, including any SACK options. This prevents duplicate ACKs from being filtered, to avoid interfering with retransmission logic. In addition, we check TCP header options and only drop those that are known to not interfere with sender state. In particular, packets with unknown option codes are never dropped. In aggressive mode, an eligible packet is always dropped, while in conservative mode, at least two ACKs are kept in the queue. Only pure ACKs (with no data segments) are considered eligible for dropping, but when an ACK with data segments is enqueued, this can cause another pure ACK to become eligible for dropping. The approach described above ensures that this ACK filter avoids most of the drawbacks of a naive filtering mechanism that only keeps flow state but does not inspect the queue. This is the rationale for including the ACK filter in CAKE itself rather than as a separate module (as a TC filter, for instance). Our performance evaluation has shown that on a 30/1 Mbps link with a bidirectional traffic test (RRUL), turning on the ACK filter on the upstream link improves downstream throughput by ~20% (both modes) and upstream throughput by ~12% in conservative mode and ~40% in aggressive mode, at the cost of ~5ms of inter-flow latency due to the increased congestion. In *really* pathological cases, the effect can be a lot larger; for instance, the ACK filter increases the achievable downstream throughput on a link with 100 Kbps in the upstream direction by an order of magnitude (from ~2.5 Mbps to ~25 Mbps). Finally, even though we consider the ACK filter to be safer than most, we do not recommend turning it on everywhere: on more symmetrical link bandwidths the effect is negligible at best. Cc: Yuchung Cheng <ycheng@google.com> Cc: Neal Cardwell <ncardwell@google.com> Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk> Signed-off-by: David S. Miller <davem@davemloft.net>
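A deliberately simplified userspace sketch of the eligibility rule described above (the struct and field names are illustrative; the real filter works on kernel skbs and also walks SACK blocks and TCP options):

#include <stdbool.h>
#include <stdint.h>

struct seg {
    bool     pure_ack;      /* ACK set, no payload, no SYN/FIN/RST   */
    bool     unknown_opts;  /* carries TCP options we cannot verify  */
    uint32_t ack_seq;       /* cumulative acknowledgement number     */
    uint32_t sacked_bytes;  /* bytes covered by SACK options         */
};

static bool seq_before(uint32_t a, uint32_t b)
{
    return (int32_t)(a - b) < 0;    /* wrap-safe sequence comparison */
}

/* A queued ACK may be dropped only if it is a pure ACK with no unknown
 * options and it acknowledges strictly less than the packet being
 * enqueued, so duplicate ACKs needed for fast retransmit survive.
 */
static bool ack_filter_eligible(const struct seg *queued,
                                const struct seg *incoming)
{
    if (!queued->pure_ack || queued->unknown_opts)
        return false;
    return seq_before(queued->ack_seq, incoming->ack_seq) &&
           queued->sacked_bytes <= incoming->sacked_bytes;
}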
2018-07-10  sch_cake: Add ingress mode  (Toke Høiland-Jørgensen)
The ingress mode is meant to be enabled when CAKE runs downlink of the actual bottleneck (such as on an IFB device). The mode changes the shaper to also account dropped packets to the shaped rate, as these have already traversed the bottleneck. Enabling ingress mode will also tune the AQM to always keep at least two packets queued *for each flow*. This is done by scaling the minimum queue occupancy level that will disable the AQM by the number of active bulk flows. The rationale for this is that retransmits are more expensive in ingress mode, since dropped packets have to traverse the bottleneck again when they are retransmitted; thus, being more lenient and keeping a minimum number of packets queued will improve throughput in cases where the number of active flows is so large that they saturate the bottleneck even at their minimum window size. This commit also adds a separate switch to enable ingress mode rate autoscaling. If enabled, the autoscaling code will observe the actual traffic rate and adjust the shaper rate to match it. This can help avoid latency increases in the case where the actual bottleneck rate decreases below the shaped rate. The scaling filters out spikes with an EWMA filter. Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk> Signed-off-by: David S. Miller <davem@davemloft.net>
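As a minimal sketch of the EWMA smoothing mentioned for the rate estimate (the smoothing constant is an arbitrary example, not the value used by the qdisc):

/* Exponentially weighted moving average: spikes in the observed rate
 * are damped, while sustained changes still track the bottleneck.
 */
static double ewma_update(double avg, double sample, double alpha)
{
    return avg + alpha * (sample - avg);    /* with 0 < alpha <= 1 */
}

/* e.g. rate_est = ewma_update(rate_est, observed_bps, 0.125); */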
2018-07-10  sched: Add Common Applications Kept Enhanced (cake) qdisc  (Toke Høiland-Jørgensen)
sch_cake targets the home router use case and is intended to squeeze the most bandwidth and latency out of even the slowest ISP links and routers, while presenting an API simple enough that even an ISP can configure it.

Example of use on a cable ISP uplink:
  tc qdisc add dev eth0 cake bandwidth 20Mbit nat docsis ack-filter
To shape a cable download link (ifb and tc-mirred setup elided):
  tc qdisc add dev ifb0 cake bandwidth 200mbit nat docsis ingress wash

CAKE is filled with:
* A hybrid Codel/Blue AQM algorithm, "Cobalt", tied to an FQ_Codel derived Flow Queuing system, which autoconfigures based on the bandwidth.
* A novel "triple-isolate" mode (the default) which balances per-host and per-flow FQ even through NAT.
* A deficit-based shaper that can also be used in an unlimited mode.
* 8-way set-associative hashing to reduce flow collisions to a minimum.
* A reasonable interpretation of various diffserv latency/loss tradeoffs.
* Support for zeroing diffserv markings for entering and exiting traffic.
* Support for interacting well with Docsis 3.0 shaper framing.
* Extensive support for DSL framing types.
* Support for ack filtering.
* Extensive statistics for measuring loss, ECN markings, and latency variation.

A paper describing the design of CAKE is available at https://arxiv.org/abs/1804.07617, and will be published at the 2018 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN).

This patch adds the base shaper and packet scheduler, while subsequent commits add the optional (configurable) features. The full userspace API and most data structures are included in this commit, but options not understood in the base version will be ignored.

Various versions have been baking as an out-of-tree build for kernel versions going back to 3.10, as the embedded router world has been running a few years behind mainline Linux. A stable version has been generally available on lede-17.01 and later.

sch_cake replaces a combination of iptables, tc filter, htb and fq_codel in the sqm-scripts, with sane defaults and vastly simpler configuration.

CAKE's principal author is Jonathan Morton, with contributions from Kevin Darbyshire-Bryant, Toke Høiland-Jørgensen, Sebastian Moeller, Ryan Mounce, Tony Ambardar, Dean Scarff, Nils Andreas Svee, Dave Täht, and Loganaden Velvindron. Testing from Pete Heist, Georgios Amanakis, and the many other members of the cake@lists.bufferbloat.net mailing list.

tc -s qdisc show dev eth2
  qdisc cake 8017: root refcnt 2 bandwidth 1Gbit diffserv3 triple-isolate split-gso rtt 100.0ms noatm overhead 38 mpu 84
   Sent 51504294511 bytes 37724591 pkt (dropped 6, overlimits 64958695 requeues 12)
   backlog 0b 0p requeues 12
   memory used: 1053008b of 15140Kb
   capacity estimate: 970Mbit
   min/max network layer size: 28 / 1500
   min/max overhead-adjusted size: 84 / 1538
   average network hdr offset: 14
                   Bulk   Best Effort        Voice
   thresh     62500Kbit         1Gbit      250Mbit
   target         5.0ms         5.0ms        5.0ms
   interval     100.0ms       100.0ms      100.0ms
   pk_delay         5us           5us          6us
   av_delay         3us           2us          2us
   sp_delay         2us           1us          1us
   backlog           0b            0b           0b
   pkts         3164050      25030267      9530280
   bytes     3227519915   35396974782  12879808898
   way_inds           0             8            0
   way_miss          21           366           25
   way_cols           0             0            0
   drops              5             0            1
   marks              0             0            0
   ack_drop           0             0            0
   sp_flows           1             3            0
   bk_flows           0             1            1
   un_flows           0             0            0
   max_len        68130         68130        68130

Tested-by: Pete Heist <peteheist@gmail.com> Tested-by: Georgios Amanakis <gamanakis@gmail.com> Signed-off-by: Dave Taht <dave.taht@gmail.com> Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk> Signed-off-by: David S. Miller <davem@davemloft.net>
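To illustrate the "8-way set-associative hashing" bullet above, a toy userspace sketch of the idea (the set/way counts and layout are illustrative only, not the qdisc's hash code):

#include <stdint.h>

#define WAYS 8
#define SETS 128

struct flow_slot {
    uint32_t tag;       /* flow hash currently owning this queue */
    int      backlog;   /* 0 means the queue is empty / reusable */
};

static struct flow_slot table[SETS][WAYS];

/* Hash the flow into a set, then probe the 8 ways of that set for an
 * exact match or a free slot, instead of accepting whatever (possibly
 * colliding) queue a plain modulo hash would give.
 */
static int flow_queue(uint32_t hash)
{
    struct flow_slot *set = table[hash % SETS];
    int i, empty = -1;

    for (i = 0; i < WAYS; i++) {
        if (set[i].backlog && set[i].tag == hash)
            return i;               /* existing queue for this flow */
        if (!set[i].backlog && empty < 0)
            empty = i;
    }
    if (empty >= 0) {
        set[empty].tag = hash;      /* claim a free way */
        return empty;
    }
    return hash % WAYS;             /* all ways busy: accept a collision */
}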
2018-07-10  scsi: cxlflash: fix assignment of the backend operations  (Cédric Le Goater)
commit cd43c221bb5e ("scsi: cxlflash: Isolate external module dependencies") introduced the use of ifdefs to avoid compilation errors when one of the possible backend drivers, CXL or OCXL, is not compiled. Unfortunately, the wrong defines are used and the backend ops are never assigned, leading to a kernel crash whenever the cxlflash module is loaded. Signed-off-by: Cédric Le Goater <clg@kaod.org> Acked-by: Matthew R. Ochs <mrochs@linux.vnet.ibm.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2018-07-10  scsi: qedi: Send driver state to MFW  (Manish Rangankar)
In an iSCSI offload BFS environment, the MFW requires the virtual link to be marked based upon the qedi load status. Signed-off-by: Manish Rangankar <manish.rangankar@qlogic.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2018-07-10  scsi: qedf: Send the driver state to MFW  (Saurav Kashyap)
The firmware needs to be notified when the driver is loaded and unloaded. Signed-off-by: Saurav Kashyap <saurav.kashyap@cavium.com> Signed-off-by: Chad Dupuis <chad.dupuis@cavium.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2018-07-10  scsi: hpsa: correct enclosure sas address  (Don Brace)
The original complaint was that lsscsi -t showed the same SAS address for the two enclosures (SEP devices). In fact, the SAS address was being set to the Enclosure Logical Identifier (ELI). Reviewed-by: Scott Teel <scott.teel@microsemi.com> Reviewed-by: Kevin Barnett <kevin.barnett@microsemi.com> Signed-off-by: Don Brace <don.brace@microsemi.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2018-07-10  scsi: sd_zbc: Fix variable type and bogus comment  (Damien Le Moal)
Fix the description of sd_zbc_check_zone_size() to correctly explain that the returned value is a number of device blocks, not bytes. Additionally, the 32-bit "ret" variable used in this function may truncate the 64-bit zone_blocks variable value upon return. To fix this, change the type of "ret" to s64. Fixes: ccce20fc79 ("sd_zbc: Avoid that resetting a zone fails sporadically") Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> Cc: Bart Van Assche <bart.vanassche@wdc.com> Cc: stable@kernel.org Reviewed-by: Hannes Reinecke <hare@suse.com> Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
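A small standalone demonstration of this class of truncation bug (the variable names mirror the commit message; the program is not the driver code):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t zone_blocks = 1ULL << 32;  /* a large zone size in blocks */
    uint32_t ret32 = zone_blocks;       /* silently truncates to 0     */
    int64_t  ret64 = zone_blocks;       /* an s64 keeps the value      */

    printf("u32 ret = %u\n", ret32);
    printf("s64 ret = %lld\n", (long long)ret64);
    return 0;
}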
2018-07-10  scsi: qla2xxx: Fix NULL pointer dereference for fcport search  (Chuck Anderson)
Crash dump shows following instructions crash> bt PID: 0 TASK: ffffffffbe412480 CPU: 0 COMMAND: "swapper/0" #0 [ffff891ee0003868] machine_kexec at ffffffffbd063ef1 #1 [ffff891ee00038c8] __crash_kexec at ffffffffbd12b6f2 #2 [ffff891ee0003998] crash_kexec at ffffffffbd12c84c #3 [ffff891ee00039b8] oops_end at ffffffffbd030f0a #4 [ffff891ee00039e0] no_context at ffffffffbd074643 #5 [ffff891ee0003a40] __bad_area_nosemaphore at ffffffffbd07496e #6 [ffff891ee0003a90] bad_area_nosemaphore at ffffffffbd074a64 #7 [ffff891ee0003aa0] __do_page_fault at ffffffffbd074b0a #8 [ffff891ee0003b18] do_page_fault at ffffffffbd074fc8 #9 [ffff891ee0003b50] page_fault at ffffffffbda01925 [exception RIP: qlt_schedule_sess_for_deletion+15] RIP: ffffffffc02e526f RSP: ffff891ee0003c08 RFLAGS: 00010046 RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffffffc0307847 RDX: 00000000000020e6 RSI: ffff891edbc377c8 RDI: 0000000000000000 RBP: ffff891ee0003c18 R8: ffffffffc02f0b20 R9: 0000000000000250 R10: 0000000000000258 R11: 000000000000b780 R12: ffff891ed9b43000 R13: 00000000000000f0 R14: 0000000000000006 R15: ffff891edbc377c8 ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018 #10 [ffff891ee0003c20] qla2x00_fcport_event_handler at ffffffffc02853d3 [qla2xxx] #11 [ffff891ee0003cf0] __dta_qla24xx_async_gnl_sp_done_333 at ffffffffc0285a1d [qla2xxx] #12 [ffff891ee0003de8] qla24xx_process_response_queue at ffffffffc02a2eb5 [qla2xxx] #13 [ffff891ee0003e88] qla24xx_msix_rsp_q at ffffffffc02a5403 [qla2xxx] #14 [ffff891ee0003ec0] __handle_irq_event_percpu at ffffffffbd0f4c59 #15 [ffff891ee0003f10] handle_irq_event_percpu at ffffffffbd0f4e02 #16 [ffff891ee0003f40] handle_irq_event at ffffffffbd0f4e90 #17 [ffff891ee0003f68] handle_edge_irq at ffffffffbd0f8984 #18 [ffff891ee0003f88] handle_irq at ffffffffbd0305d5 #19 [ffff891ee0003fb8] do_IRQ at ffffffffbda02a18 --- <IRQ stack> --- #20 [ffffffffbe403d30] ret_from_intr at ffffffffbda0094e [exception RIP: unknown or invalid address] RIP: 000000000000001f RSP: 0000000000000000 RFLAGS: fff3b8c2091ebb3f RAX: ffffbba5a0000200 RBX: 0000be8cdfa8f9fa RCX: 0000000000000018 RDX: 0000000000000101 RSI: 000000000000015d RDI: 0000000000000193 RBP: 0000000000000083 R8: ffffffffbe403e38 R9: 0000000000000002 R10: 0000000000000000 R11: ffffffffbe56b820 R12: ffff891ee001cf00 R13: ffffffffbd11c0a4 R14: ffffffffbe403d60 R15: 0000000000000001 ORIG_RAX: ffff891ee0022ac0 CS: 0000 SS: ffffffffffffffb9 bt: WARNING: possibly bogus exception frame #21 [ffffffffbe403dd8] cpuidle_enter_state at ffffffffbd67c6fd #22 [ffffffffbe403e40] cpuidle_enter at ffffffffbd67c907 #23 [ffffffffbe403e50] call_cpuidle at ffffffffbd0d98f3 #24 [ffffffffbe403e60] do_idle at ffffffffbd0d9b42 #25 [ffffffffbe403e98] cpu_startup_entry at ffffffffbd0d9da3 #26 [ffffffffbe403ec0] rest_init at ffffffffbd81d4aa #27 [ffffffffbe403ed0] start_kernel at ffffffffbe67d2ca #28 [ffffffffbe403f28] x86_64_start_reservations at ffffffffbe67c675 #29 [ffffffffbe403f38] x86_64_start_kernel at ffffffffbe67c6eb #30 [ffffffffbe403f50] secondary_startup_64 at ffffffffbd0000d5 Fixes: 040036bb0bc1 ("scsi: qla2xxx: Delay loop id allocation at login") Cc: <stable@vger.kernel.org> # v4.17+ Signed-off-by: Chuck Anderson <chuck.anderson@oracle.com> Signed-off-by: Himanshu Madhani <himanshu.madhani@cavium.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2018-07-10  scsi: qla2xxx: Fix kernel crash due to late workqueue allocation  (himanshu.madhani@cavium.com)
This patch fixes crash for FCoE adapter. Once driver initialization is complete, firmware will start posting Asynchronous Event, However driver has not yet allocated workqueue to process and queue up work. This delay of allocating workqueue results into NULL pointer access. The following stack trace is seen: [ 24.577259] BUG: unable to handle kernel NULL pointer dereference at 0000000000000102 [ 24.623133] PGD 0 P4D 0 [ 24.636760] Oops: 0000 [#1] SMP NOPTI [ 24.656942] Modules linked in: i2c_algo_bit drm_kms_helper sr_mod(+) syscopyarea sysfillrect sysimgblt cdrom fb_sys_fops ata_generic ttm pata_acpi sd_mod ahci pata_atiixp sfc(+) qla2xxx(+) libahci drm qla4xxx(+) nvme_fc hpsa mdio libiscsi qlcnic(+) nvme_fabrics scsi_transport_sas serio_raw mtd crc32c_intel libata nvme_core i2c_core scsi_transport_iscsi tg3 scsi_transport_fc bnx2 iscsi_boot_sysfs dm_multipath dm_mirror dm_region_hash dm_log dm_mod [ 24.887449] CPU: 0 PID: 177 Comm: kworker/0:3 Not tainted 4.17.0-rc6 #1 [ 24.925119] Hardware name: HP ProLiant DL385 G7, BIOS A18 08/15/2012 [ 24.962106] Workqueue: events work_for_cpu_fn [ 24.987098] RIP: 0010:__queue_work+0x1f/0x3a0 [ 25.011672] RSP: 0018:ffff992642ceba10 EFLAGS: 00010082 [ 25.042116] RAX: 0000000000000082 RBX: 0000000000000082 RCX: 0000000000000000 [ 25.083293] RDX: ffff8cf9abc6d7d0 RSI: 0000000000000000 RDI: 0000000000002000 [ 25.123094] RBP: 0000000000000000 R08: 0000000000025a40 R09: ffff8cf9aade2880 [ 25.164087] R10: 0000000000000000 R11: ffff992642ceb6f0 R12: ffff8cf9abc6d7d0 [ 25.202280] R13: 0000000000002000 R14: ffff8cf9abc6d7b8 R15: 0000000000002000 [ 25.242050] FS: 0000000000000000(0000) f9b5c00000(0000) knlGS:0000000000000000 [ 25.977565] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 26.010457] CR2: 0000000000000102 CR3: 000000030760a000 CR4: 00000000000406f0 [ 26.051048] Call Trace: [ 26.063572] ? __switch_to_asm+0x34/0x70 [ 26.086079] queue_work_on+0x24/0x40 [ 26.107090] qla2x00_post_work+0x81/0xb0 [qla2xxx] [ 26.133356] qla2x00_async_event+0x1ad/0x1a20 [qla2xxx] [ 26.164075] ? lock_timer_base+0x67/0x80 [ 26.186420] ? try_to_del_timer_sync+0x4d/0x80 [ 26.212284] ? del_timer_sync+0x35/0x40 [ 26.234080] ? schedule_timeout+0x165/0x2f0 [ 26.259575] qla82xx_poll+0x13e/0x180 [qla2xxx] [ 26.285740] qla2x00_mailbox_command+0x74b/0xf50 [qla2xxx] [ 26.319040] qla82xx_set_driver_version+0x13b/0x1c0 [qla2xxx] [ 26.352108] ? qla2x00_init_rings+0x206/0x3f0 [qla2xxx] [ 26.381733] qla2x00_initialize_adapter+0x35c/0x7f0 [qla2xxx] [ 26.413240] qla2x00_probe_one+0x1479/0x2390 [qla2xxx] [ 26.442055] local_pci_probe+0x3f/0xa0 [ 26.463108] work_for_cpu_fn+0x10/0x20 [ 26.483295] process_one_work+0x152/0x350 [ 26.505730] worker_thread+0x1cf/0x3e0 [ 26.527090] kthread+0xf5/0x130 [ 26.545085] ? max_active_store+0x80/0x80 [ 26.568085] ? kthread_bind+0x10/0x10 [ 26.589533] ret_from_fork+0x22/0x40 [ 26.610192] Code: 00 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 41 57 41 89 ff 41 56 41 55 41 89 fd 41 54 49 89 d4 55 48 89 f5 53 48 83 ec 0 86 02 01 00 00 01 0f 85 80 02 00 00 49 c7 c6 c0 ec 01 00 41 [ 27.308540] RIP: __queue_work+0x1f/0x3a0 RSP: ffff992642ceba10 [ 27.341591] CR2: 0000000000000102 [ 27.360208] ---[ end trace 01b7b7ae2c005cf3 ]--- Cc: <stable@vger.kernel.org> # v4.17+ Fixes: 9b3e0f4d4147 ("scsi: qla2xxx: Move work element processing out of DPC thread" Reported-by: Li Wang <liwang@redhat.com> Tested-by: Li Wang <liwang@redhat.com> Signed-off-by: Himanshu Madhani <himanshu.madhani@cavium.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2018-07-10  scsi: qla2xxx: Fix inconsistent DMA mem alloc/free  (Quinn Tran)
The GPNFT command allocates two buffers for the switch query. On completion, the same buffers were freed using a different size instead of the original size used at allocation time. This patch saves the size of the request and response buffers and uses that to free them. The following stack trace can be seen when using a debug kernel:
  dump_stack+0x19/0x1b
  __warn+0xd8/0x100
  warn_slowpath_fmt+0x5f/0x80
  check_unmap+0xfb/0xa20
  debug_dma_free_coherent+0x110/0x160
  qla24xx_sp_unmap+0x131/0x1e0 [qla2xxx]
  qla24xx_async_gnnft_done+0xb6/0x550 [qla2xxx]
  qla2x00_do_work+0x1ec/0x9f0 [qla2xxx]
Cc: <stable@vger.kernel.org> # v4.17+ Fixes: 33b28357dd00 ("scsi: qla2xxx: Fix Async GPN_FT for FCP and FC-NVMe scan") Reported-by: Ewan D. Milne <emilne@redhat.com> Signed-off-by: Quinn Tran <quinn.tran@cavium.com> Signed-off-by: Himanshu Madhani <himanshu.madhani@cavium.com> Signed-off-by: Himanshu Madhani <hmadhani@redhat.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2018-07-10  Merge tag 'mips_fixes_4.18_3' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux  (Linus Torvalds)
Pull MIPS fixes from Paul Burton:
 "A couple more MIPS fixes for 4.18:
  - Use async IPIs for arch_trigger_cpumask_backtrace() in order to avoid warnings & deadlocks, fixing a problem introduced in v3.19 with the fix trivial to backport as far as v4.9.
  - Fix ioremap()'s MMU/TLB backed path to avoid spuriously rejecting valid requests due to an incorrect belief that the memory region is backed by potentially-in-use RAM. This fixes a regression in v4.2"
* tag 'mips_fixes_4.18_3' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
  MIPS: Fix ioremap() RAM check
  MIPS: Use async IPIs for arch_trigger_cpumask_backtrace()
  MIPS: Call dump_stack() from show_regs()
2018-07-10  drm/amdgpu: Verify root PD is mapped into kernel address space (v4)  (Andrey Grodzovsky)
Problem: when a PD/PT update is made by the CPU and the root PD is not yet mapped, a page fault results. Fix: verify that the root PD is mapped into the CPU address space. v2: Make sure that we add the root PD to the relocated list, since it then gets mapped into the CPU address space by default in amdgpu_vm_update_directories. v3: Drop the change to not move kernel type BOs to the evicted list. v4: Remove the redundant bo move to the relocated list. Link: https://bugs.freedesktop.org/show_bug.cgi?id=107065 Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
2018-07-10  drm/amd/display: fix invalid function table override  (Christian König)
Otherwise we try to program hardware with the wrong watermark functions when multiple DCE generations are installed in one system. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Harry Wentland <harry.wentland@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
2018-07-10  drm/amdgpu: Reserve VM root shared fence slot for command submission (v3)  (Michel Dänzer)
Without this, there might not be enough slots, which could trigger the BUG_ON in reservation_object_add_shared_fence. v2: * Jump to the error label instead of returning directly (Jerry Zhang) v3: * Reserve slots for command submission after VM updates (Christian König) Cc: stable@vger.kernel.org Bugzilla: https://bugs.freedesktop.org/106418 Reported-by: mikhail.v.gavrilov@gmail.com Signed-off-by: Michel Dänzer <michel.daenzer@amd.com> Signed-off-by: Junwei Zhang <Jerry.Zhang@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-07-10  rseq/selftests: cleanup: Update comment above rseq_prepare_unload  (Mathieu Desnoyers)
rseq as it was merged does not have rseq_finish_*() in the user-space selftests anymore. Update the rseq_prepare_unload() helper comment to adapt to this reality. Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: linux-api@vger.kernel.org Cc: Peter Zijlstra <peterz@infradead.org> Cc: "Paul E . McKenney" <paulmck@linux.vnet.ibm.com> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Dave Watson <davejwatson@fb.com> Cc: Paul Turner <pjt@google.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: "H . Peter Anvin" <hpa@zytor.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Chris Lameter <cl@linux.com> Cc: Ben Maurer <bmaurer@fb.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Josh Triplett <josh@joshtriplett.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Michael Kerrisk <mtk.manpages@gmail.com> Cc: Joel Fernandes <joelaf@google.com> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: https://lkml.kernel.org/r/20180709195155.7654-7-mathieu.desnoyers@efficios.com
2018-07-10  rseq: Remove unused types_32_64.h uapi header  (Mathieu Desnoyers)
This header was introduced in the 4.18 merge window, and rseq does not need it anymore. Nuke it before the final release. Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: linux-api@vger.kernel.org Cc: Peter Zijlstra <peterz@infradead.org> Cc: "Paul E . McKenney" <paulmck@linux.vnet.ibm.com> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Dave Watson <davejwatson@fb.com> Cc: Paul Turner <pjt@google.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: "H . Peter Anvin" <hpa@zytor.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Chris Lameter <cl@linux.com> Cc: Ben Maurer <bmaurer@fb.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Josh Triplett <josh@joshtriplett.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Michael Kerrisk <mtk.manpages@gmail.com> Cc: Joel Fernandes <joelaf@google.com> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: https://lkml.kernel.org/r/20180709195155.7654-6-mathieu.desnoyers@efficios.com
2018-07-10  rseq: uapi: Declare rseq_cs field as union, update includes  (Mathieu Desnoyers)
Declaring the rseq_cs field as a union between __u64 and two __u32 allows both 32-bit and 64-bit kernels to read the full __u64, and therefore validate that a 32-bit user-space cleared the upper 32 bits, thus ensuring a consistent behavior between native 32-bit kernels and 32-bit compat tasks on 64-bit kernels. Check that the rseq_cs value read is < TASK_SIZE. The asm/byteorder.h header needs to be included by rseq.h, now that it is not using linux/types_32_64.h anymore. Considering that only __u32 and __u64 types are declared in linux/rseq.h, the linux/types.h header should always be included for both kernel and user-space code: including stdint.h is just for u64 and u32, which are not used in this header at all. Use copy_from_user()/clear_user() to interact with a 64-bit field, because arm32 does not implement 64-bit __get_user, and ppc32 does not have a 64-bit get_user. Considering that the rseq_cs pointer does not need to be loaded/stored with single-copy atomicity from the kernel anymore, we can simply use copy_from_user()/clear_user(). Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: linux-api@vger.kernel.org Cc: Peter Zijlstra <peterz@infradead.org> Cc: "Paul E . McKenney" <paulmck@linux.vnet.ibm.com> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Dave Watson <davejwatson@fb.com> Cc: Paul Turner <pjt@google.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: "H . Peter Anvin" <hpa@zytor.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Chris Lameter <cl@linux.com> Cc: Ben Maurer <bmaurer@fb.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Josh Triplett <josh@joshtriplett.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Michael Kerrisk <mtk.manpages@gmail.com> Cc: Joel Fernandes <joelaf@google.com> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: https://lkml.kernel.org/r/20180709195155.7654-5-mathieu.desnoyers@efficios.com
2018-07-10  rseq: uapi: Update uapi comments  (Mathieu Desnoyers)
Update rseq uapi header comments to reflect that user-space needs to do thread-local loads/stores from/to the struct rseq fields. As a consequence of this added requirement, the kernel does not need to perform loads/stores with single-copy atomicity. Update the comment associated with the "flags" fields to describe more accurately that it's only useful to facilitate single-stepping through rseq critical sections with debuggers. Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: linux-api@vger.kernel.org Cc: Peter Zijlstra <peterz@infradead.org> Cc: "Paul E . McKenney" <paulmck@linux.vnet.ibm.com> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Dave Watson <davejwatson@fb.com> Cc: Paul Turner <pjt@google.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: "H . Peter Anvin" <hpa@zytor.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Chris Lameter <cl@linux.com> Cc: Ben Maurer <bmaurer@fb.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Josh Triplett <josh@joshtriplett.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Michael Kerrisk <mtk.manpages@gmail.com> Cc: Joel Fernandes <joelaf@google.com> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: https://lkml.kernel.org/r/20180709195155.7654-4-mathieu.desnoyers@efficios.com
2018-07-10  rseq: Use get_user/put_user rather than __get_user/__put_user  (Mathieu Desnoyers)
__get_user()/__put_user() is used to read values for address ranges that were already checked with access_ok() on rseq registration. It has been recognized that __get_user/__put_user are optimizing the wrong thing. Replace them by get_user/put_user across rseq instead. If those end up showing up in benchmarks, the proper approach would be to use user_access_begin() / unsafe_{get,put}_user() / user_access_end() anyway. Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: linux-api@vger.kernel.org Cc: Peter Zijlstra <peterz@infradead.org> Cc: "Paul E . McKenney" <paulmck@linux.vnet.ibm.com> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Dave Watson <davejwatson@fb.com> Cc: Paul Turner <pjt@google.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: "H . Peter Anvin" <hpa@zytor.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Chris Lameter <cl@linux.com> Cc: Ben Maurer <bmaurer@fb.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Josh Triplett <josh@joshtriplett.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Michael Kerrisk <mtk.manpages@gmail.com> Cc: Joel Fernandes <joelaf@google.com> Cc: linux-arm-kernel@lists.infradead.org Link: https://lkml.kernel.org/r/20180709195155.7654-3-mathieu.desnoyers@efficios.com
2018-07-10  rseq: Use __u64 for rseq_cs fields, validate user inputs  (Mathieu Desnoyers)
Change the rseq ABI so the rseq_cs start_ip, post_commit_offset and abort_ip fields are seen as 64-bit fields by both 32-bit and 64-bit kernels rather than ignoring the 32 upper bits on 32-bit kernels. This ensures we have a consistent behavior for a 32-bit binary executed on 32-bit kernels and in compat mode on 64-bit kernels. Validating the value of the abort_ip field to be below TASK_SIZE ensures the kernel doesn't return to an invalid address when returning to userspace after an abort. I don't fully trust each architecture's code to consistently deal with invalid return addresses. Validating the value of the start_ip and post_commit_offset fields prevents overflow on arithmetic performed on those values, used to check whether abort_ip is within the rseq critical section. If validation fails, the process is killed with a segmentation fault. When the signature encountered before abort_ip does not match the expected signature, return -EINVAL rather than -EPERM to be consistent with other input validation return codes from rseq_get_rseq_cs(). Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: linux-api@vger.kernel.org Cc: Peter Zijlstra <peterz@infradead.org> Cc: "Paul E . McKenney" <paulmck@linux.vnet.ibm.com> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Dave Watson <davejwatson@fb.com> Cc: Paul Turner <pjt@google.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: "H . Peter Anvin" <hpa@zytor.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Chris Lameter <cl@linux.com> Cc: Ben Maurer <bmaurer@fb.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Josh Triplett <josh@joshtriplett.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Michael Kerrisk <mtk.manpages@gmail.com> Cc: Joel Fernandes <joelaf@google.com> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: https://lkml.kernel.org/r/20180709195155.7654-2-mathieu.desnoyers@efficios.com
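An illustrative userspace sketch of the two validation rules described above (the struct layout and the TASK_SIZE stand-in are assumptions for the example, not the kernel's definitions):

#include <stdbool.h>
#include <stdint.h>

#define EXAMPLE_TASK_SIZE (1ULL << 47)  /* stand-in for the real TASK_SIZE */

struct rseq_cs_view {
    uint64_t start_ip;
    uint64_t post_commit_offset;
    uint64_t abort_ip;
};

/* abort_ip (and start_ip) must be userspace addresses, and
 * start_ip + post_commit_offset must not overflow, so the later
 * "is abort_ip inside the critical section?" arithmetic is safe.
 */
static bool rseq_cs_valid(const struct rseq_cs_view *cs)
{
    if (cs->abort_ip >= EXAMPLE_TASK_SIZE || cs->start_ip >= EXAMPLE_TASK_SIZE)
        return false;
    return cs->post_commit_offset <= EXAMPLE_TASK_SIZE - cs->start_ip;
}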
2018-07-10  clocksource: arm_arch_timer: Set arch_mem_timer cpumask to cpu_possible_mask  (Sudeep Holla)
Currently, the arch_mem_timer cpumask is set to cpu_all_mask, which should be fine. However, cpu_possible_mask is more accurate, and if there are other clockevent sources in the system which are set to cpu_possible_mask, then having cpu_all_mask may result in issues. E.g. on a platform with an arm,sp804 timer with rating 300 and cpu_possible_mask, and this arch_mem_timer timer with rating 400 and cpu_all_mask, tick_check_preferred may mark both as preferred, as the cpumasks are not equal even though they should be. This issue was initially root-caused incorrectly, and a fix was merged as commit 1332a9055801 ("tick: Prefer a lower rating device only if it's CPU local device"). Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Kevin Hilman <khilman@baylibre.com> Tested-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com> Cc: linux-arm-kernel@lists.infradead.org Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Link: https://lkml.kernel.org/r/1531151136-18297-2-git-send-email-sudeep.holla@arm.com
2018-07-10  Revert "tick: Prefer a lower rating device only if it's CPU local device"  (Sudeep Holla)
This reverts commit 1332a90558013ae4242e3dd7934bdcdeafb06c0d. The original issue was not because of incorrect checking of the cpumask for both the new and old tick device. It was incorrectly analysed as being due to a misunderstanding of the comment and a misinterpretation of the return value from tick_check_preferred. The main issue is with the clockevent driver that sets the cpumask to cpu_all_mask instead of cpu_possible_mask. Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Kevin Hilman <khilman@baylibre.com> Tested-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com> Cc: linux-arm-kernel@lists.infradead.org Cc: Marc Zyngier <marc.zyngier@arm.com> Link: https://lkml.kernel.org/r/1531151136-18297-1-git-send-email-sudeep.holla@arm.com
2018-07-10  Revert "drm/amd/display: Don't return ddc result and read_bytes in same return value"  (Alex Deucher)
This reverts commit 018d82e5f02ef3583411bcaa4e00c69786f46f19. This breaks DDC in certain cases. Revert for 4.18 and previous kernels. For 4.19, this is fixed with the following more extensive patches:
  drm/amd/display: Serialize is_dp_sink_present
  drm/amd/display: Break out function to simply read aux reply
  drm/amd/display: Return aux replies directly to DRM
  drm/amd/display: Right shift AUX reply value sooner than later
  drm/amd/display: Read AUX channel even if only status byte is returned
Link: https://lists.freedesktop.org/archives/amd-gfx/2018-July/023788.html Acked-by: Harry Wentland <harry.wentland@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
2018-07-10  Merge tag 'drm-fixes-2018-07-10' of git://anongit.freedesktop.org/drm/drm  (Linus Torvalds)
Pull drm fixes from Dave Airlie: "This just contains some etnaviv fixes and a MAINTAINERS update for the new drm tree locations"
* tag 'drm-fixes-2018-07-10' of git://anongit.freedesktop.org/drm/drm:
  MAINTAINERS: update drm tree
  drm/etnaviv: bring back progress check in job timeout handler
  drm/etnaviv: Fix driver unregistering
  drm/etnaviv: Check for platform_device_register_simple() failure
2018-07-10  driver core: Partially revert "driver core: correct device's shutdown order"  (Rafael J. Wysocki)
Commit 52cdbdd49853 (driver core: correct device's shutdown order) introduced a regression by breaking device shutdown on some systems. Namely, the devices_kset_move_last() call in really_probe() added by that commit is a mistake as it may cause parents to follow children in the devices_kset list which then causes shutdown to fail. For example, if a device has children before really_probe() is called for it (which is not uncommon), that call will cause it to be reordered after the children in the devices_kset list and the ordering of that list will not reflect the correct device shutdown order any more. Also it causes the devices_kset list to be constantly reordered until all drivers have been probed which is totally pointless overhead in the majority of cases and it only covered an issue with system shutdown, while system-wide suspend/resume potentially had the same issue on the affected platforms (which was not covered). Moreover, the shutdown issue originally addressed by the change in really_probe() made by commit 52cdbdd49853 is not present in 4.18-rc any more, since dra7 started to use the sdhci-omap driver which doesn't disable any regulators during shutdown, so the really_probe() part of commit 52cdbdd49853 can be safely reverted. [The original issue was related to the omap_hsmmc driver used by dra7 previously.] For the above reasons, revert the really_probe() modifications made by commit 52cdbdd49853. The other code changes made by commit 52cdbdd49853 are useful and they need not be reverted. Fixes: 52cdbdd49853 (driver core: correct device's shutdown order) Link: https://lore.kernel.org/lkml/CAFgQCTt7VfqM=UyCnvNFxrSw8Z6cUtAi3HUwR4_xPAc03SgHjQ@mail.gmail.com/ Reported-by: Pingfan Liu <kernelfans@gmail.com> Tested-by: Pingfan Liu <kernelfans@gmail.com> Reviewed-by: Kishon Vijay Abraham I <kishon@ti.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Cc: stable <stable@vger.kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-07-10  bpf: fix ldx in ld_abs rewrite for large offsets  (Daniel Borkmann)
Mark reported that syzkaller triggered a KASAN-detected slab-out-of-bounds bug in ___bpf_prog_run() with a BPF_LD | BPF_ABS word load at offset 0x8001. After further investigation it became clear that the issue was with BPF_LDX_MEM(), which takes the offset as an argument but cannot encode offsets larger than S16_MAX. For this synthetic case we need to move the full address into the tmp register instead and do the LDX without an immediate value. Fixes: e0cea7ce988c ("bpf: implement ld_abs/ld_ind in native bpf") Reported-by: syzbot <syzkaller@googlegroups.com> Reported-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
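For context, the instruction's displacement is a signed 16-bit field, so an offset like 0x8001 (32769) simply does not fit as an immediate; a minimal sketch of the range check involved (the constants come from stdint.h, the actual rewrite lives in the kernel's filter code):

#include <stdbool.h>
#include <stdint.h>

/* Offsets outside [INT16_MIN, INT16_MAX] cannot be encoded in the
 * 16-bit displacement and must instead be added into a temporary
 * register before performing the load.
 */
static bool fits_in_s16(int32_t off)
{
    return off >= INT16_MIN && off <= INT16_MAX;
}

/* fits_in_s16(0x8001) == false, hence the register-based fallback. */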
2018-07-10  Revert "arm64: Use aarch64elf and aarch64elfb emulation mode variants"  (Laura Abbott)
This reverts commit 38fc4248677552ce35efc09902fdcb06b61d7ef9. Distributions such as Fedora and Debian do not package the ELF linker scripts with their toolchains, resulting in kernel build failures such as:
| CHK include/generated/compile.h
| LD [M] arch/arm64/crypto/sha512-ce.o
| aarch64-linux-gnu-ld: cannot open linker script file ldscripts/aarch64elf.xr: No such file or directory
| make[1]: *** [scripts/Makefile.build:530: arch/arm64/crypto/sha512-ce.o] Error 1
| make: *** [Makefile:1029: arch/arm64/crypto] Error 2
Revert back to the linux targets for now, adding a comment to the Makefile so we don't accidentally break this in the future. Cc: Paul Kocialkowski <contact@paulk.fr> Cc: <stable@vger.kernel.org> Fixes: 38fc42486775 ("arm64: Use aarch64elf and aarch64elfb emulation mode variants") Tested-by: Kevin Hilman <khilman@baylibre.com> Signed-off-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-07-10  samples/bpf: Fix tc and ip paths in xdp2skb_meta.sh  (Taeung Song)
The path error below can occur:
  # ./xdp2skb_meta.sh --dev eth0 --list
  ./xdp2skb_meta.sh: line 61: /usr/sbin/tc: No such file or directory
So just use the command names instead of the absolute paths of tc and ip. In addition, this allows callers to redefine the $TC and $IP paths. Fixes: 36e04a2d78d9 ("samples/bpf: xdp2skb_meta shows transferring info from XDP to SKB") Reviewed-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Taeung Song <treeze.taeung@gmail.com> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-10  ext4: fix inline data updates with checksums enabled  (Theodore Ts'o)
The inline data code was updating the raw inode directly; this is problematic since if metadata checksums are enabled, ext4_mark_inode_dirty() must be called to update the inode's checksum. In addition, the jbd2 layer requires that get_write_access() be called before the metadata buffer is modified. Fix both of these problems. https://bugzilla.kernel.org/show_bug.cgi?id=200443 Signed-off-by: Theodore Ts'o <tytso@mit.edu> Cc: stable@vger.kernel.org
2018-07-10  char: amd64-agp: Use 64-bit arithmetic instead of 32-bit  (Gustavo A. R. Silva)
Cast *tmp* and *nb_base* to u64 in order to give the compiler complete information about the proper arithmetic to use. Notice that such variables are used in contexts that expect expressions of type u64 (64 bits, unsigned) and the following expressions are currently being evaluated using 32-bit arithmetic: tmp << 25 and nb_base << 25. Addresses-Coverity-ID: 200586 ("Unintentional integer overflow") Addresses-Coverity-ID: 200587 ("Unintentional integer overflow") Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
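A tiny standalone example of why the cast matters (nothing AGP-specific, just C's integer promotion rules):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t nb_base = 0x100;   /* any value >= 0x80 overflows a 32-bit shift */

    uint64_t wrong = nb_base << 25;             /* shifted in 32 bits, then widened */
    uint64_t right = (uint64_t)nb_base << 25;   /* shifted in 64 bits               */

    printf("wrong = 0x%llx\n", (unsigned long long)wrong);  /* 0x0         */
    printf("right = 0x%llx\n", (unsigned long long)right);  /* 0x200000000 */
    return 0;
}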
2018-07-10  char: agp: Change return type to vm_fault_t  (Souptick Joarder)
Use the new return type vm_fault_t for the fault handler. For now, this is just documenting that the function returns a VM_FAULT value rather than an errno. Once all instances are converted, vm_fault_t will become a distinct type. Ref: commit 1c8f422059ae ("mm: change return type to vm_fault_t"), which was added in 4.17-rc1 to introduce the new typedef vm_fault_t. Currently we are changing all drivers to return vm_fault_t from their page fault handlers. As part of that, the char/agp driver is also changed to return the vm_fault_t type from its fault handler. Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com> Reviewed-by: Matthew Wilcox <mawilcox@microsoft.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
2018-07-10  MAINTAINERS: update drm tree  (Daniel Vetter)
Mail to dri-devel went out, linux-next was updated, but we forgot this one here. Cc: David Airlie <airlied@linux.ie> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Dave Airlie <airlied@redhat.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180706072842.9009-1-daniel.vetter@ffwll.ch
2018-07-10  Merge branch 'etnaviv/fixes' of https://git.pengutronix.de/git/lst/linux into drm-fixes  (Dave Airlie)
Lucas wrote: "a couple of small fixes:
  - 2 patches from Fabio to fix module reloading
  - one patch to fix a userspace visible regression, where the job timeout is a bit too eager and kills legitimate jobs"
Signed-off-by: Dave Airlie <airlied@redhat.com> Link: https://patchwork.freedesktop.org/patch/msgid/1530868450.15725.8.camel@pengutronix.de
2018-07-09  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid  (Linus Torvalds)
Pull HID fixes from Jiri Kosina:
  - spectrev1 pattern fix in hiddev from Gustavo A. R. Silva
  - bounds check fix for hid-debug from Daniel Rosenberg
  - regression fix for HID autobinding from Benjamin Tissoires
  - removal of excessive logging from i2c-hid driver from Jason Andryuk
  - fix specific to 2nd generation of Wacom Intuos devices from Jason Gerecke
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid:
  HID: hiddev: fix potential Spectre v1
  HID: i2c-hid: Fix "incomplete report" noise
  HID: wacom: Correct touch maximum XY of 2nd-gen Intuos
  HID: debug: check length before copy_to_user()
  HID: core: allow concurrent registration of drivers
2018-07-09  Update TDA998x maintainer entry  (Russell King - ARM Linux)
Update my TDA998x HDMI encoder MAINTAINERS entry to include the dt-bindings header, and a keyword pattern to catch patches containing the DT compatible. Also change the status to "maintained" rather than "supported". Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-07-09  net: Use __u32 in uapi net_stamp.h  (Jesus Sanchez-Palencia)
We are not supposed to use u32 in uapi, so change the flags member of struct sock_txtime from u32 to __u32 instead. Fixes: 80b14dee2bea ("net: Add a new socket option for a future transmit time") Reported-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Jesus Sanchez-Palencia <jesus.sanchez-palencia@intel.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
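The uapi convention in question, shown as an abbreviated stand-in (the struct and field names here are illustrative; see include/uapi/linux/net_tstamp.h for the real definition):

#include <linux/types.h>

/* Kernel-internal typedefs like 'u32' are not visible to userspace, so
 * structures exported through include/uapi must use the __u32/__u64
 * variants from linux/types.h.
 */
struct sock_txtime_example {
    __s32 clockid;  /* reference clock for the transmit time */
    __u32 flags;    /* was declared 'u32' before this fix    */
};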
2018-07-09  rhashtable: add restart routine in rhashtable_free_and_destroy()  (Taehee Yoo)
rhashtable_free_and_destroy() cancels re-hash deferred work then walks and destroys elements. at this moment, some elements can be still in future_tbl. that elements are not destroyed. test case: nft_rhash_destroy() calls rhashtable_free_and_destroy() to destroy all elements of sets before destroying sets and chains. But rhashtable_free_and_destroy() doesn't destroy elements of future_tbl. so that splat occurred. test script: %cat test.nft table ip aa { map map1 { type ipv4_addr : verdict; elements = { 0 : jump a0, 1 : jump a0, 2 : jump a0, 3 : jump a0, 4 : jump a0, 5 : jump a0, 6 : jump a0, 7 : jump a0, 8 : jump a0, 9 : jump a0, } } chain a0 { } } flush ruleset table ip aa { map map1 { type ipv4_addr : verdict; elements = { 0 : jump a0, 1 : jump a0, 2 : jump a0, 3 : jump a0, 4 : jump a0, 5 : jump a0, 6 : jump a0, 7 : jump a0, 8 : jump a0, 9 : jump a0, } } chain a0 { } } flush ruleset %while :; do nft -f test.nft; done Splat looks like: [ 200.795603] kernel BUG at net/netfilter/nf_tables_api.c:1363! [ 200.806944] invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN PTI [ 200.812253] CPU: 1 PID: 1582 Comm: nft Not tainted 4.17.0+ #24 [ 200.820297] Hardware name: To be filled by O.E.M. To be filled by O.E.M./Aptio CRB, BIOS 5.6.5 07/08/2015 [ 200.830309] RIP: 0010:nf_tables_chain_destroy.isra.34+0x62/0x240 [nf_tables] [ 200.838317] Code: 43 50 85 c0 74 26 48 8b 45 00 48 8b 4d 08 ba 54 05 00 00 48 c7 c6 60 6d 29 c0 48 c7 c7 c0 65 29 c0 4c 8b 40 08 e8 58 e5 fd f8 <0f> 0b 48 89 da 48 b8 00 00 00 00 00 fc ff [ 200.860366] RSP: 0000:ffff880118dbf4d0 EFLAGS: 00010282 [ 200.866354] RAX: 0000000000000061 RBX: ffff88010cdeaf08 RCX: 0000000000000000 [ 200.874355] RDX: 0000000000000061 RSI: 0000000000000008 RDI: ffffed00231b7e90 [ 200.882361] RBP: ffff880118dbf4e8 R08: ffffed002373bcfb R09: ffffed002373bcfa [ 200.890354] R10: 0000000000000000 R11: ffffed002373bcfb R12: dead000000000200 [ 200.898356] R13: dead000000000100 R14: ffffffffbb62af38 R15: dffffc0000000000 [ 200.906354] FS: 00007fefc31fd700(0000) GS:ffff88011b800000(0000) knlGS:0000000000000000 [ 200.915533] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 200.922355] CR2: 0000557f1c8e9128 CR3: 0000000106880000 CR4: 00000000001006e0 [ 200.930353] Call Trace: [ 200.932351] ? nf_tables_commit+0x26f6/0x2c60 [nf_tables] [ 200.939525] ? nf_tables_setelem_notify.constprop.49+0x1a0/0x1a0 [nf_tables] [ 200.947525] ? nf_tables_delchain+0x6e0/0x6e0 [nf_tables] [ 200.952383] ? nft_add_set_elem+0x1700/0x1700 [nf_tables] [ 200.959532] ? nla_parse+0xab/0x230 [ 200.963529] ? nfnetlink_rcv_batch+0xd06/0x10d0 [nfnetlink] [ 200.968384] ? nfnetlink_net_init+0x130/0x130 [nfnetlink] [ 200.975525] ? debug_show_all_locks+0x290/0x290 [ 200.980363] ? debug_show_all_locks+0x290/0x290 [ 200.986356] ? sched_clock_cpu+0x132/0x170 [ 200.990352] ? find_held_lock+0x39/0x1b0 [ 200.994355] ? sched_clock_local+0x10d/0x130 [ 200.999531] ? memset+0x1f/0x40 V2: - free all tables requested by Herbert Xu Signed-off-by: Taehee Yoo <ap420073@gmail.com> Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-09  Merge branch 'bnxt_en-Bug-fixes'  (David S. Miller)
Michael Chan says:
====================
bnxt_en: Bug fixes.
These are bug fixes in error code paths, TC Flower VLAN TCI flow checking bug fix, proper filtering of Broadcast packets if IFF_BROADCAST is not set, and a bug fix in bnxt_get_max_rings() to return 0 ring parameters when the return value is -ENOMEM.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-09  bnxt_en: Fix for system hang if request_irq fails  (Vikas Gupta)
Fix bug in the error code path when bnxt_request_irq() returns failure. bnxt_disable_napi() should not be called in this error path because NAPI has not been enabled yet. Fixes: c0c050c58d84 ("bnxt_en: New Broadcom ethernet driver.") Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-09  bnxt_en: Do not modify max IRQ count after RDMA driver requests/frees IRQs.  (Michael Chan)
Calling bnxt_set_max_func_irqs() to modify the max IRQ count requested or freed by the RDMA driver is flawed. The max IRQ count is checked when re-initializing the IRQ vectors, and this can happen multiple times during ifup or ethtool -L. If the max IRQ is reduced and the RDMA driver is operational, we may not initialize IRQs correctly. This problem shows up on VFs with a very small number of MSIX vectors. There is no other logic that relies on the IRQ count excluding the ones used by RDMA. So we fix it by just removing the call to subtract or add the IRQs used by RDMA. Fixes: a588e4580a7e ("bnxt_en: Add interface to support RDMA driver.") Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-09  bnxt_en: Support clearing of the IFF_BROADCAST flag.  (Michael Chan)
Currently, the driver assumes IFF_BROADCAST is always set and always sets the broadcast filter. Modify the code to set or clear the broadcast filter according to the IFF_BROADCAST flag. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-09  bnxt_en: Always set output parameters in bnxt_get_max_rings().  (Michael Chan)
The current code returns -ENOMEM and does not bother to set the output parameters to 0 when no rings are available. Some callers, such as bnxt_get_channels() will display garbage ring numbers when that happens. Fix it by always setting the output parameters. Fixes: 6e6c5a57fbe1 ("bnxt_en: Modify bnxt_get_max_rings() to support shared or non shared rings.") Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
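The general pattern being fixed, as a hedged sketch (the function and parameter names are illustrative, not the driver's):

#include <errno.h>

/* Initialize the output parameters even on the error path, so callers
 * that print or reuse them on failure see 0 rather than stack garbage.
 */
static int get_max_rings_example(int available, int *max_rx, int *max_tx)
{
    *max_rx = 0;
    *max_tx = 0;

    if (available <= 0)
        return -ENOMEM;

    *max_rx = available / 2;
    *max_tx = available - *max_rx;
    return 0;
}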
2018-07-09  bnxt_en: Fix inconsistent BNXT_FLAG_AGG_RINGS logic.  (Michael Chan)
If there aren't enough RX rings available, the driver will attempt to use a single RX ring without the aggregation ring. If that also fails, the BNXT_FLAG_AGG_RINGS flag is cleared but the other ring parameters are not set consistently to reflect that. If more RX rings become available at the next open, the RX rings will be in an inconsistent state and may crash when freeing the RX rings. Fix it by restoring the BNXT_FLAG_AGG_RINGS if not enough RX rings are available to run without aggregation rings. Fixes: bdbd1eb59c56 ("bnxt_en: Handle no aggregation ring gracefully.") Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-09  bnxt_en: Fix the vlan_tci exact match check.  (Venkat Duvvuru)
It is possible that OVS may set the DEI/CFI bit in the vlan_tci mask to don't-care. Hence, checking for an exact vlan_tci match will end up rejecting the vlan flow. This patch fixes the problem by checking for vlan_pcp and vid separately, instead of checking the entire vlan_tci. Fixes: e85a9be93cf1 (bnxt_en: do not allow wildcard matches for L2 flows) Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
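For reference, vlan_tci packs PCP, DEI/CFI and VID into 16 bits; a hedged sketch of checking the two relevant parts separately (the mask values follow the standard 802.1Q layout, the function itself is illustrative):

#include <stdbool.h>
#include <stdint.h>

#define VLAN_PRIO_MASK 0xe000   /* PCP, bits 15..13 */
#define VLAN_CFI_MASK  0x1000   /* DEI/CFI, bit 12  */
#define VLAN_VID_MASK  0x0fff   /* VID, bits 11..0  */

/* Accept the rule if priority and VID are each either fully specified
 * or fully wildcarded, regardless of the DEI/CFI mask bit.
 */
static bool vlan_tci_mask_ok(uint16_t mask)
{
    uint16_t prio = mask & VLAN_PRIO_MASK;
    uint16_t vid  = mask & VLAN_VID_MASK;

    return (prio == 0 || prio == VLAN_PRIO_MASK) &&
           (vid == 0 || vid == VLAN_VID_MASK);
}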