|
ceq and aeq are ring buffers, and their consumer index is set back to zero
after reaching the maximum value. The warning should be removed, or it may
mislead users.
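A minimal sketch of the wraparound in question (the struct and helper below
are illustrative, not the driver's actual types):

struct example_eq {
	unsigned int cons_index;
	unsigned int entries;		/* power of two */
};

static inline void example_eq_advance(struct example_eq *eq)
{
	/* Reaching the maximum value is normal: the index simply wraps to 0. */
	eq->cons_index = (eq->cons_index + 1) & (eq->entries - 1);
}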
Link: https://lore.kernel.org/r/1584674622-52773-8-git-send-email-liweihang@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|
|
The struct hns_roce_v2_cq_db is unused, so it should be removed.
Link: https://lore.kernel.org/r/1584674622-52773-7-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|
|
Interchange SQD and SQE to match the protocol.
Link: https://lore.kernel.org/r/1584674622-52773-6-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|
|
The capabilities of the hardware should be obtained first and then used in
hns_roce_alloc_vf_resource(). Also remove an unnecessary if ... else
condition in it.
Link: https://lore.kernel.org/r/1584674622-52773-5-git-send-email-liweihang@huawei.com
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|
|
Combine attribute flags before masking them.
Link: https://lore.kernel.org/r/1584674622-52773-4-git-send-email-liweihang@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|
|
hns_roce_alloc_mtt_range() never returns -1, so ret should be checked for
being non-zero instead of being compared against -1.
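A hedged sketch of the corrected check (alloc_range() is a stand-in for
hns_roce_alloc_mtt_range(), which returns 0 on success or a negative errno,
never -1 as a sentinel):

static int alloc_range(void);	/* illustrative stand-in */

static int example_caller(void)
{
	int ret = alloc_range();

	if (ret)	/* was: if (ret == -1), which never matched */
		return ret;

	return 0;
}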
Fixes: 1ceb0b11a8a2 ("RDMA/hns: Fix non-standard error codes")
Link: https://lore.kernel.org/r/1584674622-52773-3-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|
|
Use ibdev_err/dbg/warn() instead of dev_err/dbg/warn(), and change some
prints to the format "failed to do something, ret = n".
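A hedged example of the message style (example_post_send() is a stand-in;
ibdev_err() is the in-kernel helper):

#include <rdma/ib_verbs.h>

static int example_post_send(void);	/* illustrative stand-in */

static void example_report(struct ib_device *ibdev)
{
	int ret = example_post_send();

	if (ret)
		ibdev_err(ibdev, "failed to post send wqe, ret = %d.\n", ret);
}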
Link: https://lore.kernel.org/r/1584674622-52773-2-git-send-email-liweihang@huawei.com
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|
|
libiscsi calls the check_protection transport handler only if a SCSI-Response
is received. So the handler is never called if an iSCSI task is completed for
some other reason, such as a timeout or error handling, and this behavior
looks correct. But iSER does not handle this case properly, because it puts a
non-checked signature MR back into the free pool. An error then occurs when
the MR is reused, since invalidating a signature MR without checking it is
not allowed.
This commit adds an extra check to iser_unreg_mem_fastreg(), which is part of
the task cleanup flow. Now the signature MR is checked there if needed.
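A hedged sketch of the idea (the "already_checked" flag stands in for whatever
state iSER keeps about the MR; ib_check_mr_status() is the verbs API for
checking a signature MR):

#include <rdma/ib_verbs.h>

static int example_check_sig_mr(struct ib_mr *sig_mr, bool already_checked)
{
	struct ib_mr_status mr_status;

	if (already_checked)
		return 0;

	return ib_check_mr_status(sig_mr, IB_MR_CHECK_SIG_STATUS, &mr_status);
}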
Link: https://lore.kernel.org/r/20200325151210.1548-1-sergeygo@mellanox.com
Signed-off-by: Sergey Gorenko <sergeygo@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|
|
The RXE driver doesn't set sys_image_guid, so user space applications see
zeros. This causes pyverbs tests to fail with the following traceback,
because the IBTA spec requires a valid sys_image_guid.
Traceback (most recent call last):
File "./tests/test_device.py", line 51, in test_query_device
self.verify_device_attr(attr)
File "./tests/test_device.py", line 74, in verify_device_attr
assert attr.sys_image_guid != 0
In order to fix it, set sys_image_guid to be equal to node_guid.
Before:
5: rxe0: ... node_guid 5054:00ff:feaa:5363 sys_image_guid
0000:0000:0000:0000
After:
5: rxe0: ... node_guid 5054:00ff:feaa:5363 sys_image_guid
5054:00ff:feaa:5363
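A hedged sketch of the fix (example_attr is an illustrative stand-in for the
RXE device attributes, not the real struct):

struct example_attr {
	unsigned long long node_guid;
	unsigned long long sys_image_guid;
};

static void example_set_guids(struct example_attr *attr)
{
	/* a zero sys_image_guid breaks the IBTA expectation, so mirror node_guid */
	attr->sys_image_guid = attr->node_guid;
}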
Fixes: 8700e3e7c485 ("Soft RoCE driver")
Link: https://lore.kernel.org/r/20200323112800.1444784-1-leon@kernel.org
Signed-off-by: Zhu Yanjun <yanjunz@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|
|
The kconfig select causes build failures for the SOC_VIRT config because we
are selecting a lot of VIRTIO drivers without selecting all of their required
dependencies.
A better approach is to only select essential drivers from the SOC_VIRT
config option and enable the required VIRTIO drivers using defconfigs.
Fixes: 759bdc168181 ("RISC-V: Add kconfig option for QEMU virt machine")
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
|
|
The initial policy value set by intel_pstate_cpu_init() depends on
whether or not CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE is set, but
that is not necessary, because the core will set the policy to
"performance" in cpufreq_init_policy() if the default governor is
"performance" anyway.
Accordingly, change intel_pstate_cpu_init() to always set policy
to CPUFREQ_POLICY_POWERSAVE initially to provide a valid fallback
value to cpufreq_init_policy() in case the default cpufreq governor
is neither "powersave" nor "performance".
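A hedged sketch of the described fallback choice (not the driver's actual
init function):

#include <linux/cpufreq.h>

static int example_cpu_init(struct cpufreq_policy *policy)
{
	/* cpufreq_init_policy() will switch this to "performance" if that is
	 * the default governor, so "powersave" is only the fallback value. */
	policy->policy = CPUFREQ_POLICY_POWERSAVE;
	return 0;
}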
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Commit 97a32539b956 ("proc: convert everything to "struct proc_ops"") forgot
to do this conversion for prism2_download_aux_dump_proc_fops.
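A hedged sketch of a conversion of this kind (the handler names are
assumptions; only the struct proc_ops field names follow the API introduced
by 97a32539b956):

#include <linux/proc_fs.h>
#include <linux/seq_file.h>

static int example_open(struct inode *inode, struct file *file);

static const struct proc_ops example_proc_ops = {
	.proc_open	= example_open,		/* was .open in file_operations */
	.proc_read	= seq_read,		/* was .read */
	.proc_lseek	= seq_lseek,		/* was .llseek */
	.proc_release	= seq_release,		/* was .release */
};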
Fixes: 97a32539b956 ("proc: convert everything to "struct proc_ops"")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Link: https://lore.kernel.org/r/20200326032432.20384-1-yuehaibing@huawei.com
|
|
Previously, management packets' sequence numbers would not increase and
always stayed at 0. Add hardware sequence number support for mgmt packets.
The table below shows the different sequence number settings in the tx
descriptor.
seq num ctrl      | EN_HWSEQ | DISQSELSEL | HW_SSN_SEL
------------------+----------+------------+---------------
sw ctrl           |    0     |    N/A     |      N/A
hw ctrl per MACID |    1     |     0      |      N/A
hw ctrl per HWREG |    1     |     1      | HWREG(0/1/2/3)
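As a hedged illustration of the "hw ctrl per HWREG" row above (plain struct,
not the driver's real descriptor layout):

struct example_txdesc {
	unsigned int en_hwseq:1;	/* 1 = hardware controls the sequence number */
	unsigned int disqselsel:1;	/* 1 = per-HWREG instead of per-MACID */
	unsigned int hw_ssn_sel:2;	/* which HWREG (0/1/2/3) holds the SSN */
};

static void example_mgmt_hwseq(struct example_txdesc *d)
{
	d->en_hwseq = 1;
	d->disqselsel = 1;
	d->hw_ssn_sel = 0;
}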
Signed-off-by: Tzu-En Huang <tehuang@realtek.com>
Signed-off-by: Yan-Hsuan Chuang <yhchuang@realtek.com>
Reviewed-by: Brian Norris <briannorris@chromium.org>
Tested-by: Brian Norris <briannorris@chromium.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Link: https://lore.kernel.org/r/20200326020408.25218-1-yhchuang@realtek.com
|
|
A fixed y-axis scale was missed during a change to autoscale.
Correct it.
Fixes: 709bd70d070ee ("tools/power/x86/intel_pstate_tracer: change several graphs to autoscale y-axis")
Signed-off-by: Doug Smythies <dsmythies@telus.net>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211
Johannes Berg says:
====================
We have the following fixes:
* drop data packets if there's no key for them anymore, after
there had been one, to avoid sending them in the clear when
hostapd removes the key before it removes the station and
the packets are still queued
* check port authorization again after dequeue, to avoid
sending packets if the station is no longer authorized
* actually remove the authorization flag before the key so
packets are also dropped properly because of this
* fix nl80211 control port packet tagging to handle them as
packets allowed to go out without encryption
* fix NL80211_ATTR_CHANNEL_WIDTH outgoing netlink attribute
width (should be 32 bits, not 8)
* don't WARN in a CSA scenario that happens on some APs
* fix HE spatial reuse element size calculation
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
finally
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
regularizes things a bit
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
list_for_each_entry_from_reverse() iterates backwards over the list from
the current position, but in the error path we should start from the
previous position.
Fix this by using list_for_each_entry_continue_reverse() instead.
This suppresses the following error from coccinelle:
drivers/net/ethernet/mellanox/mlxsw//spectrum_mr.c:655:34-38: ERROR:
invalid reference to the index variable of the iterator on line 636
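A minimal sketch of the rollback pattern the fix restores
(example_apply()/example_revert() and the list layout are illustrative; the
iterator macros are the kernel's):

#include <linux/list.h>

struct example_item {
	struct list_head node;
};

static int example_apply(struct example_item *it);
static void example_revert(struct example_item *it);

static int example_apply_all(struct list_head *head)
{
	struct example_item *it;
	int err;

	list_for_each_entry(it, head, node) {
		err = example_apply(it);
		if (err)
			goto rollback;
	}
	return 0;

rollback:
	/* continue from the entry *before* the one that failed */
	list_for_each_entry_continue_reverse(it, head, node)
		example_revert(it);
	return err;
}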
Fixes: c011ec1bbfd6 ("mlxsw: spectrum: Add the multicast routing offloading logic")
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
reorder copy_siginfo_to_user() calls a bit
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Similar to the ia32_setup_sigcontext() change several commits ago, make it
__always_inline. In cases where there is a user_access_{begin,end}()
section nearby, just move the call over there.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Ido Schimmel says:
====================
mlxsw: Offload TC action pedit munge dsfield
Petr says:
The Spectrum switches allow packet prioritization based on DSCP on ingress,
and update of DSCP on egress. This is configured through the DCB APP rules.
For some use cases, assigning a custom DSCP value based on an ACL match is
a better tool. To that end, offload FLOW_ACTION_MANGLE to permit changing
of dsfield as a whole, or DSCP and ECN values in isolation.
After fixing a commentary nit in patch #1, and mlxsw naming in patch #2,
patches #3 and #4 add the offload to mlxsw.
Patch #5 adds a forwarding selftest for pedit dsfield, applicable to SW as
well as HW datapaths. Patch #6 adds a mlxsw-specific test to verify DSCP
rewrite due to DCB APP rules is not performed on pedited packets.
The tests only cover IPv4 dsfield setting. We have tests for IPv6 as well,
but would like to postpone their contribution until the corresponding
iproute patches have been accepted.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
When DSCP is updated through an offloaded pedit action, DSCP rewrite on
egress should be disabled. Add a test that checks that this is the case.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add a test that sends packets with dsfield set, and tests that pedit adjusts
the DSCP or ECN parts or the whole field.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Offload action pedit ex munge when used with a flower classifier. Only
allow setting of DSCP, ECN, or the whole DSField in IPv4 and IPv6 packets.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The QOS_ACTION is used for manipulating the QOS attributes of the packet.
Add the defines and helpers related to DSCP and ECN fields, and dscp_rw.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The original idea was to reuse this set of actions for ECN rewrite as well,
but on second look, it's not such a great idea. These two items should each
have its own command. Rename the existing enum to make it obvious that it
belongs to switch_prio_cmd.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This field references FLOW_ACTION_PACKET_EDIT. Such an action does not exist,
though. Instead the field is used for FLOW_ACTION_MANGLE and _ADD.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In qlcnic_83xx_get_reset_instruction_template(), the null test checks the
wrong variable, so correct it.
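A hedged illustration of the bug class only (names are not the driver's): the
null test must check the pointer that was just allocated.

#include <linux/errno.h>
#include <linux/slab.h>

static int example_load(void **bufp, size_t size)
{
	void *buf = kzalloc(size, GFP_KERNEL);

	if (!buf)	/* previously a different, already-valid pointer was tested */
		return -ENOMEM;

	*bufp = buf;
	return 0;
}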
Signed-off-by: Xu Wang <vulab@iscas.ac.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Straightforward, except for save_altstack_ex() stuck in those.
Replace that thing with an analogue that would use unsafe_put_user()
instead of put_user_ex() (called compat_save_altstack()) and be done
with that.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
__copy_siginfo_to_user32() call reordered a bit. The rest folds
nicely.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Currently we have a user_access block, followed by __put_user(), then
deciding what the restorer will be, and finally a put_user_try block.
Moving the calculation of the restorer first allows the rest (the actual
copyout work) to coalesce into a single user_access block.
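A hedged sketch of the resulting shape (the frame layout and
example_pick_restorer() are illustrative; user_access_begin(),
unsafe_put_user() and user_access_end() are the kernel primitives):

#include <linux/errno.h>
#include <linux/uaccess.h>

struct example_frame {
	void __user *pretcode;
	int sig;
};

static void __user *example_pick_restorer(void);

static int example_setup_frame(struct example_frame __user *frame, int sig)
{
	void __user *restorer = example_pick_restorer();	/* decided up front */

	if (!user_access_begin(frame, sizeof(*frame)))
		return -EFAULT;
	unsafe_put_user(restorer, &frame->pretcode, Efault);
	unsafe_put_user(sig, &frame->sig, Efault);
	user_access_end();
	return 0;

Efault:
	user_access_end();
	return -EFAULT;
}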
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5-updates-2020-03-25
1) Cleanups from Dan Carpenter and wenxu.
2) Paul and Roi, Some minor updates and fixes to E-Switch to address
issues introduced in the previous reg_c0 updates series.
3) Eli Cohen simplifies and improves flow steering matching group searches
and flow table entries version management.
4) Parav Pandit, improves devlink eswitch mode changes thread safety.
By making devlink rely on driver for thread safety and introducing mlx5
eswitch mode change protection.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
What's left is just a sequence of stores to userland addresses, with all
error handling, etc. done out of line. Calling that from a user_access block
is safe, but rather than teaching objtool to recognize it as such we can
just make it always_inline - it is small enough and has few enough callers
for the space savings not to be an issue.
Rename the sucker to __unsafe_setup_sigcontext32() and provide
unsafe_put_sigcontext32() with the usual kind of semantics.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
drivers/net/ethernet/atheros/atlx/atl2.c:40:19: warning: ‘atl2_driver_string’ defined but not used [-Wunused-const-variable=]
static const char atl2_driver_string[] = "Atheros(R) L2 Ethernet Driver";
^~~~~~~~~~~~~~~~~~
Commit ea973742140b ("net/atheros: Clean atheros code from driver version")
left this behind; remove it.
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Commit f73b12812a3d ("tipc: improve throughput between nodes in netns") is
missing a check to handle the TIPC_DIRECT_MSG type, so the old sending
mechanism is still used for this message type and the throughput improvement
is not as significant as expected.
Besides that, when sending a large message of that type, the wrong receiving
queue is also used: it should be enqueued in the socket receive queue instead
of the multicast message queue.
Fix this by adding the missing case for TIPC_DIRECT_MSG.
Fixes: f73b12812a3d ("tipc: improve throughput between nodes in netns")
Reported-by: Tuong Lien <tuong.t.lien@dektech.com.au>
Signed-off-by: Hoang Le <hoang.h.le@dektech.com.au>
Acked-by: Jon Maloy <jmaloy@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Since snprintf() returns the would-be-output size instead of the actual
output size, the succeeding calls may go beyond the given buffer limit.
Fix it by replacing with scnprintf().
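For illustration, snprintf() returns the would-be length, so "len" can exceed
"size" and the next call's "size - len" underflows, while scnprintf() returns
only what was actually written:

#include <linux/kernel.h>

static int example_fill(char *buf, size_t size)
{
	int len = 0;

	len += scnprintf(buf + len, size - len, "first: %d\n", 1);
	len += scnprintf(buf + len, size - len, "second: %d\n", 2);
	return len;
}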
Link: https://lore.kernel.org/r/20200319154641.23711-1-tiwai@suse.de
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|
|
After a successful allocation of path_rec, num_paths is set to 1, but any
error after that allocation leaves num_paths uncleared. This causes a NULL
pointer dereference later on. Hence, num_paths needs to be set back to 0 if
such an error occurs.
The following crash from syzkaller revealed it.
kasan: CONFIG_KASAN_INLINE enabled
kasan: GPF could be caused by NULL-ptr deref or user memory access
general protection fault: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN PTI
CPU: 0 PID: 357 Comm: syz-executor060 Not tainted 4.18.0+ #311
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
rel-1.11.0-0-g63451fca13-prebuilt.qemu-project.org 04/01/2014
RIP: 0010:ib_copy_path_rec_to_user+0x94/0x3e0
Code: f1 f1 f1 f1 c7 40 0c 00 00 f4 f4 65 48 8b 04 25 28 00 00 00 48 89
45 c8 31 c0 e8 d7 60 24 ff 48 8d 7b 4c 48 89 f8 48 c1 e8 03 <42> 0f b6
14 30 48 89 f8 83 e0 07 83 c0 03 38 d0 7c 08 84 d2 0f 85
RSP: 0018:ffff88006586f980 EFLAGS: 00010207
RAX: 0000000000000009 RBX: 0000000000000000 RCX: 1ffff1000d5fe475
RDX: ffff8800621e17c0 RSI: ffffffff820d45f9 RDI: 000000000000004c
RBP: ffff88006586fa50 R08: ffffed000cb0df73 R09: ffffed000cb0df72
R10: ffff88006586fa70 R11: ffffed000cb0df73 R12: 1ffff1000cb0df30
R13: ffff88006586fae8 R14: dffffc0000000000 R15: ffff88006aff2200
FS: 00000000016fc880(0000) GS:ffff88006d000000(0000)
knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020000040 CR3: 0000000063fec000 CR4: 00000000000006b0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
? ib_copy_path_rec_from_user+0xcc0/0xcc0
? __mutex_unlock_slowpath+0xfc/0x670
? wait_for_completion+0x3b0/0x3b0
? ucma_query_route+0x818/0xc60
ucma_query_route+0x818/0xc60
? ucma_listen+0x1b0/0x1b0
? sched_clock_cpu+0x18/0x1d0
? sched_clock_cpu+0x18/0x1d0
? ucma_listen+0x1b0/0x1b0
? ucma_write+0x292/0x460
ucma_write+0x292/0x460
? ucma_close_id+0x60/0x60
? sched_clock_cpu+0x18/0x1d0
? sched_clock_cpu+0x18/0x1d0
__vfs_write+0xf7/0x620
? ucma_close_id+0x60/0x60
? kernel_read+0x110/0x110
? time_hardirqs_on+0x19/0x580
? lock_acquire+0x18b/0x3a0
? finish_task_switch+0xf3/0x5d0
? _raw_spin_unlock_irq+0x29/0x40
? _raw_spin_unlock_irq+0x29/0x40
? finish_task_switch+0x1be/0x5d0
? __switch_to_asm+0x34/0x70
? __switch_to_asm+0x40/0x70
? security_file_permission+0x172/0x1e0
vfs_write+0x192/0x460
ksys_write+0xc6/0x1a0
? __ia32_sys_read+0xb0/0xb0
? entry_SYSCALL_64_after_hwframe+0x3e/0xbe
? do_syscall_64+0x1d/0x470
do_syscall_64+0x9e/0x470
entry_SYSCALL_64_after_hwframe+0x49/0xbe
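The shape of the fix, as a hedged sketch (names are illustrative, not ucma's
exact code):

#include <linux/errno.h>
#include <linux/slab.h>

struct example_resp {
	unsigned int num_paths;
};

static int example_next_step(void);

static int example_query(struct example_resp *resp)
{
	void *path_rec;
	int ret;

	path_rec = kmalloc(64, GFP_KERNEL);
	if (!path_rec)
		return -ENOMEM;
	resp->num_paths = 1;

	ret = example_next_step();
	if (ret) {
		resp->num_paths = 0;	/* don't leave a stale count on error */
		kfree(path_rec);
	}
	return ret;
}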
Fixes: 3c86aa70bf67 ("RDMA/cm: Add RDMA CM support for IBoE devices")
Link: https://lore.kernel.org/r/20200318101741.47211-1-leon@kernel.org
Signed-off-by: Avihai Horon <avihaih@mellanox.com>
Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|
|
Pull rdma fixes from Jason Gunthorpe:
"A small set of late-rc patches, mostly fixes for various crashers,
some syzkaller fixes and a mlx5 HW limitation:
- Several MAINTAINERS updates
- Memory leak regression in ODP
- Several fixes for syzkaller related crashes. Google recently taught
syzkaller to create the software RDMA devices
- Crash fixes for HFI1
- Several fixes for mlx5 crashes
- Prevent unprivileged access to an unsafe mlx5 HW resource"
* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma:
RDMA/mlx5: Block delay drop to unprivileged users
RDMA/mlx5: Fix access to wrong pointer while performing flush due to error
RDMA/core: Ensure security pkey modify is not lost
MAINTAINERS: Clean RXE section and add Zhu as RXE maintainer
IB/hfi1: Ensure pq is not left on waitlist
IB/rdmavt: Free kernel completion queue when done
RDMA/mad: Do not crash if the rdma device does not have a umad interface
RDMA/core: Fix missing error check on dev_set_name()
RDMA/nl: Do not permit empty devices names during RDMA_NLDEV_CMD_NEWLINK/SET
RDMA/mlx5: Fix the number of hwcounters of a dynamic counter
MAINTAINERS: Update maintainers for HISILICON ROCE DRIVER
RDMA/odp: Fix leaking the tgid for implicit ODP
|
|
hmm_range_fault() will succeed for any kind of device private memory, even
if it doesn't belong to the calling entity. While nouveau has some crude
checks for that, they are broken because they assume nouveau is the only
user of device private memory. Fix this by passing in an expected pgmap
owner in the hmm_range_fault structure.
If a device_private page is found and doesn't match the owner then it is
treated as a non-present and non-faultable page.
This prevents a bug in amdgpu, where it doesn't know how to handle
device_private pages, but hmm_range_fault would return them anyhow.
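A hedged caller sketch (the dev_private_owner field name is taken to be the
one this series adds):

#include <linux/hmm.h>

static void example_set_owner(struct hmm_range *range, void *my_driver)
{
	/* device private pages whose pgmap->owner differs are treated as
	 * non-present / non-faultable */
	range->dev_private_owner = my_driver;
}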
Fixes: 4ef589dc9b10 ("mm/hmm/devmem: device memory hotplug using ZONE_DEVICE")
Link: https://lore.kernel.org/r/20200316193216.920734-5-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|
|
Remove the HMM_PFN_DEVICE_PRIVATE flag, no driver has ever set this flag
on input, and the only place that uses it on output can be trivially
changed to use is_device_private_page().
This removes the ability to request that device_private pages are faulted
back into system memory.
Link: https://lore.kernel.org/r/20200316193216.920734-4-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|
|
Add a new src_owner field to struct migrate_vma. If the field is set,
only device private pages with page->pgmap->owner equal to that field are
migrated. If the field is not set only "normal" pages are migrated.
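A hedged sketch of a driver restricting a migration to its own device private
pages ("my_pgmap_owner" is whatever pointer the driver stored in its
pgmap->owner):

#include <linux/migrate.h>

static int example_migrate(struct vm_area_struct *vma, unsigned long start,
			   unsigned long end, unsigned long *src,
			   unsigned long *dst, void *my_pgmap_owner)
{
	struct migrate_vma args = {
		.vma		= vma,
		.start		= start,
		.end		= end,
		.src		= src,
		.dst		= dst,
		.src_owner	= my_pgmap_owner,
	};

	return migrate_vma_setup(&args);
}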
Fixes: df6ad69838fc ("mm/device-public-memory: device memory cache coherent with CPU")
Link: https://lore.kernel.org/r/20200316193216.920734-3-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Tested-by: Bharata B Rao <bharata@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|
|
Add a new opaque owner field to struct dev_pagemap, which will allow the
hmm and migrate_vma code to identify who owns ZONE_DEVICE memory, and
refuse to work on mappings not owned by the calling entity.
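A hedged sketch of a ZONE_DEVICE user tagging its pagemap (the owner is an
opaque cookie, typically the driver's own device structure):

#include <linux/device.h>
#include <linux/memremap.h>

static void *example_add_memory(struct device *dev, struct dev_pagemap *pgmap,
				void *my_driver)
{
	pgmap->type = MEMORY_DEVICE_PRIVATE;
	pgmap->owner = my_driver;	/* compared by hmm and migrate_vma */
	return devm_memremap_pages(dev, pgmap);
}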
Link: https://lore.kernel.org/r/20200316193216.920734-2-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Tested-by: Bharata B Rao <bharata@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|
|
There is no good reason for this split, as it just obfuscates the flow.
Link: https://lore.kernel.org/r/20200316135310.899364-6-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|
|
Setting a pfns entry to NONE before returning -EBUSY is a bug that will
cause corruption of the input flags on the next loop.
There is just a single caller using hmm_vma_walk_hole_() for the non-fault
case. Use hmm_pfns_fill() to fill the whole pfn array with zeroes in the
only caller for the non-fault case and remove the non-fault path from
hmm_vma_walk_hole_(). This avoids setting NONE before returning -EBUSY.
Also rename the function to hmm_vma_fault() to better describe what it
does.
Fixes: 2aee09d8c116 ("mm/hmm: change hmm_vma_fault() to allow write fault on page basis")
Link: https://lore.kernel.org/r/20200316135310.899364-5-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|
|
Remove the rather confusing goto label and just handle the fault case
directly in the branch checking for it.
Link: https://lore.kernel.org/r/20200316135310.899364-4-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|
|
The HMM_FAULT_ALLOW_RETRY isn't used anywhere in the tree. Remove it and
the weird -EAGAIN handling where handle_mm_fault() drops the mmap_sem.
Link: https://lore.kernel.org/r/20200316135310.899364-3-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|
|
All callers of hmm_range_fault depend on CONFIG_HMM_MIRROR, so don't
bother with a stub.
Link: https://lore.kernel.org/r/20200316135310.899364-2-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|
|
pmd_to_hmm_pfn_flags() already checks it and makes the cpu flags 0. If no
fault is requested then the pfns should be returned with the not valid
flags.
It should not unconditionally fault if faulting is not requested.
Fixes: 2aee09d8c116 ("mm/hmm: change hmm_vma_fault() to allow write fault on page basis")
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
|