2016-01-04  Bluetooth: hci_bcm: new ACPI IDs  (Heikki Krogerus)
These are used at least by Acer with BCM43241. Signed-off-by: Heikki Krogerus <heikki.krogerus@linux.intel.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2016-01-04  Bluetooth: hci_bcm: move all Broadcom ACPI IDs to BCM HCI driver  (Heikki Krogerus)
The IDs should all be for Broadcom BCM43241 module, and hci_bcm is now the proper driver for them. This removes one of two different ways of handling PM with the module. Cc: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: Heikki Krogerus <heikki.krogerus@linux.intel.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2016-01-04  Merge branch 'pnfs_generic'  (Trond Myklebust)
* pnfs_generic:
  NFSv4.1/pNFS: Cleanup constify struct pnfs_layout_range arguments
  NFSv4.1/pnfs: Cleanup copying of pnfs_layout_range structures
  NFSv4.1/pNFS: Cleanup pnfs_mark_matching_lsegs_invalid()
  NFSv4.1/pNFS: Fix a race in initiate_file_draining()
  NFSv4.1/pNFS: pnfs_error_mark_layout_for_return() must always return layout
  NFSv4.1/pNFS: pnfs_mark_matching_lsegs_return() should set the iomode
  NFSv4.1/pNFS: Use nfs4_stateid_copy for copying stateids
  NFSv4.1/pNFS: Don't pass stateids by value to pnfs_send_layoutreturn()
  NFS: Relax requirements in nfs_flush_incompatible
  NFSv4.1/pNFS: Don't queue up a new commit if the layout segment is invalid
  NFS: Allow multiple commit requests in flight per file
  NFS/pNFS: Fix up pNFS write reschedule layering violations and bugs
  NFSv4: List stateid information in the callback tracepoints
  NFSv4.1/pNFS: Don't return NFS4ERR_DELAY unnecessarily in CB_LAYOUTRECALL
  NFSv4.1/pNFS: Ensure we enforce RFC5661 Section 12.5.5.2.1
  pNFS: If we have to delay the layout callback, mark the layout for return
  NFSv4.1/pNFS: Add a helper to mark the layout as returned
  pNFS: Ensure nfs4_layoutget_prepare returns the correct error
2016-01-04  NFSv4.1/pNFS: Cleanup constify struct pnfs_layout_range arguments  (Trond Myklebust)
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2016-01-04  NFSv4.1/pnfs: Cleanup copying of pnfs_layout_range structures  (Trond Myklebust)
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2016-01-04  NFSv4.1/pNFS: Cleanup pnfs_mark_matching_lsegs_invalid()  (Trond Myklebust)
Make it more obvious what we're returning... Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2016-01-04  NFSv4.1/pNFS: Fix a race in initiate_file_draining()  (Trond Myklebust)
Peng Tao points out that the call to pnfs_mark_matching_lsegs_return() could race with pnfs_put_lseg(), in which case the layout segment is cleared, but no layoutreturn will be sent. Fix is to replace the call to pnfs_mark_matching_lsegs_invalid(). Reported-by: Peng Tao <tao.peng@primarydata.com> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2016-01-04  NFSv4.1/pNFS: pnfs_error_mark_layout_for_return() must always return layout  (Trond Myklebust)
Fix a bug whereby if all the layout segments could be immediately freed, the call to pnfs_error_mark_layout_for_return() would never result in a layoutreturn. Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2016-01-04  NFSv4.1/pNFS: pnfs_mark_matching_lsegs_return() should set the iomode  (Trond Myklebust)
If pnfs_mark_matching_lsegs_return() needs to mark a layout segment for return, then it must also set the return iomode. Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2016-01-04  NFSv4.1/pNFS: Use nfs4_stateid_copy for copying stateids  (Trond Myklebust)
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2016-01-04  NFSv4.1/pNFS: Don't pass stateids by value to pnfs_send_layoutreturn()  (Trond Myklebust)
A stateid is a structure, pass it as a pointer. Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2016-01-04  drm/radeon: Drop unnecessary unsigned int < 0 check  (Thierry Reding)
Unsigned integers can never be negative, so drop this check. Cc: Christian König <christian.koenig@amd.com> Cc: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
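As a quick illustration of why such a check is dead code (a standalone sketch, not the radeon code): an unsigned integer compared against 0 with "<" can never be true, and GCC flags it with -Wtype-limits.

    #include <stdio.h>

    int main(void)
    {
        unsigned int fence_seq = 0;   /* hypothetical unsigned value */

        /* Always false: an unsigned value cannot be negative.
         * GCC warns here with -Wtype-limits ("comparison is always false
         * due to limited range of data type"). */
        if (fence_seq < 0)
            printf("never reached\n");
        else
            printf("the check is dead code and can be dropped\n");

        return 0;
    }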
2016-01-04  xen/blkfront: Fix crash if backend doesn't follow the right states.  (Konrad Rzeszutek Wilk)
We have split the setting up of all the resources in two steps: 1) talk_to_blkback - which figures out the num_ring_pages (from the default value of zero), sets up the shadow structures, and so on. 2) blkfront_connect - does the real part of filling out the internal structures. The problem is if we bypass step 1) and go straight to 2), calling blkfront_setup_indirect, where we use the macro BLK_RING_SIZE - which returns a negative value (because sz is zero - since num_ring_pages is zero - since it has never been set). We can fix this by making sure that we always have called talk_to_blkback before going to blkfront_connect. Or we could set info->nr_ring_pages = 1 in blkfront_probe to have a default value. But that looks odd - as we haven't actually negotiated any ring size. This patch changes the XenbusStateConnected state handling to detect if we haven't done the initial handshake - and if so, continue on as if we were in the XenbusStateInitWait state. We also roll the error recovery (freeing the structure) into the talk_to_blkback error path - which is safe since that function is only called from blkback_changed. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2016-01-04  xen/blkback: Fix two memory leaks.  (Bob Liu)
This patch fixes two memleaks:
backtrace:
  [<ffffffff817ba5e8>] kmemleak_alloc+0x28/0x50
  [<ffffffff81205e3b>] kmem_cache_alloc+0xbb/0x1d0
  [<ffffffff81534028>] xen_blkbk_probe+0x58/0x230
  [<ffffffff8146adb6>] xenbus_dev_probe+0x76/0x130
  [<ffffffff81511716>] driver_probe_device+0x166/0x2c0
  [<ffffffff815119bc>] __device_attach_driver+0xac/0xb0
  [<ffffffff8150fa57>] bus_for_each_drv+0x67/0x90
  [<ffffffff81511ab7>] __device_attach+0xc7/0x120
  [<ffffffff81511b23>] device_initial_probe+0x13/0x20
  [<ffffffff8151059a>] bus_probe_device+0x9a/0xb0
  [<ffffffff8150f0a1>] device_add+0x3b1/0x5c0
  [<ffffffff8150f47e>] device_register+0x1e/0x30
  [<ffffffff8146a9e8>] xenbus_probe_node+0x158/0x170
  [<ffffffff8146abaf>] xenbus_dev_changed+0x1af/0x1c0
  [<ffffffff8146b1bb>] backend_changed+0x1b/0x20
  [<ffffffff81468ca6>] xenwatch_thread+0xb6/0x160
unreferenced object 0xffff880007ba8ef8 (size 224):
backtrace:
  [<ffffffff817ba5e8>] kmemleak_alloc+0x28/0x50
  [<ffffffff81205c73>] __kmalloc+0xd3/0x1e0
  [<ffffffff81534d87>] frontend_changed+0x2c7/0x580
  [<ffffffff8146af12>] xenbus_otherend_changed+0xa2/0xb0
  [<ffffffff8146b2c0>] frontend_changed+0x10/0x20
  [<ffffffff81468ca6>] xenwatch_thread+0xb6/0x160
  [<ffffffff810d3e97>] kthread+0xd7/0xf0
  [<ffffffff817c4a9f>] ret_from_fork+0x3f/0x70
  [<ffffffffffffffff>] 0xffffffffffffffff
unreferenced object 0xffff8800048dcd38 (size 224):
The first leak is caused by not putting the be->blkif reference which we had gotten in xen_blkif_alloc(), while the second is us not freeing blkif->rings in the right place. Signed-off-by: Bob Liu <bob.liu@oracle.com> Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2016-01-04  xen/blkback: make st_ statistics per ring  (Bob Liu)
Make the st_* statistics per ring, and have the VBD sysfs code iterate over all the rings. Note: xenvbd_sysfs_delif() is called in xen_blkbk_remove() before all rings are torn down, so it's safe. Signed-off-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> --- v2: Aligned the variables on the same column.
2016-01-04  xen/blkfront: Handle non-indirect grant with 64KB pages  (Julien Grall)
The minimal size of a request in the block framework is always PAGE_SIZE. This means that when 64KB guests are supported, a request will be at least 64KB. However, if the backend doesn't support indirect descriptors (such as QDISK in QEMU), a ring request is only able to accommodate 11 segments of 4KB (i.e. 44KB). The current frontend assumes that an I/O request will always fit in a ring request. This is no longer true when using 64KB page granularity, and the frontend will therefore crash during boot. On ARM64, the ABI is completely neutral to the page granularity used by the domU. The guest has the choice between the different page granularities supported by the processor (for instance on ARM64: 4KB, 16KB, 64KB). This can't be enforced by the hypervisor, and it's therefore possible to run guests using different page granularities. So we can't mandate that the block backend support indirect descriptors when the frontend is using 64KB page granularity, and we have to fix this properly in the frontend. The solution below is based on modifying the frontend guest directly rather than asking the block framework to support smaller sizes (i.e. < PAGE_SIZE), because the changes in the block framework are not trivial, as everything seems to rely on a struct page (see [1]). It may still be possible for someone to do this in the future, and we would then be able to use it. Given that a block request may not fit in a single ring request, a second ring request is introduced for the data that cannot fit in the first one. This means that the second ring request should never be used on Linux if the page size is smaller than 44KB. To support the extra ring request, the block queue size is divided by two. Therefore, the ring will always contain enough space to accommodate 2 ring requests. While this will reduce the overall performance, it makes the implementation more contained. The way forward to get better performance is to implement either indirect descriptors or a multi-grant ring in the backend. Note that the blk_queue_max_* helpers haven't been updated. The block code will set the minimum size supported, and we may be able to directly support any change in the block framework that lowers the minimal size of a request. [1] http://lists.xen.org/archives/html/xen-devel/2015-08/msg02200.html Signed-off-by: Julien Grall <julien.grall@citrix.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
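A back-of-the-envelope check of the 44KB limit described above, as a standalone sketch (the constant names mirror the Xen block interface, where BLKIF_MAX_SEGMENTS_PER_REQUEST is 11; the rest of the program is illustrative):

    #include <stdio.h>

    #define XEN_PAGE_SIZE                   4096  /* Xen grants are always 4KB */
    #define BLKIF_MAX_SEGMENTS_PER_REQUEST  11    /* segments in one non-indirect ring request */

    int main(void)
    {
        unsigned int guest_page_size = 64 * 1024;  /* 64KB domU page granularity */
        unsigned int per_ring_req = XEN_PAGE_SIZE * BLKIF_MAX_SEGMENTS_PER_REQUEST;

        printf("one ring request carries at most %u bytes (44KB)\n", per_ring_req);

        /* A single 64KB block-layer request no longer fits, so two ring
         * requests are needed, and the queue depth is halved to keep room. */
        unsigned int ring_reqs_needed = (guest_page_size + per_ring_req - 1) / per_ring_req;
        printf("a %u-byte request needs %u ring requests\n",
               guest_page_size, ring_reqs_needed);

        return 0;
    }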
2016-01-04  xen-blkfront: Introduce blkif_ring_get_request  (Julien Grall)
The code to get a request is always the same. Therefore we can factor it out into a single function. Signed-off-by: Julien Grall <julien.grall@citrix.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2016-01-04  xen-blkback: clear PF_NOFREEZE for xen_blkif_schedule()  (Jiri Kosina)
The xen_blkif_schedule() kthread calls try_to_freeze() at the beginning of every attempt to purge the LRU. This operation can't ever succeed though, as the kthread hasn't marked itself as freezable. Before (hopefully eventually) kthread freezing gets converted to filesystem freezing, we'd rather mark xen_blkif_schedule() freezable (as it can generate I/O during suspend). Signed-off-by: Jiri Kosina <jkosina@suse.cz> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
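For context, the usual idiom for a kthread that wants try_to_freeze() to do anything is to call set_freezable() first, which clears PF_NOFREEZE. A generic sketch of that pattern follows; it is not the blkback code, and example_thread_fn is a made-up name:

    #include <linux/kthread.h>
    #include <linux/freezer.h>
    #include <linux/sched.h>

    static int example_thread_fn(void *arg)
    {
        /* Without this, the thread keeps PF_NOFREEZE and the
         * try_to_freeze() below is a no-op. */
        set_freezable();

        while (!kthread_should_stop()) {
            /* Park here during suspend/hibernation if requested. */
            try_to_freeze();

            /* ... do the periodic work (e.g. purge an LRU) ... */
            schedule_timeout_interruptible(HZ);
        }
        return 0;
    }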
2016-01-04  xen/blkback: Free resources if connect_ring failed.  (Konrad Rzeszutek Wilk)
With the multi-queue support we could fail at setting up some of the rings and fail the connection. That meant that all resources tied to rings[0..n-1] (where n is the ring that failed to be set up) would not be freed. Eventually the frontend will switch states and we will call xen_blkif_disconnect. However we do not want to be at the mercy of the frontend deciding when to change states. This allows us to do the cleanup right away and free the resources. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2016-01-04  xen/blocks: Return -EXX instead of -1  (Konrad Rzeszutek Wilk)
Let's return sensible error values instead of -1. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2016-01-04  xen/blkback: make pool of persistent grants and free pages per-queue  (Bob Liu)
Make pool of persistent grants and free pages per-queue/ring instead of per-device to get better scalability. Test was done based on null_blk driver:
dom0: v4.2-rc8 16vcpus 10GB "modprobe null_blk"
domu: v4.2-rc8 16vcpus 10GB
[test]
rw=read
direct=1
ioengine=libaio
bs=4k
time_based
runtime=30
filename=/dev/xvdb
numjobs=16
iodepth=64
iodepth_batch=64
iodepth_batch_complete=64
group_reporting
Results:
  iops1: After patch "xen/blkfront: make persistent grants per-queue".
  iops2: After this patch.
Queues:        1      4            8            16
Iops orig(k):  810    1064         780          700
Iops1(k):      810    1230(~20%)   1024(~20%)   850(~20%)
Iops2(k):      810    1410(~35%)   1354(~75%)   1440(~100%)
With 4 queues after this commit we can get ~75% increase in IOPS, and performance won't drop if increasing queue numbers. Please find the respective chart in this link: https://www.dropbox.com/s/agrcy2pbzbsvmwv/iops.png?dl=0 Signed-off-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2016-01-04  xen/blkback: get the number of hardware queues/rings from blkfront  (Bob Liu)
The backend advertises "multi-queue-max-queues" to the frontend, and reads the negotiated number back from the "multi-queue-num-queues" key written by blkfront. Signed-off-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2016-01-04  xen/blkback: pseudo support for multi hardware queues/rings  (Konrad Rzeszutek Wilk)
Preparatory patch for multiple hardware queues (rings). The number of rings is unconditionally set to 1; a larger number will be enabled in "xen/blkback: get the number of hardware queues/rings from blkfront". Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> --- v2: Align variables in the structures.
2016-01-04  xen/blkback: separate ring information out of struct xen_blkif  (Bob Liu)
Split the per-ring information out into a new structure, "xen_blkif_ring", so that one vbd device can be associated with one or more rings/hardware queues. Introduce 'pers_gnts_lock' to protect the pool of persistent grants, since we may have multiple backend threads. This patch is a preparation for supporting multiple hardware queues/rings. Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> --- v2: Align the variables in the structure.
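Roughly, the split looks like the simplified sketch below. The struct and field names here are illustrative only (the real definitions live in drivers/block/xen-blkback/common.h and carry many more fields); the point is that per-ring state, including the persistent-grant pool and its new lock, moves out of the per-device struct:

    #include <linux/spinlock.h>
    #include <linux/list.h>
    #include <linux/sched.h>
    #include <xen/interface/xen.h>   /* domid_t */

    struct xen_vbd;          /* the virtual block device (real struct in blkback) */
    struct example_blkif;    /* forward declaration for the back-pointer */

    struct example_blkif_ring {
        spinlock_t pers_gnts_lock;        /* protects this ring's persistent-grant pool */
        struct list_head persistent_gnts;
        unsigned int irq;
        struct task_struct *xenblkd;      /* one I/O kthread per ring */
        struct example_blkif *blkif;      /* back-pointer to the owning device */
    };

    struct example_blkif {
        domid_t domid;
        struct xen_vbd *vbd;              /* device-wide state stays here */
        unsigned int nr_rings;
        struct example_blkif_ring *rings; /* array of nr_rings entries */
    };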
2016-01-04  xen/blkfront: correct setting for xen_blkif_max_ring_order  (Peng Fan)
According to this piece of code: " pr_info("Invalid max_ring_order (%d), will use default max: %d.\n", xen_blkif_max_ring_order, XENBUS_MAX_RING_GRANT_ORDER); " if xen_blkif_max_ring_order is bigger than XENBUS_MAX_RING_GRANT_ORDER, xen_blkif_max_ring_order needs to be set to XENBUS_MAX_RING_GRANT_ORDER, not 0. Signed-off-by: Peng Fan <van.freenix@gmail.com> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: David Vrabel <david.vrabel@citrix.com> Cc: "Roger Pau Monné" <roger.pau@citrix.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
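In other words, an out-of-range module parameter should be clamped to the advertised maximum rather than reset to zero. A minimal standalone illustration of that pattern (the values and the userspace framing are made up; the real check sits in xen-blkfront's init path):

    #include <stdio.h>

    #define XENBUS_MAX_RING_GRANT_ORDER 4   /* stand-in for the real limit */

    static unsigned int max_ring_order = 7; /* e.g. a bogus value from the command line */

    int main(void)
    {
        if (max_ring_order > XENBUS_MAX_RING_GRANT_ORDER) {
            printf("Invalid max_ring_order (%u), will use default max: %u.\n",
                   max_ring_order, XENBUS_MAX_RING_GRANT_ORDER);
            /* Clamp to the maximum instead of falling back to 0. */
            max_ring_order = XENBUS_MAX_RING_GRANT_ORDER;
        }
        printf("using ring order %u\n", max_ring_order);
        return 0;
    }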
2016-01-04  xen/blkfront: make persistent grants pool per-queue  (Bob Liu)
Make persistent grants per-queue/ring instead of per-device, so that we can drop the 'dev_lock' and get better scalability. Test was done based on null_blk driver:
dom0: v4.2-rc8 16vcpus 10GB "modprobe null_blk"
domu: v4.2-rc8 16vcpus 10GB
[test]
rw=read
direct=1
ioengine=libaio
bs=4k
time_based
runtime=30
filename=/dev/xvdb
numjobs=16
iodepth=64
iodepth_batch=64
iodepth_batch_complete=64
group_reporting
Queues:           1      4            8            16
Iops orig(k):     810    1064         780          700
Iops patched(k):  810    1230(~20%)   1024(~20%)   850(~20%)
Signed-off-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2016-01-04  xen/blkfront: Remove duplicate setting of ->xbdev.  (Bob Liu)
We do the same exact operations a bit earlier in the function. Signed-off-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2016-01-04  xen/blkfront: Cleanup of comments, fix unaligned variables, and syntax errors.  (Konrad Rzeszutek Wilk)
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2016-01-04  xen/blkfront: negotiate number of queues/rings to be used with backend  (Bob Liu)
The max number of hardware queues for xen/blkfront is set by the parameter 'max_queues' (default 4), while it is also capped by the max value that xen/blkback exposes through the XenStore key 'multi-queue-max-queues'. The negotiated number is the smaller of the two and is written back to XenStore as "multi-queue-num-queues"; blkback needs to read this negotiated number. Signed-off-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
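A rough sketch of such a negotiation on the frontend side is below. This is assumed, simplified code rather than the actual patch; the XenStore key names are the ones quoted above, and negotiate_nr_rings is a made-up helper:

    #include <linux/kernel.h>
    #include <xen/xenbus.h>

    /* Sketch: pick min(frontend limit, backend limit) and publish the result. */
    static unsigned int negotiate_nr_rings(struct xenbus_device *dev,
                                           unsigned int xen_blkif_max_queues)
    {
        unsigned int backend_max = 1;   /* assume a backend without MQ support */
        unsigned int nr_rings;
        int err;

        err = xenbus_scanf(XBT_NIL, dev->otherend,
                           "multi-queue-max-queues", "%u", &backend_max);
        if (err != 1)
            backend_max = 1;

        nr_rings = min(backend_max, xen_blkif_max_queues);

        /* Tell the backend how many rings we will actually use. */
        xenbus_printf(XBT_NIL, dev->nodename,
                      "multi-queue-num-queues", "%u", nr_rings);
        return nr_rings;
    }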
2016-01-04  xen/blkfront: split per device io_lock  (Bob Liu)
After the patch "xen/blkfront: separate per ring information out of device info", per-ring data is protected by a per-device lock ('io_lock'). This is not a good fit and will affect scalability, so introduce a per-ring lock ('ring_lock'). The old 'io_lock' is renamed to 'dev_lock'; it protects the ->grants list and ->persistent_gnts_c, which are shared by all rings. Note that in 'blkfront_probe' the 'blkfront_info' is set up via kzalloc, so setting ->persistent_gnts_c to zero is not needed. Signed-off-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2016-01-04  xen/blkfront: pseudo support for multi hardware queues/rings  (Bob Liu)
Preparatory patch for multiple hardware queues (rings). The number of rings is unconditionally set to 1; a larger number will be enabled in the patch "xen/blkfront: negotiate number of queues/rings to be used with backend", so as to make review easier. Note that blkfront_gather_backend_features does not call blkfront_setup_indirect anymore (as that needs to be done per ring). That means that in blkif_recover/blkif_connect we have to do it in a loop (bounded by nr_rings). Signed-off-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2016-01-04  drm/dp/mst: fix in RAD element access  (Mykola Lysenko)
This is needed to receive the correct port number from the RAD, so that the MSTB can be found. Acked-by: Dave Airlie <airlied@gmail.com> Signed-off-by: Mykola Lysenko <Mykola.Lysenko@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
2016-01-04  drm/dp/mst: fix in MSTB RAD initialization  (Mykola Lysenko)
This fix is needed to support more than two branch displays, so that the RAD address consists of at least 2 elements. Acked-by: Dave Airlie <airlied@gmail.com> Signed-off-by: Mykola Lysenko <Mykola.Lysenko@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
2016-01-04  drm/dp/mst: always send reply for UP request  (Mykola Lysenko)
We should always send a reply for an UP request in order to make the downstream device clean up resources appropriately. The issue was that the reply for an UP request was sent only once. Acked-by: Dave Airlie <airlied@gmail.com> Signed-off-by: Mykola Lysenko <Mykola.Lysenko@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
2016-01-04  drm/dp/mst: process broadcast messages correctly  (Mykola Lysenko)
In case a broadcast message is received in an UP request, the RAD cannot be used to identify the message originator. The message should be parsed, and the originator should be found by the GUID from the parsed message. Also reply with a broadcast in case a broadcast message was received (for now it is always broadcast). Acked-by: Dave Airlie <airlied@gmail.com> Signed-off-by: Mykola Lysenko <Mykola.Lysenko@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
2016-01-04  hwmon: (ibmaem) constify aem_rw_sensor_template and aem_ro_sensor_template structures  (Julia Lawall)
The aem_rw_sensor_template and aem_ro_sensor_template structures are never modified, so declare them as const. Done with the help of Coccinelle. Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: Guenter Roeck <linux@roeck-us.net>
2016-01-04  netfilter: nf_ct_helper: define pr_fmt()  (Pablo Neira Ayuso)
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2016-01-04  netfilter: nf_tables: add forward expression to the netdev family  (Pablo Neira Ayuso)
You can use this to forward packets from ingress to the egress path of the specified interface. This provides a fast path to bounce packets from one interface to another specific destination interface. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2016-01-04  ARM: 8481/2: drivers: psci: replace psci firmware calls  (Jens Wiklander)
Switch to using a generic interface for issuing SMC/HVC based on the ARM SMC Calling Convention. Removes the now unused psci-call.S. Acked-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2016-01-04  ARM: 8480/2: arm64: add implementation for arm-smccc  (Jens Wiklander)
Adds implementation for arm-smccc and enables CONFIG_HAVE_SMCCC. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2016-01-04  ARM: 8479/2: add implementation for arm-smccc  (Jens Wiklander)
Adds implementation for arm-smccc and enables CONFIG_HAVE_SMCCC for architectures that may support arm-smccc. It's the responsibility of the caller to know if the SMC instruction is supported by the platform. Reviewed-by: Lars Persson <lars.persson@axis.com> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2016-01-04  ARM: 8478/2: arm/arm64: add arm-smccc  (Jens Wiklander)
Adds helpers to do SMC and HVC based on ARM SMC Calling Convention. CONFIG_HAVE_ARM_SMCCC is enabled for architectures that may support the SMC or HVC instruction. It's the responsibility of the caller to know if the SMC instruction is supported by the platform. This patch doesn't provide an implementation of the declared functions. Later patches will bring in implementations and set CONFIG_HAVE_ARM_SMCCC for ARM and ARM64 respectively. Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2016-01-04  ftrace/scripts: Fix incorrect use of sprintf in recordmcount  (Colin Ian King)
Fix build warning:
scripts/recordmcount.c:589:4: warning: format not a string literal and no format arguments [-Wformat-security]
    sprintf("%s: failed\n", file);
Fixes: a50bd43935586 ("ftrace/scripts: Have recordmcount copy the object file") Link: http://lkml.kernel.org/r/1451516801-16951-1-git-send-email-colin.king@canonical.com Cc: Li Bin <huawei.libin@huawei.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Will Deacon <will.deacon@arm.com> Cc: stable@vger.kernel.org # 2.6.37+ Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
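The warning points at a classic mistake: sprintf()'s first argument is the destination buffer, so here the string literal is used as the destination and 'file' becomes the format string. A standalone sketch of the problem and one plausible fix (the exact upstream fix is not shown in this log):

    #include <stdio.h>

    static void report_failure(const char *file)
    {
        /* Broken: the string literal is treated as the destination buffer,
         * and 'file' is interpreted as the format string. */
        /* sprintf("%s: failed\n", file); */

        /* One reasonable replacement: print the message to stderr. */
        fprintf(stderr, "%s: failed\n", file);
    }

    int main(void)
    {
        report_failure("vmlinux.o");
        return 0;
    }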
2016-01-04  drm: powerplay: use div64_s64 instead of do_div  (Arnd Bergmann)
The newly added code for Fiji triggers a valid compiler warning about invalid use of the do_div macro:
In file included from powerplay/hwmgr/ppatomctrl.c:31:0:
drivers/gpu/drm/amd/amdgpu/../powerplay/hwmgr/ppevvmath.h: In function 'fDivide':
drivers/gpu/drm/amd/amdgpu/../powerplay/hwmgr/ppevvmath.h:382:89: warning: comparison of distinct pointer types lacks a cast
    do_div(longlongX, longlongY); /*Q(32,32) divided by Q(16,16) = Q(16,16) Back to original format */
do_div() divides an unsigned 64-bit number by an unsigned 32-bit number. The code instead wants to divide two signed 64-bit numbers, which is done using the div64_s64 function. Reviewed-by: Thierry Reding <treding@nvidia.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Fixes: 770911a3cfbb ("drm/amd/powerplay: add/update headers for Fiji SMU and DPM") Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
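For reference, the two kernel helpers have different shapes, which is why the swap is not purely mechanical. A sketch of their semantics (not the powerplay code; divide_examples and its values are made up):

    #include <linux/types.h>
    #include <linux/math64.h>   /* div64_s64() */
    #include <asm/div64.h>      /* do_div() */

    static s64 divide_examples(void)
    {
        u64 dividend = 1000000000ULL;
        u32 small_divisor = 7;
        s64 a = -1000000000LL, b = 3;

        /* do_div(): 64-bit unsigned / 32-bit unsigned.
         * It modifies 'dividend' in place and evaluates to the remainder. */
        u32 rem = do_div(dividend, small_divisor);

        /* div64_s64(): signed 64-bit / signed 64-bit, returns the quotient.
         * This is what the fixed-point Q(32,32)/Q(16,16) math needs. */
        s64 quotient = div64_s64(a, b);

        return quotient + (s64)dividend + rem;
    }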
2016-01-04  [um] mconsole: don't open-code memdup_user_nul()  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-01-04  [um] hostaudio: don't open-code memdup_user()  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-01-04  HFS wants 8Kb per-superblock allocation; just use kmalloc()  (Al Viro)
... rather than play with __get_free_pages() (and figuring out the allocation order, etc.) Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-01-04  jfs: microoptimize get_zeroed_page / virt_to_page  (Al Viro)
get_zeroed_page does alloc_page and returns page_address of the result; subsequent virt_to_page will recover the page, but since the caller needs both page and its page_address() anyway, why bother going through that wrapper at all? Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
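The point is easier to see side by side. A hedged sketch of the two equivalent shapes, as a generic kernel idiom rather than the actual jfs diff (the helper names are made up):

    #include <linux/gfp.h>
    #include <linux/mm.h>

    /* Before: allocate via the address-returning wrapper, then map back to the page. */
    static struct page *alloc_zeroed_before(void **vaddr)
    {
        unsigned long addr = get_zeroed_page(GFP_KERNEL);

        if (!addr)
            return NULL;
        *vaddr = (void *)addr;
        return virt_to_page(addr);   /* extra round-trip just to recover the page */
    }

    /* After: allocate the page directly and take its address once. */
    static struct page *alloc_zeroed_after(void **vaddr)
    {
        struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);

        if (!page)
            return NULL;
        *vaddr = page_address(page);
        return page;
    }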
2016-01-04  ... and a couple in net/9p  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-01-04  md: more open-coded offset_in_page()  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>