author	Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>	2018-09-07 18:49:08 +0200
committer	Boris Ostrovsky <boris.ostrovsky@oracle.com>	2018-09-14 08:51:10 -0400
commit	197ecb3802c04499d8ff4f8cb28f6efa008067db (patch)
tree	1041826ddc2a9de936923f3563a6e51fefac7511 /include/xen
parent	87dffe86d406bee8782cac2db035acb9a28620a7 (diff)
xen/balloon: add runtime control for scrubbing ballooned out pages
Scrubbing pages on initial balloon down can take some time, especially in the nested virtualization case (nested EPT is slow). When an HVM/PVH guest is started with memory= significantly lower than maxmem=, all the extra pages will be scrubbed before being returned to Xen. But since most of them weren't used at all at that point, Xen needs to populate them first (from the populate-on-demand pool). In the nested virt case (Xen inside KVM) this slows down guest boot by 15-30s with just 1.5GB needing to be returned to Xen.

Add a runtime parameter to enable/disable scrubbing, so that it can be disabled initially and then enabled back during boot (for example in the initramfs). Such usage relies on the assumptions that a) most pages ballooned out during initial boot weren't used at all, and b) even if they were, very few secrets are in the guest at that time (before any serious userspace kicks in).

Convert CONFIG_XEN_SCRUB_PAGES to CONFIG_XEN_SCRUB_PAGES_DEFAULT (also enabled by default), controlling the default value for the new runtime switch.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
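This diff is limited to include/xen, so the definition of the new xen_scrub_pages switch is not shown here. Below is a minimal sketch of how such a runtime switch could be wired up, assuming a definition in drivers/xen/mem-reservation.c, the kernel's core_param() helper, and the CONFIG_XEN_SCRUB_PAGES_DEFAULT option named above; none of this is part of the diff shown on this page.

/* Sketch only: a plausible definition site, outside this diff. */
#include <linux/moduleparam.h>
#include <xen/mem-reservation.h>

/*
 * The default follows the new Kconfig option; xen_scrub_pages= on the
 * kernel command line overrides it. Re-enabling scrubbing later in boot
 * (as the commit message describes) would additionally need a writable
 * knob, e.g. a sysfs attribute in the balloon driver (not shown here).
 */
bool xen_scrub_pages = IS_ENABLED(CONFIG_XEN_SCRUB_PAGES_DEFAULT);
core_param(xen_scrub_pages, xen_scrub_pages, bool, 0);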
Diffstat (limited to 'include/xen')
-rw-r--r--	include/xen/mem-reservation.h	7
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/include/xen/mem-reservation.h b/include/xen/mem-reservation.h
index 80b52b4945e9..a2ab516fcd2c 100644
--- a/include/xen/mem-reservation.h
+++ b/include/xen/mem-reservation.h
@@ -17,11 +17,12 @@
 
 #include <xen/page.h>
 
+extern bool xen_scrub_pages;
+
 static inline void xenmem_reservation_scrub_page(struct page *page)
 {
-#ifdef CONFIG_XEN_SCRUB_PAGES
-	clear_highpage(page);
-#endif
+	if (xen_scrub_pages)
+		clear_highpage(page);
 }
 
 #ifdef CONFIG_XEN_HAVE_PVMMU
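For context, here is a simplified and hypothetical call site (the real balloon-down path lives in drivers/xen/balloon.c, outside this diff), showing where the scrub hook sits before pages are handed back to Xen. With xen_scrub_pages disabled, the inline helper above becomes a no-op, which is what makes the initial balloon down cheap.

#include <xen/mem-reservation.h>

/* Hypothetical helper, not from this patch. */
static void scrub_pages_before_release(struct page **pages, int nr_pages)
{
	int i;

	for (i = 0; i < nr_pages; i++)
		xenmem_reservation_scrub_page(pages[i]);

	/* ...pages are subsequently released via XENMEM_decrease_reservation. */
}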