path: root/kernel/kexec_core.c
author:    Pavel Tatashin <pasha.tatashin@soleen.com>  2019-12-04 10:59:15 -0500
committer: Will Deacon <will@kernel.org>               2020-01-08 16:32:55 +0000
commit:    de68e4daea9084df4c614d31e2061d5d31bf24f4 (patch)
tree:      d703d520c70191a0069260df737fe25f56303181 /kernel/kexec_core.c
parent:    d42cc530b18db2dd9de621238d33670841aabc36 (diff)
kexec: add machine_kexec_post_load()
It is the same as machine_kexec_prepare(), but is called after segments are loaded. This way, an architecture can do processing work with the already loaded relocation segments. One such example is arm64: it has to have segments loaded in order to create a page table, but it cannot do that at kexec time, because at that point allocations are no longer possible.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Acked-by: Dave Young <dyoung@redhat.com>
Signed-off-by: Will Deacon <will@kernel.org>
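As a rough, hedged sketch of the intended sequence only (the generic call sites fall outside the kernel/kexec_core.c diffstat shown below, and the wrapper name load_segments_and_post_process() is hypothetical), the new hook is meant to run once every segment has been placed:

/*
 * Hedged sketch, not the verbatim call-site change: load all segments,
 * finish the image, then give the architecture a chance to post-process
 * the loaded segments while allocations are still possible.
 */
static int load_segments_and_post_process(struct kimage *image)
{
	unsigned long i;
	int ret;

	for (i = 0; i < image->nr_segments; i++) {
		ret = kimage_load_segment(image, &image->segment[i]);
		if (ret)
			return ret;
	}

	kimage_terminate(image);

	/* Segments are in place now, so the arch hook can inspect them. */
	return machine_kexec_post_load(image);
}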
Diffstat (limited to 'kernel/kexec_core.c')
-rw-r--r--  kernel/kexec_core.c  6
1 file changed, 6 insertions, 0 deletions
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index f7ae04b8de6f..c19c0dad1ebe 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -589,6 +589,12 @@ static void kimage_free_extra_pages(struct kimage *image)
 	kimage_free_page_list(&image->unusable_pages);
 }
+
+int __weak machine_kexec_post_load(struct kimage *image)
+{
+	return 0;
+}
+
 void kimage_terminate(struct kimage *image)
 {
 	if (*image->entry != 0)
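For illustration only, a hedged sketch of how an architecture such as arm64 might override the __weak default added above; arch_build_kexec_page_table() is a hypothetical stand-in for the arch-specific page-table work the commit message describes, while the mem/memsz fields of struct kexec_segment give each loaded segment's final destination.

/*
 * Hedged sketch, not taken from this patch: a possible arch override of
 * the __weak default.  arch_build_kexec_page_table() is hypothetical.
 */
int machine_kexec_post_load(struct kimage *image)
{
	unsigned long i;

	for (i = 0; i < image->nr_segments; i++) {
		/* Destination addresses are final once segments are loaded. */
		int ret = arch_build_kexec_page_table(image->segment[i].mem,
						      image->segment[i].memsz);
		if (ret)
			return ret;
	}

	return 0;
}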