path: root/arch/x86
Age | Commit message | Author
2018-12-14 | kvm: x86: remove unnecessary recalculate_apic_map | Peng Hao
In the previous code, the variable apic_sw_disabled influenced recalculate_apic_map, but "KVM: x86: simplify kvm_apic_map" (commit 3b5a5ffa928a3f875b0d5dd284eeb7c322e1688a) deleted the access to apic_sw_disabled in recalculate_apic_map. Signed-off-by: Peng Hao <peng.hao2@zte.com.cn> Reviewed-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | kvm: svm: remove unused struct definition | Peng Hao
The structure svm_init_data is never used, so remove it. Signed-off-by: Peng Hao <peng.hao2@zte.com.cn> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | kvm: vmx: Skip all SYSCALL MSRs in setup_msrs() when !EFER.SCE | Jim Mattson
Like IA32_STAR, IA32_LSTAR and IA32_FMASK only need to contain guest values on VM-entry when the guest is in long mode and EFER.SCE is set. Signed-off-by: Jim Mattson <jmattson@google.com> Reviewed-by: Peter Shier <pshier@google.com> Reviewed-by: Marc Orr <marcorr@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
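A minimal sketch of the resulting setup_msrs() condition; the helpers and bookkeeping (__find_msr_index(), move_msr_up(), save_nmsrs) follow the existing vmx.c code paths and are illustrative only:

    #ifdef CONFIG_X86_64
    	/*
    	 * SYSCALL is only usable in long mode with EFER.SCE set, so the
    	 * guest's STAR, LSTAR and SYSCALL_MASK (FMASK) values do not need
    	 * to be loaded into the hardware MSRs otherwise.
    	 */
    	if (is_long_mode(&vmx->vcpu) && (vmx->vcpu.arch.efer & EFER_SCE)) {
    		index = __find_msr_index(vmx, MSR_STAR);
    		if (index >= 0)
    			move_msr_up(vmx, index, save_nmsrs++);
    		index = __find_msr_index(vmx, MSR_LSTAR);
    		if (index >= 0)
    			move_msr_up(vmx, index, save_nmsrs++);
    		index = __find_msr_index(vmx, MSR_SYSCALL_MASK);
    		if (index >= 0)
    			move_msr_up(vmx, index, save_nmsrs++);
    	}
    #endif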
2018-12-14 | kvm: vmx: Don't set hardware IA32_CSTAR MSR on VM-entry | Jim Mattson
SYSCALL raises #UD in compatibility mode on Intel CPUs, so it's pointless to load the guest's IA32_CSTAR value into the hardware MSR. IA32_CSTAR still provides 48 bits of storage on Intel CPUs that have CPUID.80000001:EDX.LM[bit 29] set, so we cannot remove it from the vmx_msr_index[] array. Signed-off-by: Jim Mattson <jmattson@google.com> Reviewed-by: Peter Shier <pshier@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | kvm: vmx: Document the need for MSR_STAR in i386 builds | Jim Mattson
Add a comment explaining why MSR_STAR must be included in vmx_msr_index[] even for i386 builds. The elided comment has not been relevant since move_msr_up() was introduced in commit a75beee6e4f5d ("KVM: VMX: Avoid saving and restoring msrs on lightweight vmexit"). Signed-off-by: Jim Mattson <jmattson@google.com> Reviewed-by: Peter Shier <pshier@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | kvm: vmx: Set IA32_TSC_AUX for legacy mode guests | Jim Mattson
RDTSCP is supported in legacy mode as well as long mode. The IA32_TSC_AUX MSR should be set to the correct guest value before entering any guest that supports RDTSCP. Fixes: 4e47c7a6d714 ("KVM: VMX: Add instruction rdtscp support for guest") Signed-off-by: Jim Mattson <jmattson@google.com> Reviewed-by: Peter Shier <pshier@google.com> Reviewed-by: Marc Orr <marcorr@google.com> Reviewed-by: Liran Alon <liran.alon@oracle.com> Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
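A hedged sketch of the corrected check, with the long-mode gate removed; helper names again follow the existing vmx.c code and are illustrative:

    	/*
    	 * RDTSCP works in legacy mode as well as long mode, so load the
    	 * guest's IA32_TSC_AUX whenever the guest can execute RDTSCP,
    	 * independent of EFER.LMA.
    	 */
    	index = __find_msr_index(vmx, MSR_TSC_AUX);
    	if (index >= 0 && guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDTSCP))
    		move_msr_up(vmx, index, save_nmsrs++);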
2018-12-14 | KVM: nVMX: Move nested code to dedicated files | Sean Christopherson
From a functional perspective, this is (supposed to be) a straight copy-paste of code. Code was moved piecemeal to nested.c, as not all code that could/should be moved was obviously nested-only. The nested code was then re-ordered as needed to compile, i.e. diff stats may not show this as being a "pure" move despite there being no intended changes in functionality. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: VMX: Expose nested_vmx_allowed() to nested VMX as a non-inline | Sean Christopherson
Exposing only the function allows @nested, i.e. the module param, to be statically defined in vmx.c, ensuring we aren't unnecessarily checking said variable in the nested code. nested_vmx_allowed() is exposed due to the need to verify nested support in vmx_{get,set}_nested_state(). The downside is that nested_vmx_allowed() likely won't be inlined in vmx_{get,set}_nested_state(), but that should be a non-issue as they're not a hot path. Keeping vmx_{get,set}_nested_state() in vmx.c isn't a viable option as they need access to several nested-only functions. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: VMX: Expose various getters and setters to nested VMX | Sean Christopherson
...as they're used directly by the nested code. This will allow moving the bulk of the nested code out of vmx.c without concurrent changes to vmx.h. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: VMX: Expose misc variables needed for nested VMX | Sean Christopherson
Expose vmx_msr_index, vmx_return and host_efer via vmx.h so that the nested code can be moved out of vmx.c. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14KVM: nVMX: Move "vmcs12 to shadow/evmcs sync" to helper functionSean Christopherson
...so that the function doesn't need to be created when moving the nested code out of vmx.c. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: nVMX: Call nested_vmx_setup_ctls_msrs() iff @nested is true | Sean Christopherson
...so that it doesn't need access to @nested. The only case where the provided struct isn't already zeroed is the call from vmx_create_vcpu() as setup_vmcs_config() zeroes the struct in the other use cases. This will allow @nested to be statically defined in vmx.c, i.e. this removes the last direct reference from nested code. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: nVMX: Set callbacks for nested functions during hardware setup | Sean Christopherson
...in nested-specific code so that they can eventually be moved out of vmx.c, e.g. into nested.c. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: VMX: Move the hardware {un}setup functions to the bottom | Sean Christopherson
...so that future patches can reference e.g. @kvm_vmx_exit_handlers without having to simultaneously move a big chunk of code. Speaking from experience, resolving merge conflicts is an absolute nightmare without pre-moving the code. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: x86: nVMX: Allow nested_enable_evmcs to be NULL | Sean Christopherson
...so that it can be conditionally set by the VMX code, i.e. iff @nested is true. This will in turn allow it to be moved out of vmx.c and into a nested-specific file. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: VMX: Move nested hardware/vcpu {un}setup to helper functions | Sean Christopherson
Eventually this will allow us to move the nested VMX code out of vmx.c. Note that this also effectively wraps @enable_shadow_vmcs with @nested so that it too can be moved out of vmx.c. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: VMX: Move VMX instruction wrappers to a dedicated header file | Sean Christopherson
VMX has a few hundred lines of code just to wrap various VMX specific instructions, e.g. VMWREAD, INVVPID, etc... Move them to a dedicated header so it's easier to find/isolate the boilerplate. With this change, more inlines can be moved from vmx.c to vmx.h. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: VMX: Move eVMCS code to dedicated files | Sean Christopherson
The header, evmcs.h, already exists and contains a fair amount of code, but there are a few pieces in vmx.c that can be moved verbatim. In addition, move an array definition to evmcs.c to prepare for multiple consumers of evmcs.h. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: VMX: Add vmx.h to hold VMX definitions | Sean Christopherson
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: nVMX: Move vmcs12 code to dedicated files | Sean Christopherson
vmcs12 is the KVM-defined struct used to track a nested VMCS, e.g. a VMCS created by L1 for L2. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: VMX: Move VMCS definitions to dedicated file | Sean Christopherson
This isn't intended to be a pure reflection of hardware, e.g. struct loaded_vmcs and struct vmcs_host_state are KVM-defined constructs. Similar to capabilities.h, this is a standalone file to avoid circular dependencies between yet-to-be-created vmx.h and nested.h files. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: VMX: Expose various module param vars via capabilities.h | Sean Christopherson
Expose the variables associated with various module params that are needed by the nested VMX code. There is no ulterior logic for what variables are/aren't exposed, this is purely "what's needed by the nested code". Note that @nested is intentionally not exposed. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: VMX: Move capabilities structs and helpers to dedicated file | Sean Christopherson
Defining a separate capabilities.h as opposed to putting this code in e.g. vmx.h avoids circular dependencies between (the yet-to-be-added) vmx.h and nested.h. The aforementioned circular dependencies are why struct nested_vmx_msrs also resides in capabilities.h instead of e.g. nested.h. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: VMX: Pass vmx_capability struct to setup_vmcs_config() | Sean Christopherson
...instead of referencing the global struct. This will allow moving setup_vmcs_config() to a separate file that may not have access to the global variable. Modify nested_vmx_setup_ctls_msrs() appropriately since vmx_capability.ept may not be accurate when called by vmx_check_processor_compat(). No functional change intended. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: VMX: Properly handle dynamic VM Entry/Exit controls | Sean Christopherson
The EFER and PERF_GLOBAL_CTRL MSRs have dedicated VM Entry/Exit controls that KVM dynamically toggles based on whether or not the guest's value for each MSR differs from the host's. Handle the dynamic behavior by adding a helper that clears the dynamic bits, so they aren't set when initializing the VMCS fields outside of the dynamic toggling flow. This makes the handling consistent with similar behavior for other controls, e.g. pin, exec and sec_exec. More importantly, it eliminates two global bools that are stealthily modified by setup_vmcs_config(). Opportunistically clean up a comment and a print related to the IA32_PERF_GLOBAL_CTRL errata. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
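A minimal sketch of the kind of helper described, using the standard VMCS control bit names; the helper names themselves are illustrative:

    static u32 vmx_vmentry_ctrl(void)
    {
    	/* Loading guest EFER/PERF_GLOBAL_CTRL is toggled at runtime, so
    	 * keep those bits clear when initializing the VMCS field. */
    	return vmcs_config.vmentry_ctrl &
    	       ~(VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL | VM_ENTRY_LOAD_IA32_EFER);
    }

    static u32 vmx_vmexit_ctrl(void)
    {
    	return vmcs_config.vmexit_ctrl &
    	       ~(VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL | VM_EXIT_LOAD_IA32_EFER);
    }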
2018-12-14 | KVM: VMX: Move caching of MSR_IA32_XSS to hardware_setup() | Sean Christopherson
MSR_IA32_XSS has no relation to the VMCS whatsoever; it doesn't belong in setup_vmcs_config(), and its reference to host_xss prevents moving setup_vmcs_config() to a dedicated file. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: VMX: Drop the "vmx" prefix from vmx_evmcs.h | Sean Christopherson
VMX specific files now reside in a dedicated subdirectory, i.e. the file name prefix is redundant. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: VMX: rename vmx_shadow_fields.h to vmcs_shadow_fields.h | Sean Christopherson
VMX specific files now reside in a dedicated subdirectory. Drop the "vmx" prefix, which is redundant, and add a "vmcs" prefix to clarify that the file is referring to VMCS shadow fields. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: VMX: Move VMX specific files to a "vmx" subdirectory | Sean Christopherson
...to prepare for shattering vmx.c into multiple files without having to prepend "vmx_" to all new files. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: x86: Add requisite includes to hyperv.h | Sean Christopherson
Until this point vmx.c has been the only consumer and included the file after many others. Prepare for multiple consumers, i.e. the shattering of vmx.c. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: x86: Add requisite includes to kvm_cache_regs.h | Sean Christopherson
Until this point vmx.c has been the only consumer and included the file after many others. Prepare for multiple consumers, i.e. the shattering of vmx.c. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: VMX: Alphabetize the includes in vmx.c | Sean Christopherson
...to prepare for the creation of a "vmx" subdirectory that will contain a variety of headers. Clean things up now to avoid making a bigger mess in the future. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: nVMX: Allocate and configure VM{READ,WRITE} bitmaps iff enable_shadow_vmcs | Sean Christopherson
...and make enable_shadow_vmcs depend on nested. Aside from the obvious memory savings, this will allow moving the relevant code out of vmx.c in the future, e.g. to a nested specific file. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | KVM: nVMX: Free the VMREAD/VMWRITE bitmaps if alloc_kvm_area() fails | Sean Christopherson
Fixes: 34a1cd60d17f ("kvm: x86: vmx: move some vmx setting from vmx_init() to hardware_setup()") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | kvm: introduce manual dirty log reprotect | Paolo Bonzini
There are two problems with KVM_GET_DIRTY_LOG. First, and less important, it can take kvm->mmu_lock for an extended period of time. Second, its user can actually see many false positives in some cases. The latter is due to a benign race like this: 1. KVM_GET_DIRTY_LOG returns a set of dirty pages and write protects them. 2. The guest modifies the pages, causing them to be marked dirty. 3. Userspace actually copies the pages. 4. KVM_GET_DIRTY_LOG returns those pages as dirty again, even though they were not written to since (3). This is especially a problem for large guests, where the time between (1) and (3) can be substantial. This patch introduces a new capability which, when enabled, makes KVM_GET_DIRTY_LOG not write-protect the pages it returns. Instead, userspace has to explicitly clear the dirty log bits just before using the content of the page. The new KVM_CLEAR_DIRTY_LOG ioctl can also operate at a 64-page granularity rather than requiring a full memslot to be synced; this way, the mmu_lock is taken for small amounts of time, and only a small amount of time will pass between the write protection of pages and the sending of their content. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
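A hedged userspace sketch of the new flow; the capability, ioctl and struct names (KVM_CAP_MANUAL_DIRTY_LOG_PROTECT, KVM_CLEAR_DIRTY_LOG, struct kvm_clear_dirty_log) follow this series and may differ in the final ABI, and slot/bitmap/first_page/num_pages are caller-provided context:

    	/* One-time opt-in: after this, KVM_GET_DIRTY_LOG only reports
    	 * dirty pages and no longer write-protects them. */
    	struct kvm_enable_cap cap = { .cap = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT };
    	ioctl(vm_fd, KVM_ENABLE_CAP, &cap);

    	/* Fetch the dirty bitmap for a memslot. */
    	struct kvm_dirty_log log = { .slot = slot, .dirty_bitmap = bitmap };
    	ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);

    	/* ... copy the dirty pages out ... */

    	/* Re-protect only the (64-page-granular) range that was consumed,
    	 * just before its content is actually used. */
    	struct kvm_clear_dirty_log clear = {
    		.slot		= slot,
    		.first_page	= first_page,
    		.num_pages	= num_pages,
    		.dirty_bitmap	= bitmap,
    	};
    	ioctl(vm_fd, KVM_CLEAR_DIRTY_LOG, &clear);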
2018-12-14 | kvm: rename last argument to kvm_get_dirty_log_protect | Paolo Bonzini
Once manual dirty log reprotection is enabled, kvm_get_dirty_log_protect's pointer argument will always be false on exit, because no TLB flush is needed until the manual re-protection operation. Rename it from "is_dirty" to "flush", which more accurately tells the caller what they have to do with it. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | kvm: make KVM_CAP_ENABLE_CAP_VM architecture agnostic | Paolo Bonzini
The first such capability to be handled in virt/kvm/ will be manual dirty page reprotection. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-12-14 | Merge branch 'khdr_fix' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest into HEAD | Paolo Bonzini
Merge topic branch from Shuah.
2018-12-13 | dma-mapping: bypass indirect calls for dma-direct | Christoph Hellwig
Avoid expensive indirect calls in the fast path DMA mapping operations by directly calling the dma_direct_* ops if we are using the directly mapped DMA operations. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Tested-by: Jesper Dangaard Brouer <brouer@redhat.com> Tested-by: Tony Luck <tony.luck@intel.com>
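A hedged sketch of the dispatch pattern in the fast path; the dma_is_direct()/dma_direct_map_page() helpers follow this series and the surrounding details are simplified:

    static inline dma_addr_t dma_map_page_attrs(struct device *dev,
    		struct page *page, size_t offset, size_t size,
    		enum dma_data_direction dir, unsigned long attrs)
    {
    	const struct dma_map_ops *ops = get_dma_ops(dev);

    	/* Call the direct-mapping implementation statically instead of
    	 * bouncing through ops->map_page() when no custom ops are set. */
    	if (dma_is_direct(ops))
    		return dma_direct_map_page(dev, page, offset, size, dir, attrs);
    	return ops->map_page(dev, page, offset, size, dir, attrs);
    }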
2018-12-13 | dma-direct: merge swiotlb_dma_ops into the dma_direct code | Christoph Hellwig
While the dma-direct code is (relatively) clean and simple, we actually have to use the swiotlb ops for the mapping on many architectures due to devices with addressing limits. Instead of keeping two implementations around, this commit allows the dma-direct implementation to call the swiotlb bounce buffering functions and thus share the guts of the mapping implementation. This also simplifies the dma-mapping setup on a few architectures where we don't have to differentiate which implementation to use. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Tested-by: Jesper Dangaard Brouer <brouer@redhat.com> Tested-by: Tony Luck <tony.luck@intel.com>
2018-12-13 | dma-mapping: always build the direct mapping code | Christoph Hellwig
All architectures except for sparc64 use the dma-direct code in some form, and even for sparc64 we had the discussion of a direct mapping mode a while ago. In preparation for directly calling the direct mapping code, don't make it optional; always build it in. This is a minor hardship for some powerpc and arm configs that don't pull it in yet (although they should in a release or two), and for sparc64, which currently doesn't need it at all, but it will significantly reduce the ifdef mess we'd otherwise need. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Tested-by: Jesper Dangaard Brouer <brouer@redhat.com> Tested-by: Tony Luck <tony.luck@intel.com>
2018-12-13 | KVM: x86: Allow Qemu/KVM to use PVH entry point | Maran Wilson
For certain applications it is desirable to rapidly boot a KVM virtual machine. In cases where legacy hardware and software support within the guest is not needed, Qemu should be able to boot directly into the uncompressed Linux kernel binary without the need to run firmware. There already exists an ABI to allow this for Xen PVH guests and the ABI is supported by Linux and FreeBSD: https://xenbits.xen.org/docs/unstable/misc/pvh.html This patch enables Qemu to use that same entry point for booting KVM guests. Signed-off-by: Maran Wilson <maran.wilson@oracle.com> Suggested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Suggested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Reviewed-by: Juergen Gross <jgross@suse.com> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2018-12-13 | xen/pvh: Move Xen code for getting mem map via hcall out of common file | Maran Wilson
We need to refactor PVH entry code so that support for other hypervisors like Qemu/KVM can be added more easily. The original design for PVH entry in Xen guests relies on being able to obtain the memory map from the hypervisor using a hypercall. When we extend the PVH entry ABI to support other hypervisors like Qemu/KVM, a new mechanism will be added that allows the guest to get the memory map without needing to use hypercalls. For Xen guests, the hypercall approach will still be supported. In preparation for adding support for other hypervisors, we can move the code that uses hypercalls into the Xen specific file. This will allow us to compile kernels in the future without CONFIG_XEN that are still capable of being booted as a Qemu/KVM guest via the PVH entry point. Signed-off-by: Maran Wilson <maran.wilson@oracle.com> Reviewed-by: Juergen Gross <jgross@suse.com> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2018-12-13 | xen/pvh: Move Xen specific PVH VM initialization out of common file | Maran Wilson
We need to refactor PVH entry code so that support for other hypervisors like Qemu/KVM can be added more easily. This patch moves the small block of code used for initializing Xen PVH virtual machines into the Xen specific file. This initialization is not going to be needed for Qemu/KVM guests. Moving it out of the common file is going to allow us to compile kernels in the future without CONFIG_XEN that are still capable of being booted as a Qemu/KVM guest via the PVH entry point. Signed-off-by: Maran Wilson <maran.wilson@oracle.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Juergen Gross <jgross@suse.com> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2018-12-13 | xen/pvh: Create a new file for Xen specific PVH code | Maran Wilson
We need to refactor PVH entry code so that support for other hypervisors like Qemu/KVM can be added more easily. The first step in that direction is to create a new file that will eventually hold the Xen specific routines. Signed-off-by: Maran Wilson <maran.wilson@oracle.com> Reviewed-by: Juergen Gross <jgross@suse.com> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2018-12-13 | xen/pvh: Move PVH entry code out of Xen specific tree | Maran Wilson
Once hypervisors other than Xen start using the PVH entry point for starting VMs, we would like the option of being able to compile PVH entry capable kernels without enabling CONFIG_XEN and all the code that comes along with that. To allow that, we are moving the PVH code out of Xen and into files sitting at a higher level in the tree. This patch is not introducing any code or functional changes, just moving files from one location to another. Signed-off-by: Maran Wilson <maran.wilson@oracle.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Juergen Gross <jgross@suse.com> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2018-12-13 | xen/pvh: Split CONFIG_XEN_PVH into CONFIG_PVH and CONFIG_XEN_PVH | Maran Wilson
In order to pave the way for hypervisors other than Xen to use the PVH entry point for VMs, we need to factor the PVH entry code into Xen specific and hypervisor agnostic components. The first step in doing that, is to create a new config option for PVH entry that can be enabled independently from CONFIG_XEN. Signed-off-by: Maran Wilson <maran.wilson@oracle.com> Reviewed-by: Juergen Gross <jgross@suse.com> Acked-by: Borislav Petkov <bp@suse.de> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2018-12-13 | crypto: x86/chacha - yield the FPU occasionally | Eric Biggers
To improve responsiveness, yield the FPU (temporarily re-enabling preemption) every 4 KiB encrypted/decrypted, rather than keeping preemption disabled during the entire encryption/decryption operation. Alternatively we could do this for every skcipher_walk step, but steps may be small in some cases, and yielding the FPU is expensive on x86. Suggested-by: Martin Willi <martin@strongswan.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
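A minimal sketch of the chunked FPU usage described above; chacha_dosimd() stands in for the SIMD core, and state/src/dst/bytes/nrounds are assumed surrounding context. Processing at most 4 KiB per kernel_fpu_begin()/kernel_fpu_end() section keeps preemption latency bounded:

    	while (bytes > 0) {
    		unsigned int todo = min_t(unsigned int, bytes, SZ_4K);

    		kernel_fpu_begin();
    		chacha_dosimd(state, dst, src, todo, nrounds);
    		kernel_fpu_end();	/* preemption point between chunks */

    		bytes -= todo;
    		src += todo;
    		dst += todo;
    	}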
2018-12-13 | crypto: x86/chacha - add XChaCha12 support | Eric Biggers
Now that the x86_64 SIMD implementations of ChaCha20 and XChaCha20 have been refactored to support varying the number of rounds, add support for XChaCha12. This is identical to XChaCha20 except for the number of rounds, which is 12 instead of 20. This can be used by Adiantum. Reviewed-by: Martin Willi <martin@strongswan.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-12-13 | crypto: x86/chacha20 - refactor to allow varying number of rounds | Eric Biggers
In preparation for adding XChaCha12 support, rename/refactor the x86_64 SIMD implementations of ChaCha20 to support different numbers of rounds. Reviewed-by: Martin Willi <martin@strongswan.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
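A hedged before/after sketch of the kind of signature change involved; the exact symbol names are illustrative, not a quote of the final code:

    	/* Before: the round count is baked into the routine. */
    	asmlinkage void chacha20_block_xor_ssse3(u32 *state, u8 *dst,
    						 const u8 *src, unsigned int len);

    	/* After: callers pass nrounds (20 or 12), so one SIMD core can
    	 * serve ChaCha20, XChaCha20 and XChaCha12. */
    	asmlinkage void chacha_block_xor_ssse3(u32 *state, u8 *dst,
    					       const u8 *src, unsigned int len,
    					       int nrounds);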