/* SPDX-License-Identifier: GPL-2.0 */
/**
 * Copyright(c) 2016-20 Intel Corporation.
 *
 * Contains the software defined data structures for enclaves.
 */
#ifndef _X86_ENCL_H
#define _X86_ENCL_H

#include <linux/cpumask.h>
#include <linux/kref.h>
#include <linux/list.h>
#include <linux/mm_types.h>
#include <linux/mmu_notifier.h>
#include <linux/mutex.h>
#include <linux/notifier.h>
#include <linux/srcu.h>
#include <linux/workqueue.h>
#include <linux/xarray.h>
#include "sgx.h"

/* 'desc' bits holding the offset in the VA (version array) page. */
#define SGX_ENCL_PAGE_VA_OFFSET_MASK	GENMASK_ULL(11, 3)

/* 'desc' bit marking that the page is being reclaimed. */
#define SGX_ENCL_PAGE_BEING_RECLAIMED	BIT(3)

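/*
 * Per-page state for one page of an enclave's address range. 'desc' packs
 * the page's address with the flag bits defined above, 'epc_page' points to
 * the backing EPC page while the page is resident, and 'va_page' names the
 * Version Array page whose slot is used when the page is swapped out.
 */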
struct sgx_encl_page {
	unsigned long desc;
	unsigned long vm_max_prot_bits:8;
	enum sgx_page_type type:16;
	struct sgx_epc_page *epc_page;
	struct sgx_encl *encl;
	struct sgx_va_page *va_page;
};

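/* Enclave state bits kept in the 'flags' field of struct sgx_encl. */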
enum sgx_encl_flags {
	SGX_ENCL_IOCTL = BIT(0),
	SGX_ENCL_DEBUG = BIT(1),
	SGX_ENCL_CREATED = BIT(2),
	SGX_ENCL_INITIALIZED = BIT(3),
};

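/*
 * One entry on sgx_encl.mm_list for each mm_struct that has the enclave
 * mapped; the mmu_notifier drops the entry when the address space is torn
 * down.
 */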
struct sgx_encl_mm {
	struct sgx_encl *encl;
	struct mm_struct *mm;
	struct list_head list;
	struct mmu_notifier mmu_notifier;
};

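/*
 * The enclave itself: 'base' and 'size' describe its address range, 'secs'
 * is its SECS control page, 'page_array' indexes the enclave's pages, and
 * 'backing' is the shmem file that receives swapped-out page contents.
 */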
struct sgx_encl {
	unsigned long base;
	unsigned long size;
	unsigned long flags;
	unsigned int page_cnt;
	unsigned int secs_child_cnt;
	struct mutex lock;
	struct xarray page_array;
	struct sgx_encl_page secs;
	unsigned long attributes;
	unsigned long attributes_mask;

	cpumask_t cpumask;
	struct file *backing;
	struct kref refcount;
	struct list_head va_pages;
	unsigned long mm_list_version;
	struct list_head mm_list;
	spinlock_t mm_lock;
	struct srcu_struct srcu;
};

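/*
 * A Version Array (VA) page is an EPC page divided into 512 eight-byte
 * slots; each slot stores the version of one swapped-out enclave page so
 * that a stale copy cannot be replayed back into the enclave.
 */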
#define SGX_VA_SLOT_COUNT 512

struct sgx_va_page {
	struct sgx_epc_page *epc_page;
	DECLARE_BITMAP(slots, SGX_VA_SLOT_COUNT);
	struct list_head list;
};

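/*
 * References to the shmem pages backing one swapped-out enclave page:
 * 'contents' holds the encrypted page data and 'pcmd' the Paging Crypto
 * MetaData entry located at 'pcmd_offset' within that page.
 */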
struct sgx_backing {
	struct page *contents;
	struct page *pcmd;
	unsigned long pcmd_offset;
};

extern const struct vm_operations_struct sgx_vm_ops;

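/*
 * Look up the VMA covering 'addr' in 'mm' and verify that it is an SGX
 * enclave mapping, i.e. that it is served by sgx_vm_ops.
 */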
static inline int sgx_encl_find(struct mm_struct *mm, unsigned long addr,
				struct vm_area_struct **vma)
{
	struct vm_area_struct *result;

	result = vma_lookup(mm, addr);
	if (!result || result->vm_ops != &sgx_vm_ops)
		return -EINVAL;

	*vma = result;

	return 0;
}

int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
		     unsigned long end, vm_flags_t vm_flags);

bool current_is_ksgxd(void);
void sgx_encl_release(struct kref *ref);
int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm);
const cpumask_t *sgx_encl_cpumask(struct sgx_encl *encl);

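/* Helpers for the shmem backing store used when pages are reclaimed. */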
int sgx_encl_alloc_backing(struct sgx_encl *encl, unsigned long page_index,
			   struct sgx_backing *backing);
void sgx_encl_put_backing(struct sgx_backing *backing);
int sgx_encl_test_and_clear_young(struct mm_struct *mm,
				  struct sgx_encl_page *page);
struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl,
					  unsigned long offset,
					  u64 secinfo_flags);
void sgx_zap_enclave_ptes(struct sgx_encl *encl, unsigned long addr);
struct sgx_epc_page *sgx_alloc_va_page(bool reclaim);

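/* Helpers for allocating and releasing slots in a Version Array page. */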
unsigned int sgx_alloc_va_slot(struct sgx_va_page *va_page);
void sgx_free_va_slot(struct sgx_va_page *va_page, unsigned int offset);
bool sgx_va_page_full(struct sgx_va_page *va_page);
void sgx_encl_free_epc_page(struct sgx_epc_page *page);
struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,
					 unsigned long addr);
struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl, bool reclaim);
void sgx_encl_shrink(struct sgx_encl *encl, struct sgx_va_page *va_page);

#endif /* _X86_ENCL_H */