
Commit 9fef276

mhklinux authored and liuw committed
x86/hyperv: Use slow_virt_to_phys() in page transition hypervisor callback

In preparation for temporarily marking pages not present during a
transition between encrypted and decrypted, use slow_virt_to_phys() in
the hypervisor callback. As long as the PFN is correct,
slow_virt_to_phys() works even if the leaf PTE is not present. The
existing functions that depend on vmalloc_to_page() all require that the
leaf PTE be marked present, so they don't work.

Update the comments for slow_virt_to_phys() to note this broader usage
and the requirement to work even if the PTE is not marked present.

Signed-off-by: Michael Kelley <mhklinux@outlook.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Link: https://lore.kernel.org/r/20240116022008.1023398-2-mhklinux@outlook.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Message-ID: <20240116022008.1023398-2-mhklinux@outlook.com>
1 parent 04ed680 commit 9fef276

2 files changed: 19 additions & 5 deletions


arch/x86/hyperv/ivm.c (11 additions & 1 deletion)

@@ -515,6 +515,8 @@ static bool hv_vtom_set_host_visibility(unsigned long kbuffer, int pagecount, bo
 	enum hv_mem_host_visibility visibility = enc ?
 			VMBUS_PAGE_NOT_VISIBLE : VMBUS_PAGE_VISIBLE_READ_WRITE;
 	u64 *pfn_array;
+	phys_addr_t paddr;
+	void *vaddr;
 	int ret = 0;
 	bool result = true;
 	int i, pfn;
@@ -524,7 +526,15 @@ static bool hv_vtom_set_host_visibility(unsigned long kbuffer, int pagecount, bo
 		return false;
 
 	for (i = 0, pfn = 0; i < pagecount; i++) {
-		pfn_array[pfn] = virt_to_hvpfn((void *)kbuffer + i * HV_HYP_PAGE_SIZE);
+		/*
+		 * Use slow_virt_to_phys() because the PRESENT bit has been
+		 * temporarily cleared in the PTEs. slow_virt_to_phys() works
+		 * without the PRESENT bit while virt_to_hvpfn() or similar
+		 * does not.
+		 */
+		vaddr = (void *)kbuffer + (i * HV_HYP_PAGE_SIZE);
+		paddr = slow_virt_to_phys(vaddr);
+		pfn_array[pfn] = paddr >> HV_HYP_PAGE_SHIFT;
 		pfn++;
 
 		if (pfn == HV_MAX_MODIFY_GPA_REP_COUNT || i == pagecount - 1) {

arch/x86/mm/pat/set_memory.c (8 additions & 4 deletions)

@@ -755,10 +755,14 @@ pmd_t *lookup_pmd_address(unsigned long address)
  * areas on 32-bit NUMA systems. The percpu areas can
  * end up in this kind of memory, for instance.
  *
- * This could be optimized, but it is only intended to be
- * used at initialization time, and keeping it
- * unoptimized should increase the testing coverage for
- * the more obscure platforms.
+ * Note that as long as the PTEs are well-formed with correct PFNs, this
+ * works without checking the PRESENT bit in the leaf PTE. This is unlike
+ * the similar vmalloc_to_page() and derivatives. Callers may depend on
+ * this behavior.
+ *
+ * This could be optimized, but it is only used in paths that are not perf
+ * sensitive, and keeping it unoptimized should increase the testing coverage
+ * for the more obscure platforms.
  */
 phys_addr_t slow_virt_to_phys(void *__virt_addr)
 {
