    x86/tdx: Fix race between set_memory_encrypted() and load_unaligned_zeropad() · 195edce0
    Kirill A. Shutemov authored
    tl;dr: There is a race in the TDX private<=>shared conversion code
           which could kill the TDX guest.  Fix it by changing conversion
           ordering to eliminate the window.
    
    TDX hardware maintains metadata to track which pages are private and
    shared. Additionally, TDX guests use the guest x86 page tables to
    specify whether a given mapping is intended to be private or shared.
    Bad things happen when the intent and metadata do not match.
    
    So there are two things in play:
     1. "the page" -- the physical TDX page metadata
     2. "the mapping" -- the guest-controlled x86 page table intent
    
    For instance, an unrecoverable exit to VMM occurs if a guest touches a
    private mapping that points to a shared physical page.
    
    In summary:
    	* Private mapping => Private Page == OK (obviously)
    	* Shared mapping  => Shared Page  == OK (obviously)
    	* Private mapping => Shared Page  == BIG BOOM!
    	* Shared mapping  => Private Page == OK-ish
    	  (A read will generate a recoverable #VE, handled via handle_mmio())
    
    Enter load_unaligned_zeropad(). It can touch memory that is adjacent but
    otherwise unrelated to the memory it needs to touch. It will cause one
    of those unrecoverable exits (aka. BIG BOOM) if it blunders into a
    private mapping pointing to a shared page.
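
    Roughly, a simplified sketch of the x86 helper (see
    arch/x86/include/asm/word-at-a-time.h for the real implementation):

    	static inline unsigned long load_unaligned_zeropad(const void *addr)
    	{
    		unsigned long ret;

    		/*
    		 * Do a word-sized load even if it crosses into the next
    		 * page.  If the crossing access faults, the exception
    		 * table fixup zero-pads the result instead of oopsing.
    		 */
    		asm volatile("1:	mov %[mem], %[ret]\n"
    			     "2:\n"
    			     _ASM_EXTABLE_TYPE(1b, 2b, EX_TYPE_ZEROPAD)
    			     : [ret] "=r" (ret)
    			     : [mem] "m" (*(unsigned long *)addr));

    		return ret;
    	}

    The fixup only helps when the stray access faults.  In TDX, an access
    through a private mapping to a shared page does not fault in a fixable
    way; it takes down the guest.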
    
    This is a problem when __set_memory_enc_pgtable() converts pages from
    shared to private. It first changes the mapping and second modifies
    the TDX page metadata.  It's moving from:
    
            * Shared mapping  => Shared Page  == OK
    to:
            * Private mapping => Shared Page  == BIG BOOM!
    
    This means that there is a window with a private mapping pointing to a
    shared page where load_unaligned_zeropad() can strike.
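
    In other words, the pre-patch ordering in __set_memory_enc_pgtable()
    was roughly this (heavily simplified, not a verbatim quote of
    arch/x86/mm/pat/set_memory.c):

    	static int __set_memory_enc_pgtable(unsigned long addr, int numpages,
    					    bool enc)
    	{
    		int ret;

    		/* 1. Flip the shared bit in the guest page tables. */
    		ret = set_clr_page_enc_bit(addr, numpages, enc);	/* placeholder for the CPA machinery */
    		if (ret)
    			return ret;

    		/*
    		 * Window: for a shared=>private conversion the mapping is
    		 * already private here, but the TDX page is still shared.
    		 * load_unaligned_zeropad() wandering in now is fatal.
    		 */

    		/* 2. Only now update the TDX page metadata. */
    		if (!x86_platform.guest.enc_status_change_finish(addr, numpages, enc))
    			return -EIO;

    		return 0;
    	}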
    
    Add a TDX handler for guest.enc_status_change_prepare(). This converts
    the page from shared to private *before* the mapping becomes private.
    This ensures that there is never a private mapping to a shared page.
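
    Roughly, a sketch of the new prepare hook (details in tdx.c may
    differ; tdx_enc_status_changed() is the existing helper that asks the
    VMM to remap the GPA and accepts the memory):

    	static bool tdx_enc_status_change_prepare(unsigned long vaddr,
    						  int numpages, bool enc)
    	{
    		/*
    		 * Shared=>private: flip the TDX page metadata to private
    		 * *before* the mapping is switched to private.
    		 */
    		if (enc)
    			return tdx_enc_status_changed(vaddr, numpages, enc);

    		return true;
    	}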
    
    Leave guest.enc_status_change_finish() in place, but only use it for
    private=>shared conversions.  This delays updating the TDX metadata
    marking the page shared until *after* the mapping has been converted to
    shared.  This also ensures that there is never a private mapping to a
    shared page.
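
    The counterpart sketch, plus the hook wiring in tdx_early_init()
    (again simplified; names and details per tdx.c, shown only to
    illustrate the ordering):

    	static bool tdx_enc_status_change_finish(unsigned long vaddr,
    						 int numpages, bool enc)
    	{
    		/*
    		 * Private=>shared: flip the TDX page metadata to shared
    		 * only *after* the mapping has been switched to shared.
    		 */
    		if (!enc)
    			return tdx_enc_status_changed(vaddr, numpages, enc);

    		return true;
    	}

    	x86_platform.guest.enc_status_change_prepare = tdx_enc_status_change_prepare;
    	x86_platform.guest.enc_status_change_finish  = tdx_enc_status_change_finish;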
    
    [ dhansen: rewrite changelog ]
    
    Fixes: 7dbde763 ("x86/mm/cpa: Add support for TDX shared memory")
    Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
    Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
    Link: https://lore.kernel.org/all/20230606095622.1939-3-kirill.shutemov%40linux.intel.com