1. 01 May, 2016 3 commits
    •
      Drivers: hv: balloon: don't crash when memory is added in non-sorted order · 77c0c973
      Vitaly Kuznetsov authored
      When we iterate through all HA regions in handle_pg_range() we assume
      that the regions are sorted in the list, so the 'start_pfn >= has->end_pfn'
      check is enough to find the proper region. Unfortunately that is not the
      case with WS2016, where the host can hot-add regions in a different
      order. We end up modifying the wrong HA region and crashing later when
      the pages are onlined. Modify the check to make sure we have actually
      found the region we were searching for while iterating. Fix the same
      check in pfn_covered() as well.
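
      A minimal userspace sketch of the corrected lookup, for illustration
      only: the struct layout, the example region list and the helper name
      below are assumptions, not the driver's own code; only the two-sided
      bounds check mirrors the change described above.

      /*
       * Illustration (userspace): find the HA region covering start_pfn when
       * the region list is NOT sorted. Only the two-sided bounds check below
       * corresponds to the change described above.
       */
      #include <stddef.h>
      #include <stdio.h>

      struct ha_region {
              unsigned long start_pfn;
              unsigned long end_pfn;          /* exclusive */
      };

      /* Regions in the order a WS2016 host might hot-add them: unsorted. */
      static struct ha_region regions[] = {
              { .start_pfn = 0x40000, .end_pfn = 0x50000 },
              { .start_pfn = 0x10000, .end_pfn = 0x20000 },
      };

      static struct ha_region *find_region(unsigned long start_pfn)
      {
              for (size_t i = 0; i < sizeof(regions) / sizeof(regions[0]); i++) {
                      struct ha_region *has = &regions[i];

                      /*
                       * The old 'start_pfn >= has->end_pfn' test alone is only
                       * sufficient for a sorted list; checking both bounds
                       * finds the covering region regardless of order.
                       */
                      if (start_pfn < has->start_pfn || start_pfn >= has->end_pfn)
                              continue;
                      return has;
              }
              return NULL;
      }

      int main(void)
      {
              struct ha_region *has = find_region(0x12345);

              if (has)
                      printf("covered by region [%#lx, %#lx)\n",
                             has->start_pfn, has->end_pfn);
              return 0;
      }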
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    •
      Drivers: hv: vmbus: handle various crash scenarios · cd95aad5
      Vitaly Kuznetsov authored
      Kdump keeps biting. It turns out that CHANNELMSG_UNLOAD_RESPONSE is
      always delivered either to the CPU which was used for the initial
      contact or to CPU0, depending on the host version.
      vmbus_wait_for_unload() doesn't account for the fact that if we're
      crashing on some other CPU we won't get the CHANNELMSG_UNLOAD_RESPONSE
      message, so our wait on the current CPU will never end.
      
      Do the following:
      1) Check for completion_done() in the loop. If the interrupt handler is
         still alive, we'll get the confirmation we need.
      
      2) Read the message pages of all CPUs, as we're unsure where
         CHANNELMSG_UNLOAD_RESPONSE is going to be delivered. We can race with
         a still-alive interrupt handler doing the same, so add cmpxchg() to
         vmbus_signal_eom() so that the CHANNELMSG_UNLOAD_RESPONSE message is
         not lost (see the sketch after this list).
      
      3) Clean up the message pages on all CPUs. This is required (at least
         for the current CPU, as we're clearing CPU0's messages now, but we
         may want to bring up additional CPUs on crash) because new messages
         won't be delivered until we consume what's pending. On boot we'll
         place the message pages somewhere else, so we won't be able to read
         the stale messages.
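
      A minimal userspace sketch of point 2, for illustration only: the slot
      array, the constant values and the helper name stand in for the per-CPU
      message pages and are assumptions, not the vmbus code itself; only the
      "consume with cmpxchg() so a racing handler can't make us lose the
      message" idea follows the description above.

      /*
       * Illustration (userspace): consume CHANNELMSG_UNLOAD_RESPONSE from an
       * arbitrary CPU's message slot with a compare-and-swap, so a racing
       * handler clearing the same slot cannot make us lose the message.
       */
      #include <stdatomic.h>
      #include <stdio.h>

      #define HVMSG_NONE                 0
      #define CHANNELMSG_UNLOAD_RESPONSE 17   /* placeholder value */
      #define NR_CPUS                    4

      /* One "message type" slot per CPU, standing in for the message pages. */
      static _Atomic unsigned int msg_slot[NR_CPUS];

      static int consume_unload_response(int cpu)
      {
              unsigned int expected = CHANNELMSG_UNLOAD_RESPONSE;

              /* Only the party that swaps the slot to HVMSG_NONE owns it. */
              return atomic_compare_exchange_strong(&msg_slot[cpu], &expected,
                                                    HVMSG_NONE);
      }

      int main(void)
      {
              /* Pretend the host delivered the response to CPU 2. */
              atomic_store(&msg_slot[2], CHANNELMSG_UNLOAD_RESPONSE);

              /* Crash path: scan every CPU's slot, we don't know the target. */
              for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                      if (consume_unload_response(cpu)) {
                              printf("UNLOAD_RESPONSE consumed on CPU %d\n", cpu);
                              break;
                      }
              }
              return 0;
      }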
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    •
      Drivers: hv: kvp: fix IP Failover · 4dbfc2e6
      Vitaly Kuznetsov authored
      Hyper-V VMs can be replicated to other hosts, and there is a feature to
      set a different IP address for the replica; it is called 'Failover
      TCP/IP'. When such a guest starts, the Hyper-V host sends it a
      KVP_OP_SET_IP_INFO message as soon as we finish the negotiation
      procedure. The problem is that this can happen (and it actually does
      happen) before the userspace daemon connects, in which case we reply to
      the message with HV_E_FAIL. As the message is not re-sent, we fail to
      set the requested IP.
      
      Solve the issue by postponing our reply to the negotiation message until
      the userspace daemon has connected. We can't wait too long, as there is
      a host-side timeout (about 75 seconds) and if we fail to reply within
      that time frame the whole KVP service becomes inactive. The solution is
      not ideal: if it takes the userspace daemon more than 60 seconds to
      connect, IP Failover will still fail, but I don't see a better option
      with the current separation between the kernel and userspace parts.
      
      The other two modules (VSS and FCOPY) don't require such a delay, so
      leave them untouched.
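
      A minimal userspace sketch of the "postpone the reply, but never past
      the host-side timeout" idea, for illustration only: the 60-second limit
      comes from the text above, while the thread, the condition variable and
      the names are assumptions, not the KVP driver's actual mechanism.

      /*
       * Illustration (userspace): hold back a reply until a daemon shows up,
       * but never longer than a deadline that stays under the host timeout.
       * Build with: cc -pthread
       */
      #include <pthread.h>
      #include <stdbool.h>
      #include <stdio.h>
      #include <time.h>
      #include <unistd.h>

      #define NEGO_REPLY_TIMEOUT_SEC 60       /* stays under the ~75 s host limit */

      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t  daemon_up = PTHREAD_COND_INITIALIZER;
      static bool daemon_registered;

      /* Stand-in for the userspace KVP daemon connecting a bit later. */
      static void *daemon_connect(void *arg)
      {
              (void)arg;
              sleep(3);
              pthread_mutex_lock(&lock);
              daemon_registered = true;
              pthread_cond_signal(&daemon_up);
              pthread_mutex_unlock(&lock);
              return NULL;
      }

      int main(void)
      {
              struct timespec deadline;
              pthread_t t;
              int ret = 0;

              pthread_create(&t, NULL, daemon_connect, NULL);

              clock_gettime(CLOCK_REALTIME, &deadline);
              deadline.tv_sec += NEGO_REPLY_TIMEOUT_SEC;

              /* Postpone the negotiation reply until the daemon is there, or
               * the deadline hits; replying too late would make the host
               * declare the whole service inactive. */
              pthread_mutex_lock(&lock);
              while (!daemon_registered && ret == 0)
                      ret = pthread_cond_timedwait(&daemon_up, &lock, &deadline);
              pthread_mutex_unlock(&lock);

              printf(daemon_registered ? "daemon connected, reply now\n"
                                       : "timed out, reply before the host gives up\n");
              pthread_join(t, NULL);
              return 0;
      }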
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>