RecoverPoint for Virtual Machines (RPVM) provides local and remote replication combined with continuous data protection, enabling per-VM recovery to any point in time. It supports both virtual disk types: VMDKs and RDMs. The above diagram shows the architectural components, which include a VMware vCenter plug-in, a RecoverPoint write splitter embedded in the vSphere hypervisor, and RecoverPoint virtual appliances, all fully integrated into a VMware ESXi server environment.

The splitter is installed on each ESX node that hosts VMs you want to protect and replicate. The splitters communicate with the RecoverPoint virtual appliances (vRPAs), which enable either local replication within the same ESX cluster or replication to a remote site that hosts a replica ESX environment. The splitter intercepts each write I/O destined for a VM's VMDK or RDM and sends one copy to the production VMDK and another to the RecoverPoint for VMs cluster.
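The write-splitting behavior described above can be sketched in a few lines. This is a conceptual illustration only, not the actual RecoverPoint splitter (which runs in the ESXi hypervisor I/O path); the class and method names here are hypothetical.

```python
class ProductionDisk:
    """Stands in for a VM's production VMDK/RDM."""
    def __init__(self):
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data


class ReplicationTarget:
    """Stands in for the vRPA cluster that receives the split copy."""
    def __init__(self):
        self.received = []

    def send(self, lba, data):
        self.received.append((lba, data))


class WriteSplitter:
    """Duplicates every write I/O: one copy goes to the production
    disk, one to the replication target."""
    def __init__(self, disk, target):
        self.disk = disk
        self.target = target

    def write(self, lba, data):
        self.disk.write(lba, data)   # production write proceeds normally
        self.target.send(lba, data)  # split copy goes to the vRPA cluster


disk, target = ProductionDisk(), ReplicationTarget()
splitter = WriteSplitter(disk, target)
splitter.write(0, b"hello")
assert disk.blocks[0] == b"hello"
assert target.received == [(0, b"hello")]
```

The key property shown is that the guest's write path is unchanged: the production write completes as usual, while the replication copy travels out of band to the vRPAs.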

The RecoverPoint virtual appliance does not need to reside on the same ESX node or cluster as the protected VMs, and there is no requirement to install a virtual appliance on every ESX node that hosts protected VMs. This flexibility gives RecoverPoint for VMs a competitive advantage over other products. RecoverPoint for VMs can either scale up (with 2 to 8 vRPAs per cluster) or scale out (by adding vRPA clusters). Release 5.0 supports up to 50 vRPA clusters per vCenter, enabling protection of up to 5,000 VMs per vCenter with per-VM granularity.

RPVM uses a journal-based implementation to store all the changes made to the protected VMs. Local protection provides a DVR-like ability to roll back to any point in time, even to the last I/O transaction or to just seconds before a data corruption occurred.
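The journal mechanism can be illustrated with a minimal sketch: every write is recorded with a timestamp, so the disk image at any past moment can be reconstructed by replaying the journal up to that point. This is illustrative only and does not reflect RecoverPoint's actual journal format; all names are hypothetical.

```python
class Journal:
    """Toy continuous-data-protection journal: records every write in
    time order and can rebuild the disk image at any point in time."""
    def __init__(self):
        self.entries = []  # (timestamp, lba, data), appended in time order

    def record(self, ts, lba, data):
        self.entries.append((ts, lba, data))

    def image_at(self, ts):
        """Replay all writes up to and including ts to rebuild the disk."""
        image = {}
        for t, lba, data in self.entries:
            if t > ts:
                break
            image[lba] = data
        return image


j = Journal()
j.record(1, 0, b"good data")
j.record(2, 0, b"corrupted")
# Roll back to just before the corrupting write landed:
assert j.image_at(1)[0] == b"good data"
assert j.image_at(2)[0] == b"corrupted"
```

Because every transaction is journaled rather than only periodic snapshots, the recovery granularity is a single I/O, which is what enables the "seconds before the corruption" rollback described above.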

The vCenter plug-in is installed on the vCenter Server and accessed via the vSphere Web Client, which ESXi administrators use to configure and manage data protection for their VMs.


RecoverPoint for VMs supports both synchronous and asynchronous replication:

  • Any copy can be made available as read/write
  • Changes to the copy can be incrementally reapplied to the primary on failback
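The incremental failback in the second bullet can be sketched as a delta resync: instead of copying the whole image back, only the blocks that changed on the copy are reapplied to the primary. This is a conceptual sketch with hypothetical helper names, not RecoverPoint's actual resync protocol.

```python
def changed_blocks(copy, primary):
    """Return only the blocks that differ between the promoted copy and
    the stale primary (block maps are simple {lba: bytes} dicts)."""
    return {lba: data for lba, data in copy.items()
            if primary.get(lba) != data}


def incremental_failback(primary, copy):
    """Reapply only the delta to the primary rather than a full copy;
    returns the number of blocks that had to be shipped back."""
    delta = changed_blocks(copy, primary)
    primary.update(delta)
    return len(delta)


primary = {0: b"a", 1: b"b"}
copy = {0: b"a", 1: b"B", 2: b"c"}  # one changed block, one new block
shipped = incremental_failback(primary, copy)
assert shipped == 2
assert primary == {0: b"a", 1: b"B", 2: b"c"}
```

Shipping only the delta is what makes failback practical over a WAN link after a period of running at the copy site.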