Protecting NetApp HCI with Veeam

In this post we will look at how Veeam Availability Suite™ protects NetApp HCI virtual
machines. The diagram below shows the lab environment, which includes a vCenter Server, a Veeam backup server, and NetApp HCI.

First we log in to vCenter Server, where we can see that NetApp SolidFire Configuration and NetApp SolidFire Management are enabled on the vCenter homepage.

Go to NetApp SolidFire Management and choose Datastore on the Management tab to see the SolidFire datastore. In this demo we will back up one MSSQL virtual machine running on this datastore with Veeam Availability Suite.

Now we log in to the Veeam Backup & Replication console.

Click Backup Infrastructure to see the two managed components: the vCenter Server and the Veeam backup server.

Now we can create a new backup job to protect this MSSQL virtual machine.

When the backup job completes, we can see the backup copy of the MSSQL virtual machine stored on the backup disk volume.

In this simple demo we can see how easily Veeam Availability Suite protects NetApp HCI virtual machines. We can also choose from three recovery options for a virtual machine: “Instant VM Recovery”, “Explorer Recovery”, and “Full VM Recovery”.


VMware vSAN 6.7 Availability

“Fault domain” is a term that comes up often in availability discussions. In IT, a fault
domain usually refers to a group of servers, storage, and/or networking components
that would be impacted collectively by an outage. A common example of this is a server
rack. If a top-of-rack switch or the power distribution unit for a server rack would fail, it
would take all the servers in that rack offline even though the server hardware is
functioning properly. That server rack is considered a fault domain.

Each host in a vSAN cluster is an implicit fault domain. vSAN automatically distributes
components of a vSAN object across fault domains in a cluster based on the Number of
Failures to Tolerate rule in the assigned storage policy.

When determining how many hosts or Fault Domains a cluster should contain, it is important to remember the following:

• For vSAN objects that will be protected with Mirroring, there must be 2n+1 hosts or Fault Domains for the level of protection chosen.

  • Protecting from 1 Failure would require (2×1+1) or 3 hosts.
  • Protecting from 2 Failures would require (2×2+1) or 5 hosts.
  • Protecting from 3 Failures would require (2×3+1) or 7 hosts.

• For vSAN objects that will be protected with Erasure Coding, there must be 2n+2 hosts or Fault Domains for the level of protection chosen.

  • RAID5 (3+1) requires (2×1+2) or 4 hosts.
  • RAID6 (4+2) requires (2×2+2) or 6 hosts.
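The host-count rules above can be sketched as a small helper function (an illustrative Python snippet, not a VMware tool):

```python
def min_hosts(failures_to_tolerate, method="mirroring"):
    """Minimum vSAN hosts or fault domains for a given FTT level.

    Mirroring (RAID-1) requires 2n+1 hosts; erasure coding
    (RAID-5/RAID-6) requires 2n+2 hosts.
    """
    n = failures_to_tolerate
    if method == "mirroring":
        return 2 * n + 1
    if method == "erasure":
        # vSAN erasure coding supports only RAID-5 (FTT=1) and RAID-6 (FTT=2)
        if n not in (1, 2):
            raise ValueError("erasure coding supports FTT=1 or FTT=2 only")
        return 2 * n + 2
    raise ValueError("unknown protection method")

print(min_hosts(1))              # 3 hosts for FTT=1 mirroring
print(min_hosts(2))              # 5 hosts for FTT=2 mirroring
print(min_hosts(1, "erasure"))   # 4 hosts for RAID-5 (3+1)
print(min_hosts(2, "erasure"))   # 6 hosts for RAID-6 (4+2)
```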

Also consider that the loss of a Fault Domain, or hosts when Fault Domains are not configured, could result in no location to immediately rebuild to. VMware recommends having an additional host or Fault Domain to provide for the ability to rebuild in the event of a failure.


To mitigate this risk, we can place the servers in a vSAN cluster across server racks and
configure a fault domain for each rack in the vCenter\vSAN UI. This instructs vSAN to
distribute components across server racks to eliminate the risk of a rack failure taking
multiple objects offline. This feature is commonly referred to as “Rack Awareness”. The
screenshot shows component placement when three servers in each rack are configured
as separate vSAN fault domains.
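Conceptually, rack-aware placement means each component of a mirrored object lands in a distinct fault domain. The following is a simplified sketch of that idea (illustrative only, not vSAN's actual placement algorithm; the rack names are hypothetical):

```python
def place_components(racks, data_copies=2):
    """Assign each component of a mirrored vSAN object to a distinct rack.

    With FTT=1 mirroring there are two data components and one witness,
    each of which must land in a different fault domain (rack).
    """
    needed = data_copies + 1  # data components plus the witness
    if len(racks) < needed:
        raise ValueError(f"need at least {needed} fault domains")
    roles = ["data"] * data_copies + ["witness"]
    return dict(zip(racks, roles))

print(place_components(["Rack-A", "Rack-B", "Rack-C"]))
# {'Rack-A': 'data', 'Rack-B': 'data', 'Rack-C': 'witness'}
```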

Create Fault Domains

My technical books

I am delighted to have reached my five-year goal of publishing three technical books related to different vendors, e.g. Dell EMC, Cisco, and VMware.

2018 – Storage Migration – Hybrid Array to All-Flash Array
2016 – Cisco UCS Cookbook
2015 – Mastering VMware vSphere Storage

VMware vSphere Platinum

  • World’s Most Secure Compute Platform for All Workloads.
  • Secures infrastructure and applications leveraging the hypervisor and the power of machine learning.
  • Built-in, operationally simple, with minimal overhead or impact on performance.
  • Focus on the ‘known good’ state – apps run as they should.
  • Enables collaboration amongst vSphere Admins and Security, Compliance and Application teams.

What’s new – vSphere 6.7 Update 1

The fully featured HTML5-based vSphere Client now covers:

  • vCenter High Availability
  • Top-N charts
  • Search
  • Alarms
  • vSphere Update Manager (VUM)
  • Key Management Servers
  • System Configuration
  • Complete remaining advanced workflows for:
    Content Library
    Deploy OVF
  • vCenter Server Extensions

My tech article – “Storage Migration – Hybrid Array to All-Flash Array”

I am pleased to announce that my new tech article “Storage Migration – Hybrid Array to All-Flash Array” has been published on the Dell EMC Education Website. This article discusses how to migrate data from Dell EMC VNX storage to Dell EMC Unity All-Flash storage. A sample environment is presented that includes the Microsoft Windows platform and VMware hypervisor running on the Cisco Unified Computing System (UCS). Determining the best migration methodology and how to prepare for data migration is also discussed. Here is my article download link.

What’s new in Dell EMC PowerPath/VE 6.3

Dell EMC PowerPath is host based software that provides automated data path management and load balancing capabilities for heterogeneous server, network, and storage deployed in physical and virtual environments. It enables you to meet your aggressive service levels with the highest application availability and performance, and
with all the advantages of the industry’s leading information storage systems. The Dell EMC PowerPath family includes PowerPath Multipathing for physical environments, as well as Linux, AIX, and Solaris virtual environments, and PowerPath/VE Multipathing for
VMware vSphere and Microsoft Hyper-V virtual environments.

What’s new in PPVE 6.3:

  • Load balancing of XCOPY SCSI commands
    Load balancing XCOPY command optimizes VM operations like cloning
    One storage port is not loaded with all XCOPY commands
    XCOPY load balancing improves performance
  • Support for new storage array/version
    EMC Unity Storage System
    Support for EMC Unity storage systems running Operating Environment 4.2.x, 4.3.x,
    and 4.4.x.
    XtremIO Firmware Support
    Support for XtremIO X1 running firmware version 4.0.25-27 with LUNs having sector
    size of 512 bytes and XtremIO X2 running firmware version 6.0.1 with LUNs having
    sector size of 512 bytes.
    VPLEX GeoSynchrony Support
    Support for VPLEX GeoSynchrony 6.0.1 P07.
    Dell SC Series Array Support
    Support for Dell SC arrays running SCOS 6.7.x and SCOS 7.2.x.
    VNX/VNXe OE Firmware Support
    Support for VNX /VNXe Block running OE 05.33.x and below.
  • VMware vSphere ESXi Support
    Support for VMware vSphere ESXi 6.5 U1 and ESXi 6.7.

Conceptual Micro-Segmentation and Security Design with VDS

  • Foundational micro-segmentation can be deployed with the VMware vSphere Distributed Switch (VDS) without full-stack NSX components, such as the distributed logical router (DLR), NSX Edge services gateways, and others. An NSX Manager VM is the only VM that needs to be deployed to implement the Distributed Firewall (DFW), which is non-disruptive.
  • This model can be deployed in an existing vSphere environment without changing the underlying network topology or physical network configuration, like MTU or routing, and allows applications to be segmented based on their security requirements and not on physical constraints like IP addresses or VLAN.
  • A physical router is required and provides a default gateway to the workload VMs.
  • Optionally, this model can include an NSX Edge firewall. In this case, the default gateway for the workload VMs will be the internal interface of the edge services gateway.
  • The following figure shows the conceptual design of the distributed firewall in a vSphere Distributed Switch environment with NSX Edge firewall.
