VxRail 4.0 – Scale Out

The above is the physical diagram of the VxRail cluster (3 nodes). In this post I will show how to add one VxRail Appliance to this cluster (from 3 nodes to 4 nodes).

NOTE: The model of each VxRail Appliance is E460.

Before the node expansion, verify that each appliance is healthy in the VxRail Manager dashboard.

The above is the final physical diagram of the VxRail cluster after scale out. Now we start the node expansion. You have just mounted a new VxRail Appliance (E460) and cabled it up to the top-of-rack switches. When you power it on, a notification appears in the top left corner of the VxRail dashboard. Click “Add Node“.

When you initially configured your VxRail Appliance, you specified an IP pool for ESXi, vMotion, and vSAN. You can see that there are available IP addresses in these pools, so the only additional action is to set an ESXi password. Scroll down and click “ESXi Password“. Enter the ESXi and vCenter Server passwords, then click “Next“.
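The check VxRail Manager performs here is conceptually simple: each pool must still have at least one free address for the incoming node. A purely illustrative Python sketch of that idea (the pool ranges and usage counts below are made up, and this is not VxRail's internal logic):

```python
# Purely illustrative: verify each IP pool still has a free address for one new node.
# Pool ranges and "used" counts are made-up examples, not values from this cluster.
from ipaddress import ip_address

pools = {
    "ESXi":    {"range": ("192.168.10.11", "192.168.10.20"), "used": 3},
    "vMotion": {"range": ("192.168.20.11", "192.168.20.20"), "used": 3},
    "vSAN":    {"range": ("192.168.30.11", "192.168.30.20"), "used": 3},
}

for name, pool in pools.items():
    first, last = (ip_address(a) for a in pool["range"])
    size = int(last) - int(first) + 1          # addresses in the pool
    free = size - pool["used"]
    print(f"{name}: {free} free of {size} -> {'OK' if free >= 1 else 'EXPAND POOL'}")
```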

Click the checkbox to confirm that your new VxRail Appliance has been provisioned with the appropriate IP addresses and hostnames, then click “Next“.

VxRail Manager has validated the ESXi, vSAN and vMotion IP addresses, and you are now ready to build your new VxRail Appliance. Click “Build“.

You can monitor the progress of the expansion in the dashboard.

Now your new VxRail Appliance has been built and the cluster has been expanded. Click the “Health” tab to examine the cluster and verify the new node you just added.

You can see that your cluster has now been extended by a single node (from 3 nodes to 4 nodes). This screen provides metrics for the cluster and individual nodes, including IOPS, CPU and memory usage, and storage utilization. Click the new node “DW7LHB200000000“ to see the “Logical“ information specific to the new appliance you just added to the cluster.

You can also see the “Physical“ information specific to the new appliance you just added to the cluster. There are now four Dell PowerEdge-based nodes (E460) showing in this view, including the new node “DW7LHB200000000“. This completes the scale out of the VxRail 4.0 Appliance.


Demo – Upgrade to VxRail 4.0

Upgrading VxRail’s software components is done through the Configuration menu. Click the “Configuration” button to proceed. Before updating the software components installed on your VxRail Appliance, you must first update VxRail Manager itself. Click “Update“, then select “Upload local version“ from the dropdown menu to start this process.

Since you have already downloaded the VxRail Manager 4.0 installation file from the Dell EMC support site, you can now upload it from your local drive. Select the file and then click “Open“.

Once the upload process is completed, click “Install“.

Before VxRail Manager is upgraded, the system performs a pre-check to ensure the appliance is ready for the update. Once the pre-check completes, click “Upgrade NOW“.

Congratulations! VxRail Manager has been successfully updated to 4.0. Click “Close“.

Then you need to log in to VxRail Manager again. Enter the login credentials, then click “Authenticate“.

Now that VxRail Manager has been updated, the rest of the components of your VxRail cluster can be upgraded.

This demo assumes you have already downloaded the installation bundle. Click “Internet Upgrade“, then select “Local Upgrade“ from the dropdown menu.

Select the installation file “VXRAIL_COMPOSITE-4.0.04631185” from your local drive, then click “Open“.

Before installing, VxRail will perform a readiness check to make sure the status is green across all the software components to be upgraded. Once the readiness check completes, click “Continue“ to begin the installation.

The software upgrade is now complete. During the upgrade, VxRail performs a series of automated post-checks and upgrade hooks.

Click “Details” for more information, or click “Refresh” to update your screen.

To view the installed version numbers of the components that make up VxRail, click the “Installed Versions“ link.

Reference Demo


Upgrading the Operating Environment (OE) 4.1.0 to 4.1.1 on EMC UnityVSA

First, perform a system health check. A health check is a series of checks on the state
of your storage system. Performing a system health check helps ensure that no
underlying problems exist that may prevent a successful update. Then obtain and
install software updates.

The Unity storage system can have the following types of software updates:

  • Operating Environment (OE) software (also called Unisphere)
  • Disk firmware
  • Language packs

Before the Operating Environment upgrade, you need to download the Unity OE upgrade file UnityVSA-4.1.1.9138882.tgz.bin.gpg from https://support.emc.com. Then log in to Unisphere as Administrator.

1. Select the Settings icon (in the top right of the page), and then select
Software and Licenses > Software Updates.

2. Select Perform Health Checks, then click Next.

3. After the health check passes successfully, select Start Upgrade. Then browse to the software file and click Next to start the upgrade.

4. Decide whether you want the storage processors to reboot automatically during the upgrade. Then click Next.

NOTE: The default option during a software upgrade is to automatically reboot both storage processors, one at a time, as soon as the software upgrade image is staged and the system is prepared for the upgrade. If you want tighter control over when the reboots happen, you can clear this option so that the upgrade can be started and staged, but neither storage processor will reboot until you are ready.

5. Review the planned upgrade and select Finish.

6. You can see the software upgrade progress.

7. As expected during the software upgrade, the management connection to the system is temporarily lost. The connection will be restored automatically when the storage system becomes available. Do not reboot the storage processors.

8. After the OE upgrade is completed successfully, click “Reload Unisphere” to reload Unisphere.

9. When you log in to Unisphere again, you can see that the software version is 4.1.1.9138882 under Settings. This completes the software upgrade.
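If you want to confirm the new version without the GUI, the Unity REST API exposes a basicSystemInfo resource that reports the running software version. A minimal sketch (the management IP below is a placeholder; verify the endpoint against your array's REST API documentation):

```python
# Minimal sketch: read the running OE version over the Unity REST API.
# The management IP is a placeholder; basicSystemInfo is readable without logging in.
import requests

UNITY = "https://192.168.1.50"   # UnityVSA management address (placeholder)

resp = requests.get(
    f"{UNITY}/api/types/basicSystemInfo/instances",
    params={"fields": "name,model,softwareVersion,apiVersion"},
    headers={"X-EMC-REST-CLIENT": "true"},
    verify=False,                # lab only; use the array's CA certificate in production
)
resp.raise_for_status()
for entry in resp.json()["entries"]:
    info = entry["content"]
    print(f'{info["name"]} ({info["model"]}): OE {info["softwareVersion"]}')
```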


Unisphere Central v4.0 SP3 Upgrade

Unisphere Central is a virtual appliance that enables administrators to remotely
monitor the status, activity, and resources of the storage system available on your
network. The Unisphere Central server runs within a VMware virtual environment that
includes at least one ESX or ESXi host. The Unisphere Central server obtains
aggregated status, alert, capacity, and performance information from all the systems
Unisphere Central is monitoring.

The Unisphere Central environment consists of a Unisphere Central server running on an ESX/ESXi server (standalone or through vCenter), VNXe, VNX, CX4, vVNX, Unity and UnityVSA storage systems, and a remote system to access the Unisphere Central server.

Unisphere Central version 4.0.3 has been updated to support the following builds of
previously-supported products:

  • Support to monitor Unity and UnityVSA storage systems running version 4.1.0.
  • Support to monitor VNXe storage systems running VNXe3200 version 3.1.5 and VNXe1600 version 3.1.7.
  • Support to monitor VNX2 series storage systems running VNX OE for Block
    05.33.009.5.155 and VNX OE for File 8.1.9.155 and VNX OE for Block
    05.33.009.5.184 and VNX OE for File 8.1.9.184.
  • Support to monitor CX4 storage systems running OE version 04.30.000.5.529.

Now we start to upgrade Unisphere Central v4 SP2 to SP3.

First, download the upgrade patch “Unisphere_Central–upgrade-3.0.0.21364-4.0.3.22651-RETAIL.tgz.bin.gpg” from https://support.emc.com. Go to Settings in Unisphere Central and click Upload Candidate to upload that patch.

Click Start Upload.

After the upload finishes, click Install Candidate to start the upgrade.

Before starting this upgrade, it is highly recommended that you create a snapshot of your Unisphere Central VM using VMware snapshots. The upgrade starts when you click Yes.

After the upgrade finishes successfully, Unisphere Central will reboot automatically.

After the reboot completes, restart the browser and log in to Unisphere Central again. You can see that Unisphere Central now displays version 4.0.3.


EMC VxRail Overview

The hyper-converged VxRail Appliance features a clustered node architecture that consolidates compute, storage, and management into a single, resilient, network-ready HCI unit. The software-defined architecture converges server and storage resources, allowing a scale-out, building-block approach, and each appliance carries management as an integral component. From a hardware perspective, the VxRail node is a server with integrated direct-attached storage. No external network components are included with the appliance; VxRail leaves that up to the customer (although VCE can bundle switch hardware, and NSX can function as an integrated option for SDN). This allows VxRail to integrate seamlessly into the existing network infrastructure, preserving existing investment in network infrastructure, processes, and training. Organizations benefit from the simplicity of the appliance architecture, which expedites application deployment while providing the same data services expected from high-end systems.

  • The Appliance architecture design center is “simple.” VxRail is simple to acquire, deploy, operate, scale, and maintain.
  • The Appliance system-level architecture uses SDS and multi-node servers with integrated storage and can leverage whatever network infrastructure is available. Appliance architecture provides low-cost and low-capacity entry points with simple configurations that can easily scale.
  • Appliance-architecture workload and business requirements focus on simplicity and the ability to start small and grow easily. VDI and productivity applications are examples of the initial workloads deployed in appliances.

When you combine these technologies you get hyper-converged infrastructure, which integrates compute, software-defined storage, networking, and virtualization into a single building block for the data center. It enables compute, storage, and networking functions to be decoupled from the underlying infrastructure and run on a common set of physical resources that are based on industry-standard x86 components.

ARCHITECTURE – VXRAIL VS. VSA

Many hyper-converged storage solutions require the installation of a virtual storage appliance on each host. In the case of VSAN, however, because it is embedded in the ESXi kernel, all of the Virtual SAN intelligence is already built into the hypervisor and there are no additional components to install.

Because it is embedded in the hypervisor, VSAN provides the shortest path for I/O, making storage operations optimally efficient without consuming CPU resources unnecessarily. Even during maintenance operations and VM migrations, storage operations are handled seamlessly.

VxRail uses VSAN to provide the software-defined storage layer. There are several benefits to VSAN, but the two main reasons it is ideal for a hyper-converged infrastructure appliance are as follows:

1. Kernel Integration – The software controlling the storage is integrated into the hypervisor. Why is this so important? The alternative to kernel integration is a Virtual Storage Appliance (VSA), where the storage system runs separately as a guest virtual machine (VM) on top of the hypervisor rather than being part of it. The issue with these implementations is that the VSA VMs contend for resources with the workloads they are supporting. The VSA requires a lot of tuning and balancing to offset I/O contention so that workloads can perform optimally, which demands expertise and can be a time-consuming and challenging exercise.

2. Storage Policy Based Management (SPBM) – When deciding to adopt a software-defined data center (SDDC) approach, it is very important that any platform you consider can enforce top-down policies across the various virtual layers (compute, storage, network) and provide control over the service objectives in all of those layers. VSAN enables storage policies to be set at the kernel layer with seamless integration into the management and orchestration layers above, such as the vRealize Suite or third-party tools, so that when services are rendered, all of the associated services and their attributes cascade throughout the infrastructure. SPBM provides top-to-bottom control of how workloads perform in the appliance.

  • Technical advantages: VSAN code is in the vSphere kernel
  • No need to install Virtual Storage Appliances (VSA)
  • CPU utilization <10%
  • No reserved memory required
  • Provides the shortest path for I/O
  • Seamlessly handles VM migrations
  • Operational advantages: VSAN is built with and for vSphere Storage Policy Based Management
  • No new management console
  • No planning out or carving up disk pools
  • Virtual SAN self-tunes to keep policy compliance

VSAN is integrated into the kernel of vSphere

  • More efficient and performant than a Virtual Storage Appliance

Storage Policy Based Management

  • Storage policies are directly embedded onto storage objects
  • Data protection, Performance/QoS, Data Reduction
  • Policies follow your virtual machine
  • Seamlessly integrated with vRealize
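One practical consequence of the kernel integration is that a vSAN cluster simply presents a normal datastore to vSphere, so it can be inspected with the standard vSphere API rather than through a separate VSA console. A minimal pyVmomi sketch (the vCenter address and credentials are placeholders, not values from this environment):

```python
# Illustrative only: list the datastores vCenter sees and their types/capacity.
# vSAN-backed datastores report type "vsan" here.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only; use valid certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        print(f"{s.name}: type={s.type}, "
              f"capacity={s.capacity / 2**30:.0f} GiB, free={s.freeSpace / 2**30:.0f} GiB")
finally:
    Disconnect(si)
```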

VxRail Appliance software is customer upgradeable via a fully automated and validated process as of VxRail 4.0 software. The software upgrade is initiated via download from VxRail Manager, and it automatically downloads all software ready to be updated, including the VxRail Manager VM, vCenter Server and PSC, ESXi hosts, and ESRS. The automated process consists of four steps: download of the VxRail software, a readiness check, the actual update of the software, and finally, validation and upgrade post-checks. The final validation step ensures the upgrade was successful and the VxRail Appliance is fully functional at the new, upgraded version of software. The diagram below shows the four automated steps of a customer-executed VxRail Appliance software upgrade. Step 3 is performed one node at a time, where the ESXi host is placed in maintenance mode and, using vMotion, the VMs are moved to other nodes, making the upgrade process non-disruptive. To do this automatically, the vSphere Enterprise Plus edition is required. If the VxRail Appliance is not currently running VxRail 3.5, or is using a vSphere Standard Edition license, services are required to perform the upgrade. (It is best to verify with Dell EMC services that an environment meets the requirements prior to an upgrade.)
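VxRail Manager automates step 3 end to end. Purely to illustrate the rolling pattern it follows (this is not VxRail's actual upgrade code), a pyVmomi sketch that cycles one host at a time through maintenance mode, letting fully automated DRS vMotion the VMs away, might look like this (vCenter address, credentials, and cluster name are placeholders):

```python
# Illustrative sketch of the rolling, one-node-at-a-time pattern that VxRail automates.
# vCenter address, credentials, and cluster name are placeholders; DRS must be in fully
# automated mode for the VMs to be evacuated without manual vMotion.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "VxRail-Cluster")
    for host in cluster.host:
        print(f"Entering maintenance mode on {host.name} (DRS evacuates the VMs)")
        # On vSAN clusters, a maintenanceSpec can be passed to control the vSAN
        # decommission mode; it is omitted here to keep the sketch short.
        WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))
        # ... the per-node software update would happen at this point ...
        print(f"Exiting maintenance mode on {host.name}")
        WaitForTask(host.ExitMaintenanceMode_Task(timeout=0))
finally:
    Disconnect(si)
```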

Each VxRail Appliance includes the following features:

  • VMware vSphere Replication
  • Dell EMC RecoverPoint for Virtual Machines
  • Integrated Backup and Recovery with vSphere Data Protection (VDP)
  • VMware Stretched Cluster

EMC VxRail hyper-converged infrastructure appliances installation
https://wuchikin.wordpress.com/2016/11/27/emc-vxrail-hyper-converged-infrastructure-appliances-installation/

VxRail Cluster Expansion – Scaling Out
https://wuchikin.wordpress.com/2016/11/27/vxrail-cluster-expansion-scaling-out/

EMC RecoverPoint for Virtual Machine Overview
https://wuchikin.wordpress.com/2017/03/04/emc-recoverpoint-for-virtual-machine-overview/

EMC RecoverPoint For VMs 5.0 Deployment
https://wuchikin.wordpress.com/2017/03/04/emc-recoverpoint-for-vms-5-0-deployment/

EMC vRPVM Cluster configuration
https://wuchikin.wordpress.com/2017/03/05/emc-vrpvm-cluster-configuration/


EMC vRPVM Cluster configuration

In the last post, we covered vRPA OVA deployment. In this post, you will use the Deployer tool to deploy the RecoverPoint for VMs cluster. The Deployer tool is integrated into the vRPAs that you deployed. It allows users to deploy RecoverPoint for VMs with guided pre-deployment validation, and it installs the Splitters and the plugin without any further intervention from the user.

1. Open a browser and enter the vRPA management IP address that was defined during the vRPA deployment. Click EMC RecoverPoint for VMs Deployer to get started.


2. You will see the RecoverPoint for VMs Deployer, with the available tasks that can be performed listed at the bottom. There are four primary sets of wizards:

  • Install a vRPA cluster
  • Connect newly installed clusters
  • Non-disruptive Upgrade
  • More actions: network modifications; adding, removing, and replacing vRPAs

Click Install a vRPA Cluster to get started.


3. You have the option to check whether the RecoverPoint for VMs version you are about to deploy is the latest recommended version. If you have EMC Online Support credentials, you can enter them, or if you have downloaded a newer version, you can provide it manually. Alternatively, you can continue by selecting Do not check version requirements to proceed with the deployment. Click Next to continue.

Note: There are Import and Export options at the top right corner of the screen. These can be used to export the cluster configuration in JSON format or import one in order to rapidly deploy clusters in the future. You can build the first cluster manually, then export its JSON, modify it, and import it to deploy additional clusters in a more automated way.
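As a purely hypothetical illustration of that export-modify-import workflow (the key names below are invented placeholders, not the real Deployer JSON schema; inspect an actual export to see the fields it uses):

```python
# Hypothetical sketch: adapt a Deployer JSON export for a second cluster.
# The key names ("clusterName", "managementIPv4") are placeholders; inspect a real
# export from your own Deployer to see the actual schema before editing it.
import json

with open("cluster1-export.json") as f:          # file exported from Deployer
    config = json.load(f)

config["clusterName"] = "RP4VM-Cluster2"         # placeholder key names and values
config["managementIPv4"] = "192.168.1.120"

with open("cluster2-import.json", "w") as f:     # file to import for the next cluster
    json.dump(config, f, indent=2)
```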


4. The Deployer tool will attempt to automatically pre-populate the IP address of vCenter and the default port of 443. If those are different, you can update them manually. Specify the vCenter credentials, then click Connect to retrieve the certificate.


Deployer will retrieve the certificate from vCenter; review it and then click Confirm to proceed.


5. After connectivity with vCenter is established, the tool will run the pre-installation validation. It will print out a list of the tests performed and any possible issues. If the tool is able to address an issue automatically, it will; otherwise, it will print out instructions on what needs to be done manually. Once the environment is confirmed as ready for deployment, review the list of tests and click Next.


6. Enter the name of the RecoverPoint for VMs cluster. This can be any name you choose, but it is recommended to use a name that indicates the vCenter environment where the cluster is being deployed, and even a sequence number if there are multiple RecoverPoint for VMs clusters sharing the same vCenter environment.

Here you can also select how the vRPAs communicate with each other. There are two options: no authentication or encryption, which still uses the proprietary communication protocol, or authentication and encryption enabled. Authentication refers to key authentication among the appliances (not user authentication), and encryption refers to encrypting the data path for replicated data. In this configuration, select Not authenticated nor encrypted.

In General Settings, select the Time Zone, input the IP address of DNS Servers and NTP Servers.

By default, the vRPA from which you are accessing Deployer is selected (vRPA1). However, if you want to deploy a cluster with multiple vRPAs, you can select the additional vRPAs you deployed earlier here. Select Cluster1_vRPA2 to select the 2nd vRPA.
After the vRPAs that will participate in the cluster have been selected, click Retrieve Associated Datastores to select a datastore that can be used for the Repository volume, a 6GB VMDK that will be auto-provisioned by Deployer and used to save the cluster settings. Deployer will retrieve all available shared datastores that can be used to provision the 6GB Repository volume for the RecoverPoint for VMs cluster being deployed. Select your target datastore and click Next to proceed.


7. You will specify the Network adapters configuration and IP addresses for the cluster.

In the Cluster Management IPv4 field, type the RecoverPoint cluster management IP address.

In Network Adapters Configuration:
In RecoverPoint for VMs 5.0 and later, you can combine the LAN, WAN, and Data interfaces onto 1, 2, 3, or 4 network adapters. In this deployment, we will place LAN and WAN on the same interface and use a single separate interface for Data (iSCSI).
Note: All of those interfaces (fields) can be reduced to one.

In the Topology for WAN and LAN list box, select WAN and LAN on same network adapter

In the Topology for Data list box, select Data (iSCSI) on separate network adapter from WAN and LAN

In Network Mapping, these mappings will depend on the environment and the selection you made above in the Network Adapters Configuration. LAN mapping is dimmed out as it is based on the network chosen during the OVA deployment.

In Netmask Configuration, specify the gateways to use for LAN and WAN. The required one here is the LAN gateway. WAN gateways can be added now, or later when connecting to a remote cluster.


8. Now you will enter the IP addresses of the vRPAs. Depending on the number of vRPAs and the network adapter topology, the number of required interfaces, IPs, and fields can be greater or smaller; in this deployment, we are using a traditional 2-vRPA topology with separate NICs.

NOTE: If you use the Export & Import JSON file for the settings, you would not need to enter these fields manually. The file exported from one cluster can be updated with the appropriate settings and used for another cluster.

Advanced Settings allows you to adjust the MTU size for each network in use. The default is 1500, and in this deployment we will use the default. However, it is quite common to use jumbo frames on the WAN and Data networks. You are now ready to proceed; click Install to begin the deployment of the cluster.


9. The deployment will then begin. Deployer will apply the settings to the vRPAs, configure the repository VMDK, create the vRPA cluster, install the Splitters on the ESXi hosts, and push out the plugin.

Monitor the tasks during deployment until deployment has completed. This step can take approximately 15 minutes.


When complete, click Finish.


10. When deployment of the RecoverPoint cluster has completed, you will see the cluster in Deployer as shown in the following screen.


11. The last procedure in this deployment is to register the ESX cluster whose VMs will be protected with the RecoverPoint for VMs cluster. This step is necessary to ensure that communication between the vRPAs and the Splitters is established. Registered ESX clusters do not necessarily need to be the same ESX cluster where the vRPAs are deployed; they can be other ESX clusters to which the Splitters will be pushed out and whose VMs will be protected.

Log in to vCenter as Administrator, go to the Home screen, and click RecoverPoint for VMs.


12. To proceed with registering the ESX Cluster, click Administration.


Click ESX Clusters.  Click Add. Select your ESXi Cluster, and then click OK.

The ESX cluster registration is complete and the deployment of this RecoverPoint for VMs cluster is complete. Click Validate to ensure that the communication between Splitter(s) and vRPAs is correctly established.



EMC RecoverPoint For VMs 5.0 Deployment

In this post we will deploy two vRPAs using the RecoverPoint for VMs OVA provided with RecoverPoint for VMs 5.0 and later. Before the vRPA deployment, you need to download the vRPA OVA from the EMC support portal. After that, log in to vCenter 6; all tasks will be performed using the vCenter vSphere Web Client.

This demonstration will deploy RecoverPoint for VMs 5.0 in the following environment:
VMware vCenter 6.0 U2
VMware ESXi 6.0 U2
EMC RecoverPoint Virtual Appliance 5.0

The following diagram shows a high-level overview of one of the RPVM clusters in this environment.

NOTE: You must configure the Software iSCSI Adapter on each ESXi host and then bind VMKs before proceeding with installing RecoverPoint for VMs.
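If you prefer to script that prerequisite, the software iSCSI adapter can be enabled and a VMkernel port bound to it through the standard vSphere API. A hedged pyVmomi sketch (host name and vmk device are placeholders; the vmk's port group must have a compliant teaming policy, i.e. a single active uplink, for the binding to succeed):

```python
# Hedged sketch: enable the software iSCSI adapter on an ESXi host and bind a VMkernel
# port to it. Host name and vmk device ("vmk2") are placeholders for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi01.example.local")

    storage = host.configManager.storageSystem
    storage.UpdateSoftwareInternetScsiEnabled(True)        # enable software iSCSI

    # Locate the software iSCSI HBA (e.g. vmhba64) that is now present.
    hba = next(a for a in host.config.storageDevice.hostBusAdapter
               if isinstance(a, vim.host.InternetScsiHba))

    # Bind the VMkernel port that carries the RP4VM Data (iSCSI) network.
    host.configManager.iscsiManager.BindVnic(iScsiHbaName=hba.device,
                                             vnicDevice="vmk2")
    storage.RescanAllHba()                                 # pick up the new paths
finally:
    Disconnect(si)
```

If you use jumbo frames on the Data network, also make sure the vmk, vSwitch, and physical switch MTUs match the MTU you later set in the Deployer's Advanced Settings.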

1. Now we start to deploy the vRPA OVA: right-click your datacenter in vCenter and select Deploy OVF Template.


2. Select the OVF for the vRPA from your local disk, then click Next.


3. Review the license agreement, click Accept and then click Next.


4. In the Name field, type the vRPA node name and select the datacenter where the vRPA will be deployed. Click Next.


5. In this installation, we will deploy the smallest profile: 2x CPU / 4GB RAM. In a production environment, the size of the vRPA will depend on the required throughput and IOPS. Click Next.

The vRPA OVA comes with four configuration profiles:
• 2x CPU, 4GB RAM
• 4x CPU, 4GB RAM
• 4x CPU, 8GB RAM
• 8x CPU, 8GB RAM


6. The vRPA VM can be deployed on any shared datastore that is accessible to all ESXi nodes in the vSphere cluster, in order to allow HA and vMotion of the vRPA among the ESXi nodes.
It can be deployed using Thick (eager or lazy zeroed) or Thin provisioning. In the Select virtual disk format list box, select Thin Provision. Select a datastore from the list, then click Next.


7. In this screen, select the vSwitch or dvSwitch that you want the management interface of the vRPA to be mapped to. You can also specify whether you will use IPv4 or IPv6. Click Next.


8. In this screen, you will provide the vRPA management IP address. This IP address remains assigned until the deployment is completed, and it is the address you will use to deploy the RecoverPoint for VMs cluster in the next part of the deployment. Click Next.
Specify the following:
• In the IP Address field, type 192.168.1.101
• In the Subnet Mask field, type 255.255.255.0
• In the Gateway field, type 192.168.1.250


9. Review and confirm the selections you have made, select Power on after deployment and then click Finish to deploy the vRPA.


10. The deployment process will take a few minutes to complete. When it is finished, the vRPA VM will appear in the vCenter inventory. This completes the deployment of the first vRPA.

11. Repeat the above procedure to deploy the 2nd vRPA.
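If you would rather script the second (and any further) vRPA deployment instead of repeating the wizard, VMware's ovftool can deploy the same OVA from the command line. A hypothetical sketch wrapped in Python (the datastore, port group, deployment-option key, and OVF property names are assumptions; probe the OVA with `ovftool vRPA.ova` to see the real names before using this):

```python
# Hypothetical sketch: deploy another vRPA from the command line with VMware ovftool.
# The datastore, port group, deployment-option key, and --prop names are assumptions;
# probe the OVA first (run: ovftool vRPA.ova) to see the real option and property names.
import subprocess

def deploy_vrpa(name: str, ip: str) -> None:
    cmd = [
        "ovftool",
        "--acceptAllEulas",
        "--powerOn",
        "--diskMode=thin",
        f"--name={name}",
        "--datastore=SharedDatastore01",       # placeholder datastore
        "--network=Management",                # placeholder port group
        "--deploymentOption=2CPU-4GB",         # assumed profile key
        f"--prop:ip={ip}",                     # assumed OVF property names
        "--prop:netmask=255.255.255.0",
        "--prop:gateway=192.168.1.250",
        "vRPA.ova",                            # downloaded vRPA OVA
        "vi://administrator%40vsphere.local@vcenter.example.local/DC1/host/Cluster1",
    ]
    subprocess.run(cmd, check=True)

deploy_vrpa("Cluster1_vRPA2", "192.168.1.102")
```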

12. After the deployments finish, we can start to configure the RecoverPoint for VMs cluster. In the next post we will learn how to configure it with the Deployer tool.
