Data Migration by EMC PowerPath Migration Enabler (PPME)

EMC PowerPath Migration Enabler (PPME) is a migration tool that enables non-disruptive or minimally disruptive data migration between storage systems or between logical units within a single storage system. Migration Enabler resides on the host and allows applications continued access to their data throughout the migration process. Migration Enabler works independently of PowerPath multipathing; however, PowerPath must be installed.

PowerPath Migration Enabler integrates with other technologies to minimize or
eliminate application downtime while migrating data.

I will show you how to perform a data migration using PPME. This demonstration environment includes one SQL 2008 server, two Brocade 300B SAN switches, one source array (CLARiiON CX4-120) and one target array (VNX5200). One source volume (150 GB) is mounted on this SQL server; the physical system diagram is shown below.

fsm_diagram

SQL Server – Microsoft Windows 2008 R2 SP1 + SQL Server 2008 R2 SP2, with EMC PowerPath 6.0 SP2 installed
Source Array – EMC CLARiiON CX4-120c (FLARE 30)
Target Array – EMC VNX5200 (Block, VNX OE 32)
Brocade 300B (FOS 6.2) x 2

Now we can start to set up the data migration. PowerPath Migration Enabler is a component of PowerPath; you can see it when you install EMC PowerPath. Both PowerPath and PowerPath Migration Enabler require a license to be enabled.

03

NOTE: Make sure that EMC PowerPath Capabilities displays All.

18

The following is the migration procedure using PowerPath Migration Enabler:
1. Assign one target LUN (of equal or larger capacity than the source LUN) to the SQL server.
2. Note the names of your source and target devices.
3. Set up the host copy session with PowerPath Migration Enabler.
4. Start the host copy session.
5. After the host copy session has completed successfully, swap the source and target LUNs.
6. Commit the LUN swap.
7. Clean up the host copy session.
8. Remove the source LUN from the SQL server.

Step 1

After assigning the new LUN to the SQL server, you need to bring the disk online in Disk Management. Disk 1 is the source LUN and Disk 2 is the target LUN.

04

Step 2

Execute the PowerPath command “powermt display dev=all”; you can see the pseudo name of each device in the output. harddisk1 is the source LUN and harddisk2 is the target LUN.
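If you want to confirm a single device rather than the whole list, you can also query it by its pseudo name (harddisk1 here is simply the name reported in this lab; substitute your own):

powermt display dev=harddisk1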

05-1

Step 3

Execute the PPME command to set up the host copy session: “powermig setup -src <source_pseudo> -tgt <target_pseudo> -techType hostcopy”

NOTE: In this example, 3 is the migration session ID (handle).
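Using the pseudo names noted in Step 2, the setup command in this lab looks like the following sketch (the handle returned on your host may differ from 3):

powermig setup -src harddisk1 -tgt harddisk2 -techType hostcopy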

06

Step 4

Execute the PPME command to start the session: “powermig sync -handle <id>”. To check the migration state, use “powermig query -handle <id>” or “powermig info -handle <id>”. During the migration, the SQL service keeps running.

Migration speed is controlled by the throttle value associated with the session. The throttle ranges from 0 to 9, where 0 is the fastest and 9 is the slowest.

16

You can change the throttle value with “powermig throttle -handle <id> -throttlevalue 1”.
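Putting Step 4 together for this lab's session handle 3 (the handle and throttle value are illustrative):

powermig sync -handle 3
powermig query -handle 3
powermig throttle -handle 3 -throttlevalue 1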

08

Step 5

When the migration has completed successfully, you can see the state is sourceSelected.

09-1

Now you can swap the source and target LUNs. To verify that the SQL service is not disrupted, I run a SQL query on the SQL server during the LUN swap.

powermig selectTarget -handle <id>

10-1

After swapping the source and target LUNs, you can see the state is targetSelected, and the SQL service is still running without disruption.

14

Step 6/7

Next, check whether performance on the target LUN is acceptable. If there is no problem, you can commit the migration session: powermig commit -handle <id>

NOTE: You cannot fall back after committing the migration session.

Then clean up the migration session: powermig cleanup -handle <id>
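For the session in this lab (handle 3), the final commands are simply the following; I would run one more query first to confirm the state is targetSelected before committing:

powermig query -handle 3
powermig commit -handle 3
powermig cleanup -handle 3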

12

At this point, you can execute the PowerPath command “powermt display dev=all” again; harddisk1 has changed to be the target LUN and harddisk2 has changed to be the source LUN.

13-1

04

Now Disk 1 has changed to be the target LUN.

Step 8

Finally, you can remove the source LUN from the SQL server; the data migration is complete.

If you have questions about PowerPath Migration Enabler administration, you can find more information on support.emc.com.

Optional Information

PowerPath Migration Enabler is also supported when the target array is EMC Unity.

17

REFERENCES
The following documents can be found on EMC Online Support:
EMC Unity: Introduction to the EMC Unity Platform
EMC Unity: Replication Technologies
EMC Unity: Compression
Migrating to EMC Unity with SAN Copy: A “How-To” Guide

Summary

When migrating servers from one EMC storage system to another, there are basically two options: array-based features such as SAN Copy, MirrorView/S, or MirrorView/A, and host-based migration with PowerPath Migration Enabler. If your system can only afford limited service downtime for the data migration, PowerPath Migration Enabler is a good option; it is also supported with Microsoft Cluster Service.


Cisco HyperFlex Systems Dashboard

When you log in to the HyperFlex Dashboard as administrator, the dashboard shows the current status of the cluster in UCS Manager.

01

Click Performance in the side menu to show the performance display for the servers in the cluster.

03

Click Virtual Machines in the side menu to show the status of each virtual machine running in the cluster.

06

08

01

Image Builder in VMware vSphere 6.5

Prior to vSphere 6.5, the Auto Deploy and Image Builder services could be driven only through VMware PowerCLI. Now you can build a custom image within the vSphere Web Client. In the following demonstration, I add custom drivers to, and remove drivers from, the original VMware ESXi 6.5 image, and finally create a new custom image and export it as an ISO.
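For reference, roughly the same add/remove/export workflow can still be scripted with the PowerCLI Image Builder cmdlets. The depot paths, profile names, and VIB names below are illustrative placeholders rather than the exact values used in this demonstration:

# Load the original ESXi 6.5 offline bundle and the driver bundle
Add-EsxSoftwareDepot C:\depots\VMware-ESXi-6.5.0-4564106-depot.zip
Add-EsxSoftwareDepot C:\depots\emulex-driver-offline-bundle.zip
# List the profiles in the depots, then clone the standard one
Get-EsxImageProfile
New-EsxImageProfile -CloneProfile "ESXi-6.5.0-4564106-standard" -Name "ESXi-6.5.0-custom" -Vendor "Lab"
# Add the new driver VIB and remove any VIB you want to exclude
Add-EsxSoftwarePackage -ImageProfile "ESXi-6.5.0-custom" -SoftwarePackage lpfc
Remove-EsxSoftwarePackage -ImageProfile "ESXi-6.5.0-custom" -SoftwarePackage <vib_to_exclude>
# Export the custom profile as a bootable ISO
Export-EsxImageProfile -ImageProfile "ESXi-6.5.0-custom" -ExportToIso -FilePath C:\depots\ESXi-6.5.0-custom.iso

The rest of this post walks through the same steps in the Web Client.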

First, you need to start the Image Builder service manually in the vSphere Web Client: go to Administration > System Configuration > Services, select the ImageBuilder Service, click the Actions menu, and select Start. The Auto Deploy GUI becomes visible in the vSphere Web Client only when both the Image Builder and Auto Deploy services are running, so make sure the ImageBuilder Service is up and running.

01

03

Then click the Auto Deploy icon on the welcome page and go to the Software Depots tab first.

04

Add a software depot first.

05

Then import VMware-ESXi-6.5.0-4564106-depot.zip (the original vSphere 6.5 image from the VMware website) into it.

06

07

Next, import the new Emulex driver into it; then you can start to create a new image profile.

10-2

Enter a name and description, then click Next.

10

Choose the lpfc Emulex driver from the software depot menu and select it.

17

Select the VMware ESXi 6.5 zip you have uploaded and deselect the drivers you want to exclude.

11

Then click the Finish button.

12

The new ESXi 6.5 image is created.

13

You can generate the image in ISO or ZIP format.

14

Click the Generate image button to generate the new image.

15

Once completed, you can download this image.

16

vCenter Server Appliance File-Based Restore

After you back up the configuration of the vCenter Server Appliance, you can restore the backup file from the vCenter Server Appliance 6.5 installer.

vc11

vc12

vc13

Select the restore location, then Next.

vc15

vc16

Enter the target ESXi host or vCenter Server, then Next.

vc18

Enter the VCSA name and root password for the restored appliance, then Next.

vc21

Select the deployment size, then Next.

vc22

Select the target datastore, then Next.

vc23

Configure the VCSA network settings, then Next.

vc24

vc25

Click the Finish button; it starts to deploy a new VCSA on the target host.

vc27

After the deployment finishes, click Continue.

vc28

Go to Stage 2, which restores the configuration into the new VCSA. Then Next.

vc29

Please shut down the original (backed-up) appliance before clicking the Finish button.

vc30

Then Ok.

vc31

Then it starts to restore the backup.

vc38

vc39

After the restore completes, you can access this VCSA through the vSphere Web Client.

vc40

vCenter Server Appliance File-Based Backup

VMware vCenter 6.5 has a new feature: native file-based backup and restore. It supports backup of the vCenter Server Appliance or Platform Services Controller (PSC). You can start the backup from the VMware vSphere Appliance Management Interface (VAMI).

00

We can select different protocols for the backup location, e.g. FTP(S), HTTP(S), or SCP.
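As an illustration only (the server address, folder, and account below are made up, not from this environment), an SCP backup target would be entered roughly as protocol SCP, location 192.168.10.50/backups/vcsa01, plus the user name and password of a backup account on that server; the exact field layout may differ slightly between 6.5 releases, so follow the on-screen hints.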

vc3

You can select which parts need to be backed up. The backup includes the vCenter Server Appliance inventory and configuration, and there is an option to also back up historical data (stats, events, alarms, and tasks).

vc4

vc6

vc8

vc10

Cisco HyperFlex Systems Architecture

HyperFlex Portfolio

03

Dynamic Data Distribution

  • Systems built on conventional file systems write locally, then replicate, creating performance hotspots
  • HX Data Platform stripes data across all nodes simultaneously, leveraging cache across all SSDs for fast writes
  • Balanced space utilization: no data migration required following a VM migration

01

Non-Disruptive Operations

  • Stripe blocks of a file across servers
  • Replicate one or two additional copies to other servers
  • Handle entire server or disk failures
  • Restore back to original number of copies
  • Rebalance VMs and data post replacement
  • Rolling software upgrades

02

High Resiliency, Fast Recovery

  • The platform can sustain a simultaneous two-node failure without data loss; the replication factor is tunable
  • If a node fails, the evacuated VMs re-attach with no data movement required
  • Replacement node automatically configured via UCS Service Profile
  • HX Data Platform automatically redistributes data to the replacement node

04

Cisco HyperFlex Monitoring and Reporting

05

Upgrading the Operating Environment (OE) Software on EMC UnityVSA

The Unity storage system can have the following types of software updates:

  • Operating Environment (OE) software (also called Unisphere)
  • Disk firmware
  • Language packs

In this post we will learn how to upgrade the Operating Environment (OE) software on EMC UnityVSA. First, download the OE image “UnityVSA-4.1.0.9058043.tgz.bin.gpg” from http://support.emc.com. Then log in to EMC Unisphere as administrator and click System Settings.

01

Then choose Software Updates under Software and Licenses and click Start Upgrade.

NOTE: The running software version is 4.0.1.8404134.
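If you prefer the command line, the running version can also be checked with the Unity CLI from a management host; the management IP and credentials below are placeholders, and the exact uemcli object path can vary between releases, so treat this as a sketch:

uemcli -d <mgmt_IP> -u admin -p <password> /sys/soft/ver show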

02

Perform the health checks before the Operating Environment software upgrade and make sure no errors exist.

05

Then Next.

06

Then Browse to select the Operating Environment (OE) Software.

07

Then Next.

08

Then Next.

Selecting this option will automatically reboot your storage processors during the upgrade and finalize the new software.

Unselecting this option will pause the upgrade after all non-disruptive tasks have completed; user input is then required to manually reboot the storage processors and finish the upgrade.

10

Then you can view the summary; click the Finish button to start the Operating Environment (OE) software upgrade (4.0.1 to 4.1.0).

11

Then you can see the upgrade progress and required time.

12

15

NOTE: The connection to the storage system has been lost. This is expected during software upgrade. The connection will be restored automatically when the storage system becomes available. Do not reboot the storage processors.

16

When the Operating Environment (OE) software upgrade has completed successfully, you can click the Reload Unisphere button to reload EMC Unisphere.

18

After logging in to EMC Unisphere again, you can see that the software version is 4.1.0.9058043 in the System View.

19

20

Comparison of HP 3PAR Online Import and Dell/EMC SANCopy

Each storage vendor in the market has its own technology features for data migration, for example Dell/EMC VPLEX encapsulation, MirrorView/S/A, SANCopy, HP 3PAR Online Import, and 3PAR Peer Motion. Today we will discuss the differences between Dell/EMC SANCopy and HP 3PAR Online Import and list their advantages and disadvantages. The following diagrams show the detailed architecture for data migration with EMC SANCopy and HPE 3PAR Online Import.

The architecture diagram for the host migration with EMC SANCopy:
Source Array – HP 3PAR StoreServ 7200 (OS 3.2.2)
Target Array – EMC VNX5200 (VNX OE 33)
SAN Switch – 2 x Brocade DS-300B
Migration Host – Microsoft Windows 2008 R2
Migration Method – EMC SANCopy (Push Mode)

fsm_diagram

Execute the data migration with the SAN Copy Create Session Wizard in EMC Unisphere.

01

The architecture diagram for the host migration with HP 3PAR Online Import:
Source Array – EMC VNX5200 (VNX OE 33)
Target Array – HPE 3PAR StoreServ 7200 (OS 3.2.2)
Migration Host – Microsoft Windows 2008 R2
Migration Management Host – HP 3PAR Online Import Utility 1.5 & EMC SMI-S Provider 4.6.2
SAN Switch – 2 x Brocade DS-300B
Migration Method – HP 3PAR Online Import
fsm_diagram2

Execute the data migration with the HP 3PAR Online Import Utility CLI commands.

addsource -type CX -mgmtip x.x.x.x -user <admin> -password <password> -uid <Source Array’s WWN>
adddestination -mgmtip x.x.x.x -user <admin> -password <password>
createmigration -sourceuid <Source Array’s WWN> -srchost <Source host> -destcpg <Target CPG> -destprov thin -migtype MDM -persona “WINDOWS_2008_R2”
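After the migration definition is created, its progress can be checked from the same Online Import Utility session; in the utility versions I have used there is a showmigration command for listing the defined migrations and their state, but confirm the exact command name and options with the utility's built-in help:

showmigration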

10

The table below compares EMC SANCopy and HP 3PAR Online Import:

table

The following are the pros and cons of each migration method.

EMC SANCopy

Pros:

  • It can migrate each source LUN to the target array one by one.
  • Any FC port can be configured as a SANCopy port on each storage controller, and the SANCopy port and host ports can run at the same time.
  • All migration operations can be executed from EMC Unisphere (the VNX management server); no additional migration server installation is required.
  • The SANCopy license is bundled with VNX storage.

Cons:

  • SANCopy does not support incremental mode if the source array is a third-party model.

HP 3PAR Online Import

Pros:

  • The destination HP 3PAR StoreServ storage system must have a valid HP 3PAR Online Import or HP 3PAR Peer Motion license installed; by default a 180-day temporary Peer Motion license is included.

Cons:

  • A migration definition cannot migrate source LUNs to the target array one by one; for example, if the EMC storage group contains three LUNs, all three are migrated when the migration session starts.
  • All migration definitions can only be executed from the HP 3PAR Online Import Utility, which runs on a separate management host dedicated to the data migration.

Microsoft Windows Cluster Migration by using EMC PowerPath Migration Enabler (PPME)

In the previous post we learnt how to migrate data on Microsoft Windows by using PowerPath Migration Enabler (PPME). In this post, I will show you how to perform the data migration on a Microsoft Windows cluster by using PPME. The physical system diagram is shown below.

fsm_diagram

SQL Cluster Node1 – Microsoft Windows 2008 R2 SP1 + SQL Server 2008 R2 SP2, with EMC PowerPath 6.0 SP2 installed
SQL Cluster Node2 – Microsoft Windows 2008 R2 SP1 + SQL Server 2008 R2 SP2, with EMC PowerPath 6.0 SP2 installed
Source Array – EMC CLARiiON CX4-120c (FLARE 30)
Target Array – EMC VNX5200 (Block, VNX OE 32)
Brocade 300B (FOS 6.2) x 2

The following is the migration procedure using PowerPath Migration Enabler:
1. Assign one target LUN (of equal or larger capacity than the source LUN) shared to each SQL cluster node.
2. Note the names of your source and target devices.
3. List the cluster resource groups and configure one of your cluster resource groups for migration.
4. Set up the cluster migration session with PowerPath Migration Enabler.
5. Start the cluster migration session.
6. After the cluster migration has completed successfully, commit the LUN swap.
7. Clean up the host copy session.
8. Remove the source LUN from the SQL cluster nodes.

You can see that the cluster disk is volume S in Failover Cluster Manager.

01-2

Step 1

Assign one target LUN shared to each SQL cluster node; Disk 2 is the target LUN.

06-1

Step 2

Execute the PowerPath command “powermt display dev=all”; you can see the pseudo name of each device in the output. harddisk1 is the source LUN and harddisk2 is the target LUN.

05-2

Step 3

Execute the PowerPath command “powermigcl display -all” to list the cluster resource groups.

NOTE: For help, you can execute the command “powermig help”.

02

Configure one of the SQL cluster resource groups for data migration, then list the cluster resource groups again. You can see that the status of SQL Server (MMSQLC) is Configured.
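In my environment this configuration step was done with the powermigcl config command; the group name is the one shown above, but treat the exact option names as an assumption and confirm them with “powermigcl help”:

powermigcl config -group "SQL Server (MMSQLC)"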

03

Now you will notice that a new cluster resource named PPME <Disk_resource_name> has been added to the configured cluster resource group.

04-2

And each Cluster Disk is dependent on the PPME resource.

05

Step 4

Now set up migration handle c5 for the cluster disks in the cluster resource group SQL Server. Execute the PPME command to set up the migration session: “powermig setup -src <source_pseudo> -tgt <target_pseudo> -techType hostcopy -cluster -no”
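With the pseudo names noted in Step 2, the setup command in this lab looks like the following sketch (c5 is the handle returned in this environment; yours will differ):

powermig setup -src harddisk1 -tgt harddisk2 -techType hostcopy -cluster -no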

07

Step 5

Start migration session c5 with “powermig sync -handle c5”.

08

Step 6

When the migration has completed successfully, you can see the state is sourceSelected.

10

NOTE: Since we are migrating physical disks in an MSCS environment, there is no need to use the “selectTarget” option with powermigcl migrations. We can commit the migration immediately once the sync is 100% complete.

Now you can commit the migration session: powermig commit -handle c5

13

15

NOTE: You cannot fall back after committing the migration session.

The SQL service is still running without disruption while the migration is committed.

12

At this point, you can execute the PowerPath command “powermt display dev=all” again; harddisk1 has changed to be the target LUN and harddisk2 has changed to be the source LUN.

13-2

Step 7

Then you can clean up the migration session: powermig cleanup -handle <id>

17

After cleaning up the session, the PPME cluster resource is also removed from the cluster resource group.

19-2

Optional – SQL Cluster failover test

You can move the SQL service to Node 2; the SQL service fails over to Node 2 successfully.

20-2

Step 8

Finally, you can remove the source LUN from the SQL cluster nodes; the data migration is complete.

If you have questions about PowerPath Migration Enabler administration, you can find more information at support.emc.com.
