...
The RecoverPoint for Virtual Machines repository or journal volumes must be migrated to a different location, or the current repository volume is corrupted or deleted (for example, through accidental deletion of the repository or a storage refresh).
If there is more than one repository missing in the same system, run the following procedure one repository at a time.

How to determine which mechanism the RPAs use in versions 5.2.x and later:
Log in as user admin, then go to [2] Setup > [8] Advanced options > [3] Appliance config params > [1] View appliance config param > Enter tweak_param name: t_useJiraf
If the tweak is false, you are using JUKE. If the tweak is true, JIRAF is in use.
Another method is to look at the journal files (and the repository, if they exist):
.RPVS_Lun0000x.vmdk indicates that JUKE is being used.
IOFilter_KVOL_00001.vmdk/IOFilter_JVOL_0000x.vmdk indicates that JAM or JIRAF is used.

Continue to the relevant section using the following index:
Repository migration with Juke
Journal migration with Juke
Repository recreation with Juke
Repository migration with Jiraf
Journal migration with Jiraf
Repository recreation with Jiraf
VMDK-specific steps for regular datastores
VMDK-specific steps for vSAN datastores

NOTE: The following section is for RecoverPoint for Virtual Machines 5.1.x and earlier, and for 5.2.x and later running with the JUKE mechanism.

Repository migration with Juke
The repository volume is named RPVS_Lun00001.vmdk; all the rest are journals. If you want to migrate all files, be sure to read both the Repository migration and Journal migration sections before starting the activity. The repository is in a folder that has the Cluster Name or Cluster UID in it. This can be retrieved by viewing the cluster settings using the Installation menu. Alternatively, run get_internal_cluster_name.
1. Power off all vRPAs in the cluster. SSH log in to every RPA as user boxmgmt and perform the following:
   A. Go to [5] Shutdown the vRPA by going to Shutdown or Reboot operations > [2] Shutdown RPA.
   B. Power off the VM using vCenter.
2. Enable SSH on one ESXi host in the cluster where the vRPAs reside, log in to the ESXi as user root, and run steps 3 to 9:
3. Rename RPVS_Lun00001.vmdk to RPVS_Lun00001.vmdk.ignore (use the mv command). Do the same for the -flat.vmdk file. (See the example command sequence further below.)
4. Wait 5 minutes.
5. Manually create the same directory structure on the new datastore, RPvStorage\ (RPvStorage\68ab5cec68d47a77, for example). If the Cluster UID is 15 characters long, add a 0 before the cluster UID. Example: RPvStorage\08ab5cec68d47a77
6. Manually move RPVS_Lun00001.vmdk.ignore (and the -flat file) to the new datastore in the correct directory structure.
7. Rename RPVS_Lun00001.vmdk.ignore back to RPVS_Lun00001.vmdk. Do the same for the -flat.vmdk file.
8. Wait 5 minutes.
9. Power on all vRPAs.
10. Verify that the cluster is up and has access to the repository by running the admin CLI command get_rpa_states.
11. If you have datastores that are registered for use by RecoverPoint for Virtual Machines, remove the ones from the old array and add the new ones.

Journal Migration with Juke:
Use the same procedure used for repository migration to migrate the journals, using the journal vmdk files instead of the repository. You can see the journal names from the web client plug-in, under Protection > Consistency Groups. Select the copy, and the journal name shows on the right.
NOTE: The RecoverPoint plug-in does not reflect the new datastore of migrated journals under the Protection Policy.

Repository recreation with Juke:
Re-creating the repository when the repository is corrupted or has been deleted. The repository must be created in a folder that has the Cluster Name or Cluster UID in it. This can be retrieved by viewing the cluster settings using the Installation menu.
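For reference, the following is a minimal command sketch that consolidates steps 3 to 9 of the Repository migration with Juke procedure above, run from the ESXi shell. The datastore names DS_OLD and DS_NEW and the cluster UID 68ab5cec68d47a77 are placeholders only; substitute the values from your environment.

# run on the ESXi host as root; DS_OLD, DS_NEW, and the cluster UID are placeholders
cd /vmfs/volumes/DS_OLD/RPvStorage/68ab5cec68d47a77
mv RPVS_Lun00001.vmdk RPVS_Lun00001.vmdk.ignore
mv RPVS_Lun00001-flat.vmdk RPVS_Lun00001-flat.vmdk.ignore
# wait 5 minutes, then create the same folder structure on the new datastore
# (a 15-character cluster UID gets a leading 0, for example 08ab5cec68d47a77)
mkdir -p /vmfs/volumes/DS_NEW/RPvStorage/68ab5cec68d47a77
mv RPVS_Lun00001.vmdk.ignore /vmfs/volumes/DS_NEW/RPvStorage/68ab5cec68d47a77/
mv RPVS_Lun00001-flat.vmdk.ignore /vmfs/volumes/DS_NEW/RPvStorage/68ab5cec68d47a77/
cd /vmfs/volumes/DS_NEW/RPvStorage/68ab5cec68d47a77
mv RPVS_Lun00001.vmdk.ignore RPVS_Lun00001.vmdk
mv RPVS_Lun00001-flat.vmdk.ignore RPVS_Lun00001-flat.vmdk
# wait 5 minutes before powering the vRPAs back on

As noted above, the cluster UID used in these folder paths can be read from the cluster settings in the Installation menu.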
Alternatively, by running get_internal_cluster_name): Log in to RPA1 as admin user and run command: start_maintenance_mode Select: 9) migrate_repositoryOutput should read "Switched to maintenance mode successfully." Suspend all CGs with at least one copy on the affected cluster using the suspend_group command (in 5.1.x and earlier, this requires the SE user. In 5.2, use the admin CLI user, may require running enable_advanced_support_commands first). PuTTY or SSH into ALL RPAs IN ALL CLUSTERS as admin user and detach ALL RPAs IN ALL CLUSTERS. From the main menu as boxmgmt user, choose: [4] Cluster operations [1] Detach from cluster Do you want to detach the RPA from the cluster? NOTE: RPA is rebooted when reattached. (y or n)? Y On the problematic cluster, export binary data on RPA1. From the main menu as boxmgmt user, choose the option: [2] Setup[8] Advanced options[4] Run script Paste the following signed script:For RPVM 5.2.2 or later: ZjcxN2NmYWIwYzk5MDUwNjFlNjQ1ZmI5ODM1Y2I1NzUKdW5saW1pdGVkCm5vdF9yZXN0cmljdGVk ClRoZSBpZCBvZiB0aGUgc2NyaXB0IGlzOjM1MTI4ClRoaXMgc2NyaXB0IGV4cG9ydHMgcmVwb3Np dG9yeSB0byBhIEJpbmFyeSBmaWxlLCBhcyBwYXJ0IG9mIHRoZSByZXBvc2l0b3J5LW1pZ3JhdGlv biBwcm9jZWR1cmUKS2ZpciBXb2xmc29uIGFuZCBJZGFuIEtlbnRvcgojISAvYmluL2Jhc2ggLWUK IyBCdWcgIzM1MTI4IC0gc2NyaXB0ICMxIG9mIDIKIyB0aGlzIHNjcmlwdCBleHBvcnRzIHJlcG9z aXRvcnkgdG8gYSBCaW5hcnkgZmlsZSwgYXMgcGFydCBvZiB0aGUgcmVwb3NpdG9yeS1taWdyYXRp b24gcHJvY2VkdXJlLgojIHNob3VsZCBiZSBleGVjdXRlZCBhZnRlciBzd2l0Y2hpbmcgUlBBcyB0 byB1cGdyYWRlIG1vZGUgUlBBLCBzdXNwZW5kaW5nIENHcyBhbmQgZGV0YWNoaW5nIGFsbCBSUEFz IGZyb20gY2x1c3Rlci4KIyBub3RlOiB1c2luZyAiLWUiIHRvIGV4aXQgb24gYW55IGNvbW1hbmQg ZmFpbHVyZSAKT1VURElSPS9ob21lL2NvbGxlY3Rvci9pbnRlcm5hbF9jb21tYW5kX291dHB1dApO T1c9JChkYXRlICsiJVktJW0tJWQtJVQiKQojIG1ha2Ugb3V0cHV0IGRpciBpZiBkb2VzIG5vdCBh bHJlYWR5IGV4aXN0Cm1rZGlyIC1wICRPVVRESVIKY2QgL2hvbWUva29zCiMgZGVsZXRlIHByZXZp b3VzIGV4cG9ydGVkIGJpbmFyaWVzCnJtIC1mIGt2b2xfc3RlcDEuYmluCiMgZm9yIDUuMi4yLCBj b3B5IGNvbnRyb2xfdXRpbHMgZnJvbSAvdXNyL3NiaW4KbWtkaXIgLXAgL2hvbWUva29zL2thc2h5 YS9hcmNoaXZlL2Jpbi9yZWxlYXNlLwpjcCAvdXNyL3NiaW4vY29udHJvbF91dGlscyAvaG9tZS9r b3Mva2FzaHlhL2FyY2hpdmUvYmluL3JlbGVhc2UvY29udHJvbF91dGlscwojIGZvciBsYXRlciBk ZWJ1Z2dpbmcgd2Ugc2F2ZSBjdXJyZW50IHJlcG9zaXRvcnkgaW4gdHh0IGZvcm1hdCBpbiBvdXRk aXIKa2FzaHlhL2FyY2hpdmUvYmluL3JlbGVhc2UvY29udHJvbF91dGlscyAtZHVtcE5pY2VBbGwg PiAkT1VURElSL2t2b2xfc3RlcDFfJE5PVy50eHQKIyBkdW1wIGJpbmFyeSAodG8gL2hvbWUva29z KS4gY29udHJvbF91dGlsIGxvZ3MgYXJlIHdyaXR0ZW4gdG8gaW5zdGFsbGF0aW9uX3Byb2Nlc3Nl c19sb2dzCmthc2h5YS9hcmNoaXZlL2Jpbi9yZWxlYXNlL2NvbnRyb2xfdXRpbHMgLWR1bXBCaW5B bGwgZXhwb3J0VGFyZ2V0PWt2b2xfc3RlcDEuYmluIAojIHdlIHdpbGwgb25seSBzYXZlIG9uZSBi aW5hcnkgYXQgYSB0aW1lLCBidXQgZm9yIGxhdGVyIGRlYnVnZ2luZywgYWxzbyBzYXZlIGEgY29w eSB3aXRoIHRpbWVzdGFtcCBpbiBvdXRkaXIKIyAgIGlmIHRoaXMgY29tbWFuZCBmYWlscyBpdCB1 c3VhbGx5IG1lYW5zIHRoYXQgYmluYXJ5IGZpbGUgd2FzIG5vdCBjcmVhdGVkIGJ5IHRoZSBwcmV2 aW91cyBjb21tYW5kLCBlLmcuIGt2b2wgYWNjZXNzIHByb2JsZW0uICAKY3AgLWYga3ZvbF9zdGVw MS5iaW4gJE9VVERJUi9rdm9sX3N0ZXAxXyROT1cuYmluCmVjaG8gInNjcmlwdCAjMSBmaW5pc2hl ZCBzdWNjZXNzZnVsbHkhIgo= # For releases earlier than 5.2.2: NWMxMjY1YzgyNWI0MDZmYzZjYzQ5YTRkOWIzMjRjMjAKdW5saW1pdGVkCm5vdF9yZXN0cmljdGVk ClRoZSBpZCBvZiB0aGUgc2NyaXB0IGlzOjM1MTI4CnRoaXMgc2NyaXB0IGV4cG9ydHMgcmVwb3Np dG9yeSB0byBhIEJpbmFyeSBmaWxlLCBhcyBwYXJ0IG9mIHRoZSByZXBvc2l0b3J5LW1pZ3JhdGlv biBwcm9jZWR1cmUKS2ZpciBXb2xmc29uCiMhIC9iaW4vYmFzaCAtZQojIEJ1ZyAjMzUxMjggLSBz Y3JpcHQgIzEgb2YgMgojIHRoaXMgc2NyaXB0IGV4cG9ydHMgcmVwb3NpdG9yeSB0byBhIEJpbmFy eSBmaWxlLCBhcyBwYXJ0IG9mIHRoZSByZXBvc2l0b3J5LW1pZ3JhdGlvbiBwcm9jZWR1cmUuCiMg 
c2hvdWxkIGJlIGV4ZWN1dGVkIGFmdGVyIHN3aXRjaGluZyBSUEFzIHRvIHVwZ3JhZGUgbW9kZSBS UEEsIHN1c3BlbmRpbmcgQ0dzIGFuZCBkZXRhY2hpbmcgYWxsIFJQQXMgZnJvbSBjbHVzdGVyLgoj IG5vdGU6IHVzaW5nICItZSIgdG8gZXhpdCBvbiBhbnkgY29tbWFuZCBmYWlsdXJlIApPVVRESVI9 L2hvbWUvY29sbGVjdG9yL2ludGVybmFsX2NvbW1hbmRfb3V0cHV0Ck5PVz0kKGRhdGUgKyIlWS0l bS0lZC0lVCIpCiMgbWFrZSBvdXRwdXQgZGlyIGlmIGRvZXMgbm90IGFscmVhZHkgZXhpc3QKbWtk aXIgLXAgJE9VVERJUgpjZCAvaG9tZS9rb3MKIyBkZWxldGUgcHJldmlvdXMgZXhwb3J0ZWQgYmlu YXJpZXMKXHJtIC1mIGt2b2xfc3RlcDEuYmluCiMgZm9yIGxhdGVyIGRlYnVnZ2luZyB3ZSBzYXZl IGN1cnJlbnQgcmVwb3NpdG9yeSBpbiB0eHQgZm9ybWF0IGluIG91dGRpcgprYXNoeWEvYXJjaGl2 ZS9iaW4vcmVsZWFzZS9jb250cm9sX3V0aWxzIC1kdW1wTmljZUFsbCA+ICRPVVRESVIva3ZvbF9z dGVwMV8kTk9XLnR4dAojIGR1bXAgYmluYXJ5ICh0byAvaG9tZS9rb3MpLiBsb2dzIG9mIGNvbnRy b2xfdXRpbCBhcmUgd3JpdHRlbiB0byBpbnN0YWxsYXRpb25fcHJvY2Vzc2VzX2xvZ3MKa2FzaHlh L2FyY2hpdmUvYmluL3JlbGVhc2UvY29udHJvbF91dGlscyAtZHVtcEJpbkFsbCBleHBvcnRUYXJn ZXQ9a3ZvbF9zdGVwMS5iaW4gCiMgd2Ugd2lsbCBvbmx5IHNhdmUgb25lIGJpbmFyeSBhdCBhIHRp bWUsIGJ1dCBmb3IgbGF0ZXIgZGVidWdnaW5nLCBhbHNvIHNhdmUgYSBjb3B5IHdpdGggdGltZXN0 YW1wIGluIG91dGRpcgojICAgaWYgdGhpcyBjb21tYW5kIGZhaWxzIGl0IHVzdWFsbHkgbWVhbnMg dGhhdCBiaW5hcnkgZmlsZSB3YXMgbm90IGNyZWF0ZWQgYnkgdGhlIHByZXZpb3VzIGNvbW1hbmQs IGUuZy4ga3ZvbCBhY2Nlc3MgcHJvYmxlbS4gIApcY3AgLWYga3ZvbF9zdGVwMS5iaW4gJE9VVERJ Ui9rdm9sX3N0ZXAxXyROT1cuYmluCmVjaG8gInNjcmlwdCAjMSBmaW5pc2hlZCBzdWNjZXNzZnVs bHkhIgo= # Enable SSH on one ESXi host in the cluster where the vRPAs reside, log in as user root to the ESXi, and create a repository by running the following command: vmkfstools --createvirtualdisk 6G --diskformat eagerzeroedthick /vmfs/volumes//RPvStorage//RPVS_Lun00001.vmdk NOTE: Ensure that RPVS_Lun00001.vmdk does not exist in any datastore under the RPvStoragte/ folder before running the command. If it is, you can skip this step or delete the file and re-create it using the command above. Format repository on RPA1. From the main menu as boxmgmt user, choose the option: [2] Setup[2] Configure Repository volume[1] Format a volume as a repository volumeThe following security-related questions appear:Select security level for local users [1 Basic or 2 High].Change the default password for predefined user [y or n].High security level enforces password complexity and enforces password change for all users.Select the new volume that you have created beforehand. NOTE: You MUST choose a different volume from the current one. Import binary data to the repository volume on RPA1. 
From the main menu as boxmgmt user, choose the option: [2] Setup[8] Advanced options[4] Run scriptPaste the following signed script: ZDQ5NjBjOGZiOWViNzZjYzM4Yjc0ZjhmMjU2N2Q3M2EKdW5saW1pdGVkCm5vdF9yZXN0cmljdGVk ClRoZSBpZCBvZiB0aGUgc2NyaXB0IGlzOjM1MTI4CnRoaXMgc2NyaXB0IGltcG9ydHMgcmVwb3Np dG9yeSBmcm9tIGEgQmluYXJ5IGZpbGUsIGFzIHBhcnQgb2YgdGhlIHJlcG9zaXRvcnktbWlncmF0 aW9uIHByb2NlZHVyZQpLZmlyIFdvbGZzb24KIyEgL2Jpbi9iYXNoIC1lCiMgQnVnICMzNTEyOCAt IHNjcmlwdCAjMiBvZiAyCiMgdGhpcyBzY3JpcHQgaW1wb3J0cyByZXBvc2l0b3J5IGZyb20gYSBC aW5hcnkgZmlsZSwgYXMgcGFydCBvZiB0aGUgcmVwb3NpdG9yeS1taWdyYXRpb24gcHJvY2VkdXJl LgojIHNob3VsZCBiZSBleGVjdXRlZCBhZnRlciBydW5uaW5nIHNjcmlwdCAjMSBhbmQgZm9ybWF0 dGluZyB0aGUgbmV3IHJlcG9zaXRvcnkgdm9sdW1lCiMgbm90ZSwgdXNpbmcgIi1lIiB0byBleGl0 IG9uIGFueSBjb21tYW5kIGZhaWx1cmUgCk9VVERJUj0vaG9tZS9jb2xsZWN0b3IvaW50ZXJuYWxf Y29tbWFuZF9vdXRwdXQKTk9XPSQoZGF0ZSArIiVZLSVtLSVkLSVUIikKIyBtYWtlIG91dHB1dCBk aXIgaWYgZG9lcyBub3QgYWxyZWFkeSBleGlzdApta2RpciAtcCAkT1VURElSCmNkIC9ob21lL2tv cwojIGZvciBsYXRlciBkZWJ1Z2dpbmcgd2Ugc2F2ZSBjdXJyZW50IHJlcG9zaXRvcnkgaW4gdHh0 IGZvcm1hdCBpbiBvdXRkaXIKa2FzaHlhL2FyY2hpdmUvYmluL3JlbGVhc2UvY29udHJvbF91dGls cyAtZHVtcE5pY2VBbGwgPiAkT1VURElSL2t2b2xfc3RlcDJfJE5PVy50eHQKIyBpbXBvcnQgYmlu YXJ5IChmcm9tIC9ob21lL2tvcykuIGxvZ3Mgb2YgY29udHJvbF91dGlsIGFyZSB3cml0dGVuIHRv IGluc3RhbGxhdGlvbl9wcm9jZXNzZXNfbG9ncwprYXNoeWEvYXJjaGl2ZS9iaW4vcmVsZWFzZS9j b250cm9sX3V0aWxzIGltcG9ydEZyb209a3ZvbF9zdGVwMS5iaW4gCiMgZm9yIGxhdGVyIGRlYnVn Z2luZyB3ZSBzYXZlIGN1cnJlbnQgcmVwb3NpdG9yeSBpbiB0eHQgZm9ybWF0IGluIG91dGRpcgpr YXNoeWEvYXJjaGl2ZS9iaW4vcmVsZWFzZS9jb250cm9sX3V0aWxzIC1kdW1wTmljZUFsbCA+ICRP VVRESVIva3ZvbF9zdGVwM18kTk9XLnR4dAplY2hvICJzY3JpcHQgIzIgZmluaXNoZWQgc3VjY2Vz c2Z1bGx5ISIK # Select repository for all remaining RPAs on the same cluster: [2] Setup[2] Configure repository volume[2] Select an existing repository volume:Select the new repository which was formatted in step 6. Attach RPAs to cluster only on the affected cluster! Do not attach RPAs on other clusters yet. From the main menu as admin user, choose the option: [4] Cluster operations [1] Attach to cluster Enter y to attach RPA. Wait until RPA1 is back online before attaching RPA2.There should not be any conflict. Wait a few minutes for the RPA cluster to come back online. Pause and Unpause a single CG. pause_transfer start_transfer Attach all remaining RPAs on the other clusters to cluster. From the main menu as admin user, choose the option: [4] Cluster operations [1] Attach to cluster Resume all CGs with at least one copy on the affected cluster using the resume_group command (in 5.1.x and earlier, this requires the SE user. In 5.2, use the admin CLI user, may require running enable_advanced_support_commands first.) While logged in as admin user, run CLI Command: finish_maintenance_mode Verify that there is no full sweep occurring (Event ID: 4082) The following section is for RecoverPoint for Virtual Machines 5.2.x and later running with the JAM or JIRAF mechanism. Repository migration with Jiraf Repository Migration (The Repository volume is named IOFilter_KVOL_00001.vmdk. The Journals are named IOFilter_JVOL_0000x.vmdk. If you want to do a migration of all files, ensure to read both the Repository migration and Journal migration sections before starting the activity. 
The repository name has the Cluster Name in it, which can be retrieved by viewing the cluster settings using the Installation menu or by running get_internal_cluster_name.)
1. Power off all vRPAs in the cluster. SSH log in to every RPA as user admin and do the following:
   A. Go to [5] Shutdown the vRPA by going to Shutdown/Reboot operations -> [2] Shutdown RPA.
   B. Power off the VM using vCenter.
2. Enable SSH on one ESXi host in the cluster where the vRPAs reside, log in to the ESXi as user root, and run steps 3 to 6:
3. Rename the two repository VMDK files according to the following example:
   mv 58740f7bc4551ab5_IOFilter_KVOL_00001.vmdk 58740f7bc4551ab5_IOFilter_KVOL_00001.vmdk.ignore
   mv 58740f7bc4551ab5_IOFilter_KVOL_00001-flat.vmdk 58740f7bc4551ab5_IOFilter_KVOL_00001-flat.vmdk.ignore
4. Wait 5 minutes, then manually create the RPvStorage directory on the new datastore if it does not exist already.
5. Manually move and rename the two .vmdk.ignore files to the new datastore under RPvStorage:
   mv /vmfs/volumes/DS_SRC/RPvStorage/58740f7bc4551ab5_IOFilter_KVOL_00001.vmdk.ignore /vmfs/volumes/DS_TGT/RPvStorage/58740f7bc4551ab5_IOFilter_KVOL_00001.vmdk
   mv /vmfs/volumes/DS_SRC/RPvStorage/58740f7bc4551ab5_IOFilter_KVOL_00001-flat.vmdk.ignore /vmfs/volumes/DS_TGT/RPvStorage/58740f7bc4551ab5_IOFilter_KVOL_00001-flat.vmdk
6. Wait 5 minutes.
7. Power on all vRPAs.
8. Verify that the cluster is up and has access to the repository by running the admin CLI command: get_rpa_states
9. If you have datastores that are registered for use by RecoverPoint for Virtual Machines, remove the ones from the old array and add the new ones.

Journal Migration with Jiraf:
Use the same procedure used for repository migration to migrate the journals, using the journal .vmdk files instead of the repository. You can see the journal names from the web client plug-in, under Protection -> Consistency Groups; select the copy, and the journal name shows on the right.
NOTE: The RecoverPoint plug-in does not reflect the new datastore of migrated journals under the Protection Policy.

Repository recreation with Jiraf
Re-creating the repository when the repository is corrupted or has been deleted. (The repository file must be created with the Cluster Name or Cluster UID in its name, which can be retrieved by viewing the cluster settings using the Installation menu or by running get_internal_cluster_name.)
1. Log in to RPA1 as the admin user and run the command: start_maintenance_mode
   Select: 9) migrate_repository
   The output should read "Switched to maintenance mode successfully."
2. Suspend all CGs with at least one copy on the affected cluster using the suspend_group command in the admin CLI. (In some versions, this may require running enable_advanced_support_commands first.)
3. PuTTY or SSH into ALL RPAs IN ALL CLUSTERS as the admin user and detach ALL RPAs IN ALL CLUSTERS. From the main menu as the admin user, choose:
   [4] Cluster operations
   [1] Detach from cluster
   Do you want to detach the RPA from the cluster? NOTE: RPA is rebooted when reattached. (y or n)? Y
4. On the problematic cluster, export binary data on RPA1.
From the main menu as admin user, choose the option: [2] Setup [8] Advanced options [4] Run script Paste the following signed script: For RPVM 5.2.2 or later: ZjcxN2NmYWIwYzk5MDUwNjFlNjQ1ZmI5ODM1Y2I1NzUKdW5saW1pdGVkCm5vdF9yZXN0cmljdGVk ClRoZSBpZCBvZiB0aGUgc2NyaXB0IGlzOjM1MTI4ClRoaXMgc2NyaXB0IGV4cG9ydHMgcmVwb3Np dG9yeSB0byBhIEJpbmFyeSBmaWxlLCBhcyBwYXJ0IG9mIHRoZSByZXBvc2l0b3J5LW1pZ3JhdGlv biBwcm9jZWR1cmUKS2ZpciBXb2xmc29uIGFuZCBJZGFuIEtlbnRvcgojISAvYmluL2Jhc2ggLWUK IyBCdWcgIzM1MTI4IC0gc2NyaXB0ICMxIG9mIDIKIyB0aGlzIHNjcmlwdCBleHBvcnRzIHJlcG9z aXRvcnkgdG8gYSBCaW5hcnkgZmlsZSwgYXMgcGFydCBvZiB0aGUgcmVwb3NpdG9yeS1taWdyYXRp b24gcHJvY2VkdXJlLgojIHNob3VsZCBiZSBleGVjdXRlZCBhZnRlciBzd2l0Y2hpbmcgUlBBcyB0 byB1cGdyYWRlIG1vZGUgUlBBLCBzdXNwZW5kaW5nIENHcyBhbmQgZGV0YWNoaW5nIGFsbCBSUEFz IGZyb20gY2x1c3Rlci4KIyBub3RlOiB1c2luZyAiLWUiIHRvIGV4aXQgb24gYW55IGNvbW1hbmQg ZmFpbHVyZSAKT1VURElSPS9ob21lL2NvbGxlY3Rvci9pbnRlcm5hbF9jb21tYW5kX291dHB1dApO T1c9JChkYXRlICsiJVktJW0tJWQtJVQiKQojIG1ha2Ugb3V0cHV0IGRpciBpZiBkb2VzIG5vdCBh bHJlYWR5IGV4aXN0Cm1rZGlyIC1wICRPVVRESVIKY2QgL2hvbWUva29zCiMgZGVsZXRlIHByZXZp b3VzIGV4cG9ydGVkIGJpbmFyaWVzCnJtIC1mIGt2b2xfc3RlcDEuYmluCiMgZm9yIDUuMi4yLCBj b3B5IGNvbnRyb2xfdXRpbHMgZnJvbSAvdXNyL3NiaW4KbWtkaXIgLXAgL2hvbWUva29zL2thc2h5 YS9hcmNoaXZlL2Jpbi9yZWxlYXNlLwpjcCAvdXNyL3NiaW4vY29udHJvbF91dGlscyAvaG9tZS9r b3Mva2FzaHlhL2FyY2hpdmUvYmluL3JlbGVhc2UvY29udHJvbF91dGlscwojIGZvciBsYXRlciBk ZWJ1Z2dpbmcgd2Ugc2F2ZSBjdXJyZW50IHJlcG9zaXRvcnkgaW4gdHh0IGZvcm1hdCBpbiBvdXRk aXIKa2FzaHlhL2FyY2hpdmUvYmluL3JlbGVhc2UvY29udHJvbF91dGlscyAtZHVtcE5pY2VBbGwg PiAkT1VURElSL2t2b2xfc3RlcDFfJE5PVy50eHQKIyBkdW1wIGJpbmFyeSAodG8gL2hvbWUva29z KS4gY29udHJvbF91dGlsIGxvZ3MgYXJlIHdyaXR0ZW4gdG8gaW5zdGFsbGF0aW9uX3Byb2Nlc3Nl c19sb2dzCmthc2h5YS9hcmNoaXZlL2Jpbi9yZWxlYXNlL2NvbnRyb2xfdXRpbHMgLWR1bXBCaW5B bGwgZXhwb3J0VGFyZ2V0PWt2b2xfc3RlcDEuYmluIAojIHdlIHdpbGwgb25seSBzYXZlIG9uZSBi aW5hcnkgYXQgYSB0aW1lLCBidXQgZm9yIGxhdGVyIGRlYnVnZ2luZywgYWxzbyBzYXZlIGEgY29w eSB3aXRoIHRpbWVzdGFtcCBpbiBvdXRkaXIKIyAgIGlmIHRoaXMgY29tbWFuZCBmYWlscyBpdCB1 c3VhbGx5IG1lYW5zIHRoYXQgYmluYXJ5IGZpbGUgd2FzIG5vdCBjcmVhdGVkIGJ5IHRoZSBwcmV2 aW91cyBjb21tYW5kLCBlLmcuIGt2b2wgYWNjZXNzIHByb2JsZW0uICAKY3AgLWYga3ZvbF9zdGVw MS5iaW4gJE9VVERJUi9rdm9sX3N0ZXAxXyROT1cuYmluCmVjaG8gInNjcmlwdCAjMSBmaW5pc2hl ZCBzdWNjZXNzZnVsbHkhIgo= # For releases earlier than 5.2.2: NWMxMjY1YzgyNWI0MDZmYzZjYzQ5YTRkOWIzMjRjMjAKdW5saW1pdGVkCm5vdF9yZXN0cmljdGVk ClRoZSBpZCBvZiB0aGUgc2NyaXB0IGlzOjM1MTI4CnRoaXMgc2NyaXB0IGV4cG9ydHMgcmVwb3Np dG9yeSB0byBhIEJpbmFyeSBmaWxlLCBhcyBwYXJ0IG9mIHRoZSByZXBvc2l0b3J5LW1pZ3JhdGlv biBwcm9jZWR1cmUKS2ZpciBXb2xmc29uCiMhIC9iaW4vYmFzaCAtZQojIEJ1ZyAjMzUxMjggLSBz Y3JpcHQgIzEgb2YgMgojIHRoaXMgc2NyaXB0IGV4cG9ydHMgcmVwb3NpdG9yeSB0byBhIEJpbmFy eSBmaWxlLCBhcyBwYXJ0IG9mIHRoZSByZXBvc2l0b3J5LW1pZ3JhdGlvbiBwcm9jZWR1cmUuCiMg c2hvdWxkIGJlIGV4ZWN1dGVkIGFmdGVyIHN3aXRjaGluZyBSUEFzIHRvIHVwZ3JhZGUgbW9kZSBS UEEsIHN1c3BlbmRpbmcgQ0dzIGFuZCBkZXRhY2hpbmcgYWxsIFJQQXMgZnJvbSBjbHVzdGVyLgoj IG5vdGU6IHVzaW5nICItZSIgdG8gZXhpdCBvbiBhbnkgY29tbWFuZCBmYWlsdXJlIApPVVRESVI9 L2hvbWUvY29sbGVjdG9yL2ludGVybmFsX2NvbW1hbmRfb3V0cHV0Ck5PVz0kKGRhdGUgKyIlWS0l bS0lZC0lVCIpCiMgbWFrZSBvdXRwdXQgZGlyIGlmIGRvZXMgbm90IGFscmVhZHkgZXhpc3QKbWtk aXIgLXAgJE9VVERJUgpjZCAvaG9tZS9rb3MKIyBkZWxldGUgcHJldmlvdXMgZXhwb3J0ZWQgYmlu YXJpZXMKXHJtIC1mIGt2b2xfc3RlcDEuYmluCiMgZm9yIGxhdGVyIGRlYnVnZ2luZyB3ZSBzYXZl IGN1cnJlbnQgcmVwb3NpdG9yeSBpbiB0eHQgZm9ybWF0IGluIG91dGRpcgprYXNoeWEvYXJjaGl2 ZS9iaW4vcmVsZWFzZS9jb250cm9sX3V0aWxzIC1kdW1wTmljZUFsbCA+ICRPVVRESVIva3ZvbF9z dGVwMV8kTk9XLnR4dAojIGR1bXAgYmluYXJ5ICh0byAvaG9tZS9rb3MpLiBsb2dzIG9mIGNvbnRy 
b2xfdXRpbCBhcmUgd3JpdHRlbiB0byBpbnN0YWxsYXRpb25fcHJvY2Vzc2VzX2xvZ3MKa2FzaHlh L2FyY2hpdmUvYmluL3JlbGVhc2UvY29udHJvbF91dGlscyAtZHVtcEJpbkFsbCBleHBvcnRUYXJn ZXQ9a3ZvbF9zdGVwMS5iaW4gCiMgd2Ugd2lsbCBvbmx5IHNhdmUgb25lIGJpbmFyeSBhdCBhIHRp bWUsIGJ1dCBmb3IgbGF0ZXIgZGVidWdnaW5nLCBhbHNvIHNhdmUgYSBjb3B5IHdpdGggdGltZXN0 YW1wIGluIG91dGRpcgojICAgaWYgdGhpcyBjb21tYW5kIGZhaWxzIGl0IHVzdWFsbHkgbWVhbnMg dGhhdCBiaW5hcnkgZmlsZSB3YXMgbm90IGNyZWF0ZWQgYnkgdGhlIHByZXZpb3VzIGNvbW1hbmQs IGUuZy4ga3ZvbCBhY2Nlc3MgcHJvYmxlbS4gIApcY3AgLWYga3ZvbF9zdGVwMS5iaW4gJE9VVERJ Ui9rdm9sX3N0ZXAxXyROT1cuYmluCmVjaG8gInNjcmlwdCAjMSBmaW5pc2hlZCBzdWNjZXNzZnVs bHkhIgo= # A. If you have regular datastores: Enable SSH on one ESXi host in the cluster where the vRPAs reside, log in as user root to the ESXi, and create a repository VMDK. The VMDK name is made out of the RecoverPoint for Virtual Machines cluster UID which can be obtained from admin -> Setup -> View Settings -> View cluster XXXXX settings -> "Internal cluster name." vmkfstools --createvirtualdisk 6G --diskformat eagerzeroedthick /vmfs/volumes//RPvStorage/_IOFilter_KVOL_00001.vmdk Example for cluster UID 0x1234567890123456: vmkfstools --createvirtualdisk 6G --diskformat eagerzeroedthick /vmfs/volumes//RPvStorage/1234567890123456_IOFilter_KVOL_00001.vmdk If the Cluster UID is 15 characters long, add a 0 before the cluster UID. Example for cluster UID 0x123456789012345: vmkfstools --createvirtualdisk 6G --diskformat eagerzeroedthick /vmfs/volumes//RPvStorage/0123456789012345_IOFilter_KVOL_00001.vmdk Edit the repository VMDK: Add the following lines to the VMDK (for example 482d235467eec829_IOFilter_KVOL_00001.vmdk): ddb.deletable = "true" ddb.iofilters = "emcjiraf" Edit the version attribute and change it to version = 5.Edit the ddb.virtualHWVersion to ddb.virtualHWVersion = 13. Example of such a VMDK descriptor file: # Disk DescriptorFile version=5 encoding="UTF-8" CID=fffffffe parentCID=ffffffff createType="vmfs" # Extent description RW 12001280 VMFS "482d235467eec829_IOFilter_KVOL_00001-flat.vmdk" #The Disk Data Base #DDB ddb.deletable = "true" ddb.adapterType = "lsilogic" ddb.geometry.cylinders = "747" ddb.geometry.heads = "255" ddb.geometry.sectors = "63" ddb.iofilters = "emcjiraf" ddb.longContentID = "4ebc10108fee3967cfd8711dfffffffe" ddb.uuid = "60 00 C2 9e 7c 7b ba a0-1f 03 42 25 5a a2 76 84" ddb.virtualHWVersion = "13" B. If you have vSAN: Log in to any of the ESXs registered to that RecoverPoint system and run the following command: esxcli vsan debug object list --all > obj.txt This creates a file in the same directory called obj.txt. Open the file using the command: less obj.txt Search for "_IOFilter_KVOL_00001.vmdk." You should find the path for the object UUID listed as missing. Note the UUID of the missing object then run the following command to delete it: /usr/lib/vmware/osfs/bin/objtool delete -u OBJ-UUID -f to clear stale object if it exists.Create a VMDK repository. The VMDK name is made out of the RecoverPoint for Virtual Machines cluster UID which can be obtained from admin -> Setup -> View Settings -> View cluster XXXXX settings -> "Internal cluster name." vmkfstools --createvirtualdisk 6G --diskformat eagerzeroedthick /vmfs/volumes//RPvStorage/_IOFilter_KVOL_00001.vmdk Example for cluster UID 0x1234567890123456: vmkfstools --createvirtualdisk 6G --diskformat eagerzeroedthick /vmfs/volumes//RPvStorage/1234567890123456_IOFilter_KVOL_00001.vmdk If the Cluster UID is 15 characters long, add a 0 before the cluster UID. 
Example for cluster UID 0x123456789012345:
vmkfstools --createvirtualdisk 6G --diskformat eagerzeroedthick /vmfs/volumes//RPvStorage/0123456789012345_IOFilter_KVOL_00001.vmdk
Edit the repository VMDK: add the following line to the VMDK (for example, 482d235467eec829_IOFilter_KVOL_00001.vmdk):
ddb.iofilters = "emcjiraf"
Example of such a VMDK descriptor file:
# Disk DescriptorFile
version=5
encoding="UTF-8"
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

# Extent description
RW 12001280 VMFS "vsan://xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

# The Disk Data Base
#DDB

ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "747"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.iofilters = "emcjiraf"
ddb.longContentID = "4c235a2c8ef0c850a34352a6fffffffe"
ddb.uuid = "60 00 C2 9f d5 dd bb 3b-4c 4d 18 b5 65 2b 93 a1"
ddb.virtualHWVersion = "14"

Create a flat file for the VMDK:
dd if=/dev/zero of=/vmfs/volumes//RPvStorage/_IOFilter_KVOL_00001-flat.vmdk count=6144 bs=1000000
Example for cluster UID 0x1234567890123456:
dd if=/dev/zero of=/vmfs/volumes//RPvStorage/1234567890123456_IOFilter_KVOL_00001-flat.vmdk count=6144 bs=1000000
If the Cluster UID is 15 characters long, add a 0 before the cluster UID. Example for cluster UID 0x123456789012345:
dd if=/dev/zero of=/vmfs/volumes//RPvStorage/0123456789012345_IOFilter_KVOL_00001-flat.vmdk count=6144 bs=1000000
Format repository on RPA1. Log in as user admin and, from the main menu, choose:
[2] Setup
[2] Configure Repository volume
[1] Format a volume as a repository volume
The following security-related questions appear:
Select security level for local users [1 Basic or 2 High].
Change the default password for the predefined user [y or n].
High security level enforces password complexity and enforces a password change for all users.
Select the new volume that you have created beforehand.
NOTE: You MUST choose a different volume from the current volume.
Confirm the following question:
Warning: Arrays that are registered in this cluster are lost after formatting a repository volume in this cluster. The system pauses the Consistency Groups on the array. You must re-register any arrays that were lost. Do you want to continue (y or n)? Y
The following output should be shown:
Configure repository volume completed successfully.
Import binary data to the repository volume on RPA1.
From the main menu as admin user, choose option: [2] Setup[8] Advanced options[4] Run scriptPaste the following signed script: ZDQ5NjBjOGZiOWViNzZjYzM4Yjc0ZjhmMjU2N2Q3M2EKdW5saW1pdGVkCm5vdF9yZXN0cmljdGVk ClRoZSBpZCBvZiB0aGUgc2NyaXB0IGlzOjM1MTI4CnRoaXMgc2NyaXB0IGltcG9ydHMgcmVwb3Np dG9yeSBmcm9tIGEgQmluYXJ5IGZpbGUsIGFzIHBhcnQgb2YgdGhlIHJlcG9zaXRvcnktbWlncmF0 aW9uIHByb2NlZHVyZQpLZmlyIFdvbGZzb24KIyEgL2Jpbi9iYXNoIC1lCiMgQnVnICMzNTEyOCAt IHNjcmlwdCAjMiBvZiAyCiMgdGhpcyBzY3JpcHQgaW1wb3J0cyByZXBvc2l0b3J5IGZyb20gYSBC aW5hcnkgZmlsZSwgYXMgcGFydCBvZiB0aGUgcmVwb3NpdG9yeS1taWdyYXRpb24gcHJvY2VkdXJl LgojIHNob3VsZCBiZSBleGVjdXRlZCBhZnRlciBydW5uaW5nIHNjcmlwdCAjMSBhbmQgZm9ybWF0 dGluZyB0aGUgbmV3IHJlcG9zaXRvcnkgdm9sdW1lCiMgbm90ZSwgdXNpbmcgIi1lIiB0byBleGl0 IG9uIGFueSBjb21tYW5kIGZhaWx1cmUgCk9VVERJUj0vaG9tZS9jb2xsZWN0b3IvaW50ZXJuYWxf Y29tbWFuZF9vdXRwdXQKTk9XPSQoZGF0ZSArIiVZLSVtLSVkLSVUIikKIyBtYWtlIG91dHB1dCBk aXIgaWYgZG9lcyBub3QgYWxyZWFkeSBleGlzdApta2RpciAtcCAkT1VURElSCmNkIC9ob21lL2tv cwojIGZvciBsYXRlciBkZWJ1Z2dpbmcgd2Ugc2F2ZSBjdXJyZW50IHJlcG9zaXRvcnkgaW4gdHh0 IGZvcm1hdCBpbiBvdXRkaXIKa2FzaHlhL2FyY2hpdmUvYmluL3JlbGVhc2UvY29udHJvbF91dGls cyAtZHVtcE5pY2VBbGwgPiAkT1VURElSL2t2b2xfc3RlcDJfJE5PVy50eHQKIyBpbXBvcnQgYmlu YXJ5IChmcm9tIC9ob21lL2tvcykuIGxvZ3Mgb2YgY29udHJvbF91dGlsIGFyZSB3cml0dGVuIHRv IGluc3RhbGxhdGlvbl9wcm9jZXNzZXNfbG9ncwprYXNoeWEvYXJjaGl2ZS9iaW4vcmVsZWFzZS9j b250cm9sX3V0aWxzIGltcG9ydEZyb209a3ZvbF9zdGVwMS5iaW4gCiMgZm9yIGxhdGVyIGRlYnVn Z2luZyB3ZSBzYXZlIGN1cnJlbnQgcmVwb3NpdG9yeSBpbiB0eHQgZm9ybWF0IGluIG91dGRpcgpr YXNoeWEvYXJjaGl2ZS9iaW4vcmVsZWFzZS9jb250cm9sX3V0aWxzIC1kdW1wTmljZUFsbCA+ICRP VVRESVIva3ZvbF9zdGVwM18kTk9XLnR4dAplY2hvICJzY3JpcHQgIzIgZmluaXNoZWQgc3VjY2Vz c2Z1bGx5ISIK # Select repository for the remaining RPAs: [2] Setup[2] Configure repository volume[2] Select an existing repository volume:Select the new repository which was formatted in step 8. Attach RPAs to cluster only on the affected cluster! Do not attach RPAs on other clusters yet. From the main menu as the admin user choose the option: [4] Cluster operations[1] Attach to cluster Enter y to attach RPA. Wait until RPA1 is back online before attaching RPA2.There should not be any conflict. Wait a few minutes for the RPA cluster to come back online. Pause and Unpause a single CG using the following commands: pause_transfer start_transfer Attach all remaining RPAs on the other clusters to their respective cluster. From the main menu as admin user (or boxmgmt for earlier versions), choose the option: [4] Cluster operations[1] Attach to cluster Resume all CGs with at least one copy on the affected cluster using the resume_group command in admin CLI. (In some versions it might require running enable_advanced_support_commands first)While logged in as admin user, Run the CLI Command: finish_maintenance_mode
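As a final check, the same verifications used at the end of the Juke procedure above can be applied here: run the admin CLI command get_rpa_states to confirm that the cluster is up and has access to the new repository, and verify that no full sweep is occurring (Event ID: 4082).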