Trying to create an MDM cluster with the create_mdm_cluster command fails:

scli --create_mdm_cluster --master_mdm_ip 192.168.X.X --master_mdm_management_ip XX.XX.XX.XX --master_mdm_name mdm04 --accept_license --use_nonsecure_communication

Error: MDM failed command. Status: The command that is used for Tie-Breaker is not allowed.
The MDM RPM packages were installed with the generic command rpm -i <name of the rpm package>. In PowerFlex 2.0, however, the MDM role must be set at installation time: if the MDM_ROLE_IS_MANAGER switch is not specified with a value of 0 or 1, the MDM is installed as a Tie-Breaker (TB) by default, and a Tie-Breaker is not allowed to run the create_mdm_cluster command.
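If an MDM was already installed without the switch, the package can be removed and reinstalled with the Manager role. A minimal sketch follows; EMC-ScaleIO-mdm is an assumed package name that varies by release, so check the actual name with rpm -qa first:

# List installed packages to find the exact MDM package name.
rpm -qa | grep -i scaleio

# Remove the MDM package that was installed without the role switch
# (EMC-ScaleIO-mdm is an assumed name; use the one rpm -qa reports).
rpm -e EMC-ScaleIO-mdm

# Reinstall with the Manager role so the node can create the cluster.
MDM_ROLE_IS_MANAGER=1 rpm -i <mdm_path.rpm>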
Installation in a five-node configuration (one Master MDM, two Slave MDMs, two TBs) or a three-node configuration (one Master MDM, one Slave MDM, one TB):

1. Install the MDM package on each server. During the installation, determine whether the server will be a Manager or a Tie-Breaker (the default).

2. Install the MDM with the Manager role by running the following command on the Master MDM (MDM 1) and the Slave MDMs (MDM 2 and MDM 3):

MDM_ROLE_IS_MANAGER=1 rpm -i <mdm_path.rpm>

Note: The default MDM credentials are user name admin, password admin.

3. Install the MDM on the Tie-Breaker MDMs (TB 1 and TB 2):

MDM_ROLE_IS_MANAGER=0 rpm -i <mdm_path.rpm>

Note: If the MDM_ROLE_IS_MANAGER switch is not specified, the MDM is installed as a TB by default.

4. Run the --create_mdm_cluster command (a worked example follows this procedure):

scli --create_mdm_cluster --master_mdm_ip <IP of MDM 1> --master_mdm_management_ip <Management IP of MDM 1> --master_mdm_name <Name to give MDM 1> --accept_license --use_nonsecure_communication
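Putting the procedure together for a three-node cluster, a sketch follows. The RPM file name, IP addresses, and MDM name are placeholder assumptions, and the final login uses the default admin/admin credentials noted above:

# On MDM 1 (Master) and MDM 2 (Slave): install with the Manager role.
# The RPM file name is a placeholder; use your actual package path.
MDM_ROLE_IS_MANAGER=1 rpm -i EMC-ScaleIO-mdm-2.0-x.el7.x86_64.rpm

# On TB 1: install with the Tie-Breaker role (also the default).
MDM_ROLE_IS_MANAGER=0 rpm -i EMC-ScaleIO-mdm-2.0-x.el7.x86_64.rpm

# On MDM 1: create the cluster (example addresses and name).
scli --create_mdm_cluster --master_mdm_ip 192.168.1.10 --master_mdm_management_ip 10.0.0.10 --master_mdm_name mdm01 --accept_license --use_nonsecure_communication

# Log in with the default credentials to continue configuration.
scli --login --username admin --password admin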