Conductor Documentation

Installing a Fully Distributed (9 Nodes) Cluster with Cluster Manager

Use the Cluster Manager package to automate the installation of a nine-node cluster, with certificates generated automatically by the application and node-0 acting as the Cluster Manager. A fully distributed cluster consists of nine nodes: three manager nodes, three database (PostgreSQL) nodes, and three broker (RabbitMQ) nodes.

This process can be performed on 10 virtual machines running CentOS: the nine cluster nodes plus node-0, which runs the Cluster Manager.

To manually install a fully distributed cluster without using Cluster Manager, see Installing a Fully Distributed Cluster.

Fully Distributed Cluster Network Architecture

Figure: Fully Distributed Cluster

Installation Overview

Setting up a fully distributed cluster involves the following steps:

  1. Update the VMs to meet the basic prerequisites.
  2. Upload the license file to each node.
  3. Open the required ports.
  4. Generate the configuration file and run the Cluster Manager.

Prerequisites

Review the following prerequisites to make sure your system supports this configuration. For general guidelines, see Sizing Guidelines.

Note: If an internet connection is not available, you must use an alternate method to update the base image packages.

Configuration requirements

The following configuration settings should be available prior to installation:

Sizing Guidelines

Node Type   vCPUs   RAM     Storage
Database    2       16GB    64GB
Broker      2       4GB     32GB
Manager     4       8GB     32GB
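
For capacity planning, the per-node figures above can be multiplied across the three nodes of each type. A quick sketch of the totals for the nine cluster nodes:

```shell
# Aggregate resources for a nine-node cluster (3 of each node type),
# using the per-node sizing figures from the table above.
vcpus=$(( 3*2 + 3*2 + 3*4 ))          # database + broker + manager
ram_gb=$(( 3*16 + 3*4 + 3*8 ))
storage_gb=$(( 3*64 + 3*32 + 3*32 ))
echo "total: ${vcpus} vCPUs, ${ram_gb}GB RAM, ${storage_gb}GB storage"
```

This does not include node-0 (the Cluster Manager VM) or hypervisor overhead.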

Preparing for Installation

The following steps are required before running the cluster installation:

  1. Obtain the Cluster Manager RPMs.
  2. Prepare the VMs.
  3. Upload the license file to the node on which the cfy_cluster_manager command will be executed (usually the first node in the cluster).
  4. Install the required Python packages.
  5. Open the required ports.
  6. Identify and record IP addresses and host names.
  7. Generate cluster certificates.
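
For step 6, the following sketch prints a VM's hostname and primary IP address so you can record them; run it on every node. It assumes hostname -I is available, as it is on CentOS:

```shell
# Print this VM's hostname and first reported IP address.
echo "hostname: $(hostname)"
echo "ip:       $(hostname -I 2>/dev/null | awk '{print $1}')"
```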

Obtain the Cluster Manager RPM and the Conductor Manager Installation RPM

These RPM files contain all the components and dependencies required to run the installation process and are available on Wind River Delivers, Wind River’s software portal. For detailed instructions on accessing Wind River Delivers and downloading the files, see the Wind River Installation and Licensing Guide.

Prepare the VMs

  1. Add a public DNS nameserver to /etc/resolv.conf on all VMs.

    echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolv.conf
  2. Add the user cfyuser to the list of sudoers on all VMs.

    echo "cfyuser ALL=(ALL) NOPASSWD:ALL" |  sudo tee /etc/sudoers.d/cfyuser
  3. If required, update the operating system: log in as root on each VM, update the base image packages, and reboot using the following commands:

    yum update -y
    reboot

Note: The recommended operating system for a nine-node cluster is CentOS 7.9.

Install Required Packages

Additional packages, including Python modules, are required to support the Manager. As root, enter the following:

sudo yum install wget unzip rsync python-setuptools python-backports python-backports-ssl_match_hostname firewalld -y

Upload the License File to Each Node

Copy the license file you received from Wind River to each of the nodes and document the path. You will need to enter this path when you update the config.yaml file.
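
One way to distribute the file is scp from your workstation. This is only a sketch: the node IPs, the centos user, and the license filename below are placeholders for your own values, and the loop prints the commands rather than running them:

```shell
# Dry run: print the copy command for each node. Replace the example
# IPs, user, and license filename with your own values, then run the
# printed commands.
for ip in 192.0.2.11 192.0.2.12 192.0.2.13; do
  echo "scp ./license.yaml centos@${ip}:/home/centos/license.yaml"
done
```

Whatever destination path you choose, record it: it is the value for cloudify_license_path in the config file.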

Open TCP Ports and Activate Firewalld for Network Access

For proper network communication, open the ports listed below on all nodes.

sudo systemctl enable firewalld
sudo systemctl start firewalld
sudo firewall-cmd --permanent --add-port=22/tcp
sudo firewall-cmd --permanent --add-port=80/tcp
sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --permanent --add-port=2379/tcp
sudo firewall-cmd --permanent --add-port=2380/tcp
sudo firewall-cmd --permanent --add-port=5432/tcp
sudo firewall-cmd --permanent --add-port=8008/tcp
sudo firewall-cmd --permanent --add-port=8009/tcp
sudo firewall-cmd --permanent --add-port=4369/tcp
sudo firewall-cmd --permanent --add-port=5672/tcp
sudo firewall-cmd --permanent --add-port=25672/tcp
sudo firewall-cmd --permanent --add-port=35672/tcp
sudo firewall-cmd --permanent --add-port=15672/tcp
sudo firewall-cmd --permanent --add-port=61613/tcp
sudo firewall-cmd --permanent --add-port=1883/tcp
sudo firewall-cmd --permanent --add-port=15674/tcp
sudo firewall-cmd --permanent --add-port=15675/tcp
sudo firewall-cmd --permanent --add-port=15692/tcp
sudo firewall-cmd --permanent --add-port=5671/tcp
sudo firewall-cmd --permanent --add-port=22000/tcp
sudo firewall-cmd --permanent --add-port=53333/tcp
sudo firewall-cmd --permanent --add-port=25671/tcp
sudo firewall-cmd --permanent --add-port=15671/tcp
sudo firewall-cmd --reload
sudo firewall-cmd --list-ports
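
Because the port list is long, it can help to generate the firewall-cmd invocations from a single list. This sketch prints the commands so you can review them first; run the printed commands (or pipe the output to sh) to apply them:

```shell
# Generate the firewall-cmd invocations from one port list
# (same ports as the commands above).
ports="22 80 443 2379 2380 5432 8008 8009 4369 5672 25672 35672 15672 61613 1883 15674 15675 15692 5671 22000 53333 25671 15671"
for p in $ports; do
  echo "sudo firewall-cmd --permanent --add-port=${p}/tcp"
done
echo "sudo firewall-cmd --reload"
```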

Installing Cluster Manager

  1. On the node acting as Cluster Manager (node-0), install the Cluster Manager RPM and its supporting packages by entering:

    sudo yum install -y $HOME/cloudify-cluster-manager-22.11-ga.el7.x86_64.rpm
    sudo yum install -y epel-release
    sudo yum install -y haveged
    sudo systemctl start haveged
  2. On the node acting as Cluster Manager (node-0), generate the cluster configuration file:

    cfy_cluster_manager generate-config --nine-nodes
  3. Use a text editor to enter your specific network parameters. Update the fields shown below by replacing the values marked in <> with values for your network.

See Filling in the configuration file for instructions on updating the file.

# The VMs' SSH username
ssh_user: '<username>'

# The user's password for SSH connection. This cannot be used with ssh_key_path
ssh_password: '<secure-password-like-string>'

# Your private SSH key local path used to connect to all VMs
ssh_key_path: ''

# Local path to a valid license
cloudify_license_path: '<license file path>'

# Manager RPM to install on the cluster instances
# Example:  cloudify-manager-install-22.11-ga.el7.x86_64.rpm
manager_rpm_path: '<manager rpm file path>'

# This section is only relevant if using LDAP
ldap:
  # This should include the protocol and port,
  # e.g. ldap://192.0.2.1:389 or ldaps://192.0.2.45:636
  server: ''

  # The domain, e.g. example.local
  domain: ''

  # True if Active Directory will be used as the LDAP authenticator
  is_active_directory: true

  # This must be provided if the server is using ldaps://
  ca_cert: ''

  # Username and password should only be entered if absolutely required
  # by the ldap service.
  username: '<username>'
  password: '<secure-password-like-string>'

  # Any extra LDAP information (separated by the `;` sign. e.g. a=1;b=2)
  dn_extra: ''


# If specified, all the VMs' certificates will need to be specified as well
ca_cert_path: '<certificate_path>'

# If using a load-balancer, please provide its IP.
# This IP will be written to the manager config.yaml files under
# networks[load_balancer].
# Remark: The load balancer is not installed during the cluster installation.
load_balancer_ip: ''


existing_vms:
    manager-1:
      private_ip: '<private-ip manager-1>'
      public_ip: '<public-ip manager-1>'  # If not specified, will default to the private-ip
      hostname: '<manager-1-host-name>'   # Optional. As specified in the certificate (if specified)
      cert_path: '<certificate_path>'  # Needs to be supplied if ca_cert_path was supplied
      key_path: '<key_path>'  # Needs to be supplied if ca_cert_path was supplied
      # Optional. In case you wish to use your own config.yaml files.
      config_path:
        manager_config_path: ''
        postgresql_config_path: ''
        rabbitmq_config_path: ''

    manager-2:
      private_ip: '<private-ip manager-2>'
      public_ip: '<public-ip manager-2>'  # If not specified, will default to the private-ip
      hostname: '<manager-2-host-name>'   # Optional. As specified in the certificate (if specified)
      cert_path: '<certificate_path>'  # Needs to be supplied if ca_cert_path was supplied
      key_path: '<key_path>'  # Needs to be supplied if ca_cert_path was supplied
      # Optional. In case you wish to use your own config.yaml files.
      config_path:
        manager_config_path: ''
        postgresql_config_path: ''
        rabbitmq_config_path: ''

    manager-3:
      private_ip: '<private-ip manager-3>'
      public_ip: '<public-ip manager-3>'  # If not specified, will default to the private-ip
      hostname: '<manager-3-host-name>'   # Optional. As specified in the certificate (if specified)
      cert_path: '<certificate_path>'  # Needs to be supplied if ca_cert_path was supplied
      key_path: '<key_path>'  # Needs to be supplied if ca_cert_path was supplied
      # Optional. In case you wish to use your own config.yaml files.
      config_path:
        manager_config_path: ''
        postgresql_config_path: ''
        rabbitmq_config_path: ''

    postgresql-1:
      private_ip: '<private-ip postgresql-1>'
      public_ip: '<public-ip postgresql-1>'  # If not specified, will default to the private-ip
      hostname: '<postgresql-1-host-name>'   # Optional. As specified in the certificate (if specified)
      cert_path: '<certificate_path>'  # Needs to be supplied if ca_cert_path was supplied
      key_path: '<key_path>'  # Needs to be supplied if ca_cert_path was supplied
      # Optional. In case you wish to use your own config.yaml files.
      config_path:
        manager_config_path: ''
        postgresql_config_path: ''
        rabbitmq_config_path: ''

    postgresql-2:
      private_ip: '<private-ip postgresql-2>'
      public_ip: '<public-ip postgresql-2>'  # If not specified, will default to the private-ip
      hostname: '<postgresql-2-host-name>'   # Optional. As specified in the certificate (if specified)
      cert_path: '<certificate_path>'  # Needs to be supplied if ca_cert_path was supplied
      key_path: '<key_path>'  # Needs to be supplied if ca_cert_path was supplied
      # Optional. In case you wish to use your own config.yaml files.
      config_path:
        manager_config_path: ''
        postgresql_config_path: ''
        rabbitmq_config_path: ''

    postgresql-3:
      private_ip: '<private-ip postgresql-3>'
      public_ip: '<public-ip postgresql-3>'  # If not specified, will default to the private-ip
      hostname: '<postgresql-3-host-name>'   # Optional. As specified in the certificate (if specified)
      cert_path: '<certificate_path>'  # Needs to be supplied if ca_cert_path was supplied
      key_path: '<key_path>'  # Needs to be supplied if ca_cert_path was supplied
      # Optional. In case you wish to use your own config.yaml files.
      config_path:
        manager_config_path: ''
        postgresql_config_path: ''
        rabbitmq_config_path: ''
		
    rabbitmq-1:
      private_ip: '<private-ip rabbitmq-1>'
      public_ip: '<public-ip rabbitmq-1>'  # If not specified, will default to the private-ip
      hostname: '<rabbitmq-1-host-name>'   # Optional. As specified in the certificate (if specified)
      cert_path: '<certificate_path>'  # Needs to be supplied if ca_cert_path was supplied
      key_path: '<key_path>'  # Needs to be supplied if ca_cert_path was supplied
      # Optional. In case you wish to use your own config.yaml files.
      config_path:
        manager_config_path: ''
        postgresql_config_path: ''
        rabbitmq_config_path: ''

    rabbitmq-2:
      private_ip: '<private-ip rabbitmq-2>'
      public_ip: '<public-ip rabbitmq-2>'  # If not specified, will default to the private-ip
      hostname: '<rabbitmq-2-host-name>'   # Optional. As specified in the certificate (if specified)
      cert_path: '<certificate_path>'  # Needs to be supplied if ca_cert_path was supplied
      key_path: '<key_path>'  # Needs to be supplied if ca_cert_path was supplied
      # Optional. In case you wish to use your own config.yaml files.
      config_path:
        manager_config_path: ''
        postgresql_config_path: ''
        rabbitmq_config_path: ''

    rabbitmq-3:
      private_ip: '<private-ip rabbitmq-3>'
      public_ip: '<public-ip rabbitmq-3>'  # If not specified, will default to the private-ip
      hostname: '<rabbitmq-3-host-name>'   # Optional. As specified in the certificate (if specified)
      cert_path: '<certificate_path>'  # Needs to be supplied if ca_cert_path was supplied
      key_path: '<key_path>'  # Needs to be supplied if ca_cert_path was supplied
      # Optional. In case you wish to use your own config.yaml files.
      config_path:
        manager_config_path: ''
        postgresql_config_path: ''
        rabbitmq_config_path: ''		

# If the credentials are not specified, random self-generated ones will be used and written to /home/centos/secret_credentials.yaml
credentials:
  manager:
    admin_username: '<username>'
    admin_password: '<secure-password-like-string>'

  postgresql:
    postgres_password: '<secure-password-like-string>'
    cluster:
      etcd:
        cluster_token: '<cluster token>'
        root_password: '<secure-password-like-string>'
        patroni_password: '<secure-password-like-string>'
      patroni:
        rest_password: '<secure-password-like-string>'
      postgres:
        replicator_password: '<secure-password-like-string>'

  rabbitmq:
    username: '<username>'
    password: '<secure-password-like-string>'
    erlang_cookie: '<cookiename>'

  prometheus:
    username: '<username>'
    password: '<secure-password-like-string>'

  4. Validate the configuration file using the cluster CLI command:

    sudo cfy_cluster_manager install --validate --config-path cfy_cluster_config.yaml

Example output

[CFY-CLUSTER-MANAGER] - DEBUG - Running: ['command', '-v', 'yum']
[CFY-CLUSTER-MANAGER] - INFO - Validating the configuration file
[CFY-CLUSTER-MANAGER] - INFO - Validating manager-1
[CFY-CLUSTER-MANAGER] - INFO - Validating manager-2
[CFY-CLUSTER-MANAGER] - INFO - Validating manager-3
[CFY-CLUSTER-MANAGER] - INFO - Validating postgresql-1
[CFY-CLUSTER-MANAGER] - INFO - Validating postgresql-2
[CFY-CLUSTER-MANAGER] - INFO - Validating postgresql-3
[CFY-CLUSTER-MANAGER] - INFO - Validating rabbitmq-1
[CFY-CLUSTER-MANAGER] - INFO - Validating rabbitmq-2
[CFY-CLUSTER-MANAGER] - INFO - Validating rabbitmq-3
[CFY-CLUSTER-MANAGER] - INFO - The configuration file at cfy_cluster_config.yaml was validated successfully.  
  5. Run the Cluster Manager installation:

    cfy_cluster_manager install --config-path cfy_cluster_config.yaml

Filling in the configuration file

General Note

Fill in the information according to the comments in the file itself. Note: Do not delete anything from the file.

Load-balancer

As mentioned above, a load balancer is not installed as part of the cluster installation. The load_balancer_ip value is used in the instances’ config.yaml files to configure their connections.
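
For illustration, the entry that ends up in each manager's config.yaml might look like the following sketch (the IP is a placeholder, and the exact surrounding keys depend on your installation):

```yaml
# Sketch: networks entry written to a manager's config.yaml when
# load_balancer_ip is set to 192.0.2.100 in the cluster config.
networks:
  load_balancer: '192.0.2.100'
```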

Certificates

config.yaml files

Credentials
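
As noted in the sample configuration, if the credentials section is left empty the installer writes self-generated credentials to /home/centos/secret_credentials.yaml. A small sketch for retrieving them after installation:

```shell
# Look for the auto-generated credentials file written by the installer
# when the credentials section was left empty (path per the config comments).
creds=/home/centos/secret_credentials.yaml
if [ -f "$creds" ]; then
  sudo cat "$creds"
else
  echo "no generated credentials file at $creds"
fi
```

Store or delete this file securely once you have recorded the credentials.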

Post Installation

Once the database, broker, and manager nodes are installed, run the following commands on one of the manager nodes to verify the cluster:

cfy cluster db-nodes list
cfy cluster brokers list
cfy cluster managers list
cfy cluster status