Installing a Fully Distributed (9 Nodes) Cluster Manually
A fully distributed cluster consists of nine nodes:
- 3 nodes for the database, providing a high-availability PostgreSQL cluster based on Patroni.
- 3 nodes for the broker, providing a high-availability RabbitMQ cluster based on the RabbitMQ best practices.
- 3 nodes for Conductor management, providing the Conductor workers framework, the REST API, the User Interface infrastructure and other backend services. The Conductor Management service is a cluster of at least two Manager nodes running in an active/active mode.
These instructions explain how to install a nine-node cluster without using Cluster Manager. To use Cluster Manager to automate the installation process, see Installing a Nine Nodes (Fully Distributed) Cluster with Cluster Manager.
If air-gapped operation is required, see Manual Installation Requirements for Air-Gapped Operation.
Fully Distributed Cluster Network Architecture
Installation Overview
Setting up a fully distributed cluster involves the following steps:
- Verifying that your environment meets the basic prerequisites.
- Generating the certificates.
- Running the Install program for the database, broker, and manager.
- Completing Day 2 requirements.
Prerequisites
Review the following prerequisites to make sure your system supports this configuration. For general guidelines, see Sizing Guidelines.
Operating System
The recommended operating system for a fully distributed cluster is CentOS 7.9.
Update to this operating system if required. After installing it, log in as root and update the base image packages using the following commands:
yum update -y
reboot
Note: If an internet connection is not available, an alternate method will need to be used to update the base image packages.
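One common alternative is a local yum repository built from downloaded media. The following is a minimal sketch, assuming the updated packages were copied to /opt/local-repo (a hypothetical path) on each node and that the createrepo package is available:
# Build repository metadata from the copied packages (hypothetical path).
createrepo /opt/local-repo
# Point yum at the local repository only.
cat > /etc/yum.repos.d/local.repo <<'EOF'
[local]
name=Local package repository
baseurl=file:///opt/local-repo
enabled=1
gpgcheck=0
EOF
# Update from the local repository and reboot.
yum update -y --disablerepo='*' --enablerepo=local
reboot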
Configuration requirements
The following configuration settings should be available prior to installation:
- Public and private IP settings
- A configurable host name
- Administrator privileges (e.g. sudo permissions)
- All nodes should be on the same network; if a firewall or security group is in place, the required ports must be open so that the relevant services are not blocked. A minimal sketch for checking these settings follows this list.
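The following is a minimal sketch for verifying these settings on a node; hostname1.example.com is a hypothetical name taken from the certificate examples later in this guide:
# Set and verify a configurable host name (hypothetical name).
hostnamectl set-hostname hostname1.example.com
hostnamectl status
# Confirm the expected private/public IP settings.
ip addr show
# Confirm administrator (sudo) privileges.
sudo -v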
Sizing Guidelines
| Node Type | vCPUs | RAM  | Storage |
|-----------|-------|------|---------|
| Database  | 2     | 16GB | 64GB    |
| Broker    | 2     | 4GB  | 32GB    |
| Manager   | 4     | 8GB  | 32GB    |
Preparing for Installation
The following steps are required prior to running the cluster installation:
- Install the manager RPM file on your system.
- Upload the license file to all the nodes in the cluster.
- Install the required Python packages.
- Generate cluster certificates.
Installing the Manager RPM
The RPM file contains all the components and dependencies required to run the installation process and is available on Wind River Delivers, Wind River’s software portal. For detailed instructions on accessing Wind River Delivers and downloading the file, see the Wind River Installation and Licensing Guide.
To install the Manager RPM, log in as root and enter:
yum install -y $HOME/cloudify-manager-install-22.11-ga.el7.x86_64.rpm
Uploading the License File to Each Node
Copy the license file you received from Wind River to each of the nodes and document the path. You will need to enter this path when you update the config.yaml file.
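For example, assuming the license was saved as /root/cloudify/license.yaml (the path used in the manager examples later in this guide) and root SSH access to all nodes is available, a copy loop might look like:
# Create the target directory and copy the license to each node (hypothetical hostnames).
for host in hostname1.example.com hostname2.example.com hostname3.example.com \
            hostname4.example.com hostname5.example.com hostname6.example.com \
            hostname7.example.com hostname8.example.com hostname9.example.com; do
  ssh root@${host} 'mkdir -p /root/cloudify'
  scp /root/cloudify/license.yaml root@${host}:/root/cloudify/license.yaml
done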
Installing Required Packages
Additional Python packages are required to support the Manager. As root, enter the following:
yum install -y unzip rsync python-setuptools python-backports python-backports-ssl_match_hostname
Generating Certificates
To allow communication across the cluster, certificates need to be generated and copied to each host in the cluster. For additional information about certificates, see Certificates Overview.
To generate test certificates using cfy_manager's built-in commands, perform the following:
On a single host in the cluster, enter:
cfy_manager generate-test-cert -s <manager-1-fqdn>,<manager-1-private-ip>,<manager-1-public-ip>
cfy_manager generate-test-cert -s <manager-2-fqdn>,<manager-2-private-ip>,<manager-2-public-ip>
cfy_manager generate-test-cert -s <manager-3-fqdn>,<manager-3-private-ip>,<manager-3-public-ip>
cfy_manager generate-test-cert -s <broker-1-fqdn>,<broker-1-private-ip>,<broker-1-public-ip>
cfy_manager generate-test-cert -s <broker-2-fqdn>,<broker-2-private-ip>,<broker-2-public-ip>
cfy_manager generate-test-cert -s <broker-3-fqdn>,<broker-3-private-ip>,<broker-3-public-ip>
cfy_manager generate-test-cert -s <db-1-fqdn>,<db-1-private-ip>,<db-1-public-ip>
cfy_manager generate-test-cert -s <db-2-fqdn>,<db-2-private-ip>,<db-2-public-ip>
cfy_manager generate-test-cert -s <db-3-fqdn>,<db-3-private-ip>,<db-3-public-ip>
Copy the relevant certificates and keys from the $HOME/.cloudify-test-ca/ directory on the host where the certificates were generated to the other hosts in the cluster (a copy sketch follows the example certificates below).
The following shows an example of the test certificates:
Example certificates
cfy_manager generate-test-cert -s hostname1.example.com,192.0.2.1,203.0.113.1
cfy_manager generate-test-cert -s hostname2.example.com,192.0.2.2,203.0.113.2
cfy_manager generate-test-cert -s hostname3.example.com,192.0.2.3,203.0.113.3
cfy_manager generate-test-cert -s hostname4.example.com,192.0.2.4,203.0.113.4
cfy_manager generate-test-cert -s hostname5.example.com,192.0.2.5,203.0.113.5
cfy_manager generate-test-cert -s hostname6.example.com,192.0.2.6,203.0.113.6
cfy_manager generate-test-cert -s hostname7.example.com,192.0.2.7,203.0.113.7
cfy_manager generate-test-cert -s hostname8.example.com,192.0.2.8,203.0.113.8
cfy_manager generate-test-cert -s hostname9.example.com,192.0.2.9,203.0.113.9
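A minimal copy sketch, assuming the certificates were generated on hostname1.example.com and root SSH access to the remaining hosts is available (rsync is installed as part of the required packages above):
# Copy the generated certificates and keys to every other host (hypothetical hostnames).
for host in hostname2.example.com hostname3.example.com hostname4.example.com \
            hostname5.example.com hostname6.example.com hostname7.example.com \
            hostname8.example.com hostname9.example.com; do
  rsync -av $HOME/.cloudify-test-ca/ root@${host}:/root/.cloudify-test-ca/
done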
Open Ports for Network Access
For proper network communication, open the ports listed below on all nodes.
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=2379/tcp
firewall-cmd --permanent --add-port=2380/tcp
firewall-cmd --permanent --add-port=5432/tcp
firewall-cmd --permanent --add-port=8008/tcp
firewall-cmd --permanent --add-port=8009/tcp
firewall-cmd --permanent --add-port=4369/tcp
firewall-cmd --permanent --add-port=5672/tcp
firewall-cmd --permanent --add-port=25672/tcp
firewall-cmd --permanent --add-port=35672/tcp
firewall-cmd --permanent --add-port=15672/tcp
firewall-cmd --permanent --add-port=61613/tcp
firewall-cmd --permanent --add-port=1883/tcp
firewall-cmd --permanent --add-port=15674/tcp
firewall-cmd --permanent --add-port=15675/tcp
firewall-cmd --permanent --add-port=15692/tcp
firewall-cmd --permanent --add-port=5671/tcp
firewall-cmd --permanent --add-port=22000/tcp
firewall-cmd --permanent --add-port=53333/tcp
firewall-cmd --permanent --add-port=25671/tcp
firewall-cmd --permanent --add-port=15671/tcp
firewall-cmd --reload
firewall-cmd --list-ports
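Equivalently, the same port list can be opened in a single loop; this sketch is functionally identical to the commands above:
# Open all required ports in one pass.
for port in 22 443 2379 2380 5432 8008 8009 4369 5672 25672 35672 15672 \
            61613 1883 15674 15675 15692 5671 22000 53333 25671 15671; do
  firewall-cmd --permanent --add-port=${port}/tcp
done
firewall-cmd --reload
firewall-cmd --list-ports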
Installing the Database
Once the preliminary installation tasks are complete, log in as root and run the following steps on each of the database nodes.
- On each database node, use a text editor to create the file /etc/cloudify/db_config.yaml and enter your specific network parameters. Update the fields shown below by replacing the values marked in <> with values for your network.
Note: This must be performed sequentially on each node.
# /etc/cloudify/db_config.yaml
manager:
private_ip: '<private-ip>'
public_ip: '<public-ip>'
postgresql_server:
postgres_password: '<secure-password-like-string>'
cert_path: '<this-node-local-certificate-path>'
key_path: '<this-node-local-private-key-path>'
ca_path: '<local-ca-certificate-path>'
cluster:
nodes:
<database-1-hostname>:
ip: '<database-1-private-ip>'
<database-2-hostname>:
ip: '<database-2-private-ip>'
<database-3-hostname>:
ip: '<database-3-private-ip>'
# Should be the same on all nodes
etcd:
cluster_token: '<secure-password-like-string>'
root_password: '<secure-password-like-string>'
patroni_password: '<secure-password-like-string>'
# Should be the same on all nodes
patroni:
rest_password: '<secure-password-like-string>'
# Should be the same on all nodes
postgres:
replicator_password: '<secure-password-like-string>'
# For monitoring service(status reporter)
prometheus:
credentials:
username: '<username>'
password: '<secure-password-like-string>'
cert_path: '<this-node-local-certificate-path>'
key_path: '<this-node-local-private-key-path>'
ca_path: '<local-ca-certificate-path>'
postgres_exporter:
# `password` is a placeholder and will be updated during config file rendering, based on postgresql_server.postgres_password
password: '<secure-password-like-string>'
sslmode: require
services_to_install:
- database_service
- monitoring_service
On each database node, enter the following to run the installation process.
cfy_manager install -c /etc/cloudify/db_config.yaml
Example process for Node 1:
# /etc/cloudify/db_config.yaml
manager:
private_ip: '192.0.2.7'
public_ip: '203.0.113.7'
postgresql_server:
postgres_password: 'strongserverpassword'
cert_path: '/root/.cloudify-test-ca/hostname7.example.com.crt'
key_path: '/root/.cloudify-test-ca/hostname7.example.com.key'
ca_path: '/root/.cloudify-test-ca/ca.crt'
cluster:
nodes:
hostname7:
ip: '192.0.2.7'
hostname8:
ip: '192.0.2.8'
hostname9:
ip: '192.0.2.9'
# Should be the same on all nodes
etcd:
cluster_token: 'clustertoken'
root_password: 'strongrootpassword'
patroni_password: 'strongpatronipassword'
# Should be the same on all nodes
patroni:
rest_password: 'strongrestpassword'
# Should be the same on all nodes
postgres:
replicator_password: 'strongreplicatorpassword'
# For monitoring service(status reporter)
prometheus:
credentials:
username: 'prometheusadmin'
password: 'strongprometheuspassword'
cert_path: '/root/.cloudify-test-ca/hostname7.example.com.crt'
key_path: '/root/.cloudify-test-ca/hostname7.example.com.key'
ca_path: '/root/.cloudify-test-ca/ca.crt'
postgres_exporter:
# `password` is a placeholder and will be updated during config file rendering, based on postgresql_server.postgres_password
password: 'strongpassword'
sslmode: require
services_to_install:
- database_service
- monitoring_service
cfy_manager install -c /etc/cloudify/db_config.yaml
Example process for Node 2:
# /etc/cloudify/db_config.yaml
manager:
private_ip: '192.0.2.8'
public_ip: '203.0.113.8'
postgresql_server:
postgres_password: 'strongserverpassword'
cert_path: '/root/.cloudify-test-ca/hostname8.example.com.crt'
key_path: '/root/.cloudify-test-ca/hostname8.example.com.key'
ca_path: '/root/.cloudify-test-ca/ca.crt'
cluster:
nodes:
hostname7:
ip: '192.0.2.7'
hostname8:
ip: '192.0.2.8'
hostname9:
ip: '192.0.2.9'
# Should be the same on all nodes
etcd:
cluster_token: 'clustertoken'
root_password: 'strongrootpassword'
patroni_password: 'strongpatronipassword'
# Should be the same on all nodes
patroni:
rest_password: 'strongrestpassword'
# Should be the same on all nodes
postgres:
replicator_password: 'strongreplicatorpassword'
# For monitoring service(status reporter)
prometheus:
credentials:
username: 'prometheusadmin'
password: 'strongprometheuspassword'
cert_path: '/root/.cloudify-test-ca/hostname8.example.com.crt'
key_path: '/root/.cloudify-test-ca/hostname8.example.com.key'
ca_path: '/root/.cloudify-test-ca/ca.crt'
postgres_exporter:
# `password` is a placeholder and will be updated during config file rendering, based on postgresql_server.postgres_password
password: 'strongpassword'
sslmode: require
services_to_install:
- database_service
- monitoring_service
cfy_manager install -c /etc/cloudify/db_config.yaml
Example process for Node 3:
# /etc/cloudify/db_config.yaml
manager:
private_ip: '192.0.2.9'
public_ip: '203.0.113.9'
postgresql_server:
postgres_password: 'strongserverpassword'
cert_path: '/root/.cloudify-test-ca/hostname9.example.com.crt'
key_path: '/root/.cloudify-test-ca/hostname9.example.com.key'
ca_path: '/root/.cloudify-test-ca/ca.crt'
cluster:
nodes:
hostname7:
ip: '192.0.2.7'
hostname8:
ip: '192.0.2.8'
hostname9:
ip: '192.0.2.9'
# Should be the same on all nodes
etcd:
cluster_token: 'clustertoken'
root_password: 'strongrootpassword'
patroni_password: 'strongpatronipassword'
# Should be the same on all nodes
patroni:
rest_password: 'strongrestpassword'
# Should be the same on all nodes
postgres:
replicator_password: 'strongreplicatorpassword'
# For monitoring service(status reporter)
prometheus:
credentials:
username: 'prometheusadmin'
password: 'strongprometheuspassword'
cert_path: '/root/.cloudify-test-ca/hostname9.example.com.crt'
key_path: '/root/.cloudify-test-ca/hostname9.example.com.key'
ca_path: '/root/.cloudify-test-ca/ca.crt'
postgres_exporter:
# `password` is a placeholder and will be updated during config file rendering, based on postgresql_server.postgres_password
password: 'strongpassword'
sslmode: require
services_to_install:
- database_service
- monitoring_service
cfy_manager install -c /etc/cloudify/db_config.yaml
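To spot-check the database cluster before moving on, the Patroni REST API (port 8008, opened in the firewall rules above) can be queried from any database node. A minimal sketch, assuming the REST API is served over HTTPS with the certificates configured above; -k skips verification of the self-signed test CA, and 192.0.2.7 is the example Node 1 address:
# Role and state of the local member.
curl -ks https://192.0.2.7:8008 | python -m json.tool
# Whole-cluster view (available in recent Patroni versions).
curl -ks https://192.0.2.7:8008/cluster | python -m json.tool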
Installing the Broker
Once the database is installed on each node, log in as root and run the following steps on each of the broker nodes.
- On each broker node, use a text editor to create the file /etc/cloudify/rabbitmq_config.yaml and enter your specific network parameters. Update the fields shown below by replacing the values marked in <> with values for your network.
Note: This must be performed sequentially on each node.
# /etc/cloudify/rabbitmq_config.yaml
manager:
private_ip: '<private-ip>'
public_ip: '<public-ip>'
rabbitmq:
username: '<username>'
password: '<secure-password-like-string>'
cluster_members:
<broker-1-hostname>:
networks:
default: '<broker-1-ip>'
<broker-2-hostname>:
networks:
default: '<broker-2-ip>'
<broker-3-hostname>:
networks:
default: '<broker-3-ip>'
cert_path: '<this-node-local-certificate-path>'
key_path: '<this-node-local-private-key-path>'
ca_path: '<local-ca-certificate-path>'
nodename: '<this-node-hostname>'
  join_cluster: '<broker-1-hostname>' # omit this line on broker node 1
# Should be the same on all nodes
erlang_cookie: '<secure-password-like-string>'
# For monitoring service(status reporter)
prometheus:
credentials:
username: '<username>'
password: '<secure-password-like-string>'
cert_path: '<this-node-local-certificate-path>'
key_path: '<this-node-local-private-key-path>'
ca_path: '<local-ca-certificate-path>'
services_to_install:
- queue_service
- monitoring_service
On each broker node, enter the following to run the installation process.
cfy_manager install -c /etc/cloudify/rabbitmq_config.yaml
Example process for Node 1:
# /etc/cloudify/rabbitmq_config.yaml
manager:
private_ip: '192.0.2.4'
public_ip: '203.0.113.4'
rabbitmq:
  username: 'rabbitmqadminusername'
password: 'strongadminpassword'
cluster_members:
hostname4:
networks:
default: '192.0.2.4'
hostname5:
networks:
default: '192.0.2.5'
hostname6:
networks:
default: '192.0.2.6'
cert_path: '/root/.cloudify-test-ca/hostname4.example.com.crt'
key_path: '/root/.cloudify-test-ca/hostname4.example.com.key'
ca_path: '/root/.cloudify-test-ca/ca.crt'
nodename: 'hostname4'
# Should be the same on all nodes
erlang_cookie: 'cookiename'
# For monitoring service(status reporter)
prometheus:
credentials:
username: 'prometheusadmin'
password: 'strongprometheuspassword'
cert_path: '/root/.cloudify-test-ca/hostname4.example.com.crt'
key_path: '/root/.cloudify-test-ca/hostname4.example.com.key'
ca_path: '/root/.cloudify-test-ca/ca.crt'
services_to_install:
- queue_service
- monitoring_service
cfy_manager install -c /etc/cloudify/rabbitmq_config.yaml
Example process for Node 2:
# /etc/cloudify/rabbitmq_config.yaml
manager:
private_ip: '192.0.2.5'
public_ip: '203.0.113.5'
rabbitmq:
  username: 'rabbitmqadminusername'
password: 'strongadminpassword'
cluster_members:
hostname4:
networks:
default: '192.0.2.4'
hostname5:
networks:
default: '192.0.2.5'
hostname6:
networks:
default: '192.0.2.6'
cert_path: '/root/.cloudify-test-ca/hostname5.example.com.crt'
key_path: '/root/.cloudify-test-ca/hostname5.example.com.key'
ca_path: '/root/.cloudify-test-ca/ca.crt'
nodename: 'hostname5'
join_cluster: 'hostname4'
# Should be the same on all nodes
erlang_cookie: 'cookiename'
# For monitoring service(status reporter)
prometheus:
credentials:
username: 'prometheusadmin'
password: 'strongprometheuspassword'
cert_path: '/root/.cloudify-test-ca/hostname5.example.com.crt'
key_path: '/root/.cloudify-test-ca/hostname5.example.com.key'
ca_path: '/root/.cloudify-test-ca/ca.crt'
services_to_install:
- queue_service
- monitoring_service
cfy_manager install -c /etc/cloudify/rabbitmq_config.yaml
Example process for Node 3:
# /etc/cloudify/rabbitmq_config.yaml
manager:
private_ip: '192.0.2.6'
public_ip: '203.0.113.6'
rabbitmq:
  username: 'rabbitmqadminusername'
password: 'strongadminpassword'
cluster_members:
hostname4:
networks:
default: '192.0.2.4'
hostname5:
networks:
default: '192.0.2.5'
hostname6:
networks:
default: '192.0.2.6'
cert_path: '/root/.cloudify-test-ca/hostname6.example.com.crt'
key_path: '/root/.cloudify-test-ca/hostname6.example.com.key'
ca_path: '/root/.cloudify-test-ca/ca.crt'
nodename: 'hostname6'
join_cluster: 'hostname4'
# Should be the same on all nodes
erlang_cookie: 'cookiename'
# For monitoring service(status reporter)
prometheus:
credentials:
username: 'prometheusadmin'
password: 'strongprometheuspassword'
cert_path: '/root/.cloudify-test-ca/hostname6.example.com.crt'
key_path: '/root/.cloudify-test-ca/hostname6.example.com.key'
ca_path: '/root/.cloudify-test-ca/ca.crt'
services_to_install:
- queue_service
- monitoring_service
cfy_manager install -c /etc/cloudify/rabbitmq_config.yaml
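To spot-check the broker cluster before moving on, RabbitMQ's own status command can be run on any broker node. A minimal sketch; the exact rabbitmqctl location may vary with the bundled RabbitMQ:
# All three broker nodes should be listed as running.
rabbitmqctl cluster_status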
Installing the Manager
Once the broker is installed on each node, log in as root and run the following steps on each of the manager nodes.
- On each manager node, use a text editor to create the file /etc/cloudify/manager_config.yaml and enter your specific network parameters. Update the fields shown below by replacing the values marked in <> with values for your network.
Note: This must be performed sequentially on each node.
# /etc/cloudify/manager_config.yaml
manager:
private_ip: '<private-ip>'
public_ip: '<public-ip>'
security:
ssl_enabled: true
admin_password: '<secure-password-like-string>'
cloudify_license_path: '<cloudify-license-path>'
monitoring:
username: '<username>'
password: '<secure-password-like-string>'
rabbitmq:
username: '<username>'
password: '<secure-password-like-string>'
ca_path: '<ca-path>'
cluster_members:
<broker-1-hostname>:
networks:
default: '<broker-1-ip>'
<broker-2-hostname>:
networks:
        default: '<broker-2-ip>'
<broker-3-hostname>:
networks:
default: '<broker-3-ip>'
monitoring:
username: '<username>'
password: '<secure-password-like-string>'
postgresql_server:
postgres_password: '<secure-password-like-string>'
ca_path: '<local-ca-certificate-path>'
cluster:
nodes:
<database-1-hostname>:
ip: '<database-1-ip>'
<database-2-hostname>:
ip: '<database-2-ip>'
<database-3-hostname>:
ip: '<database-3-ip>'
postgresql_client:
ssl_enabled: true
server_password: '<secure-password-like-string>'
ssl_client_verification: true
monitoring:
username: '<username>'
password: '<secure-password-like-string>'
ssl_inputs:
internal_cert_path: '<this-node-local-certificate-path>'
internal_key_path: '<this-node-local-private-key-path>'
external_cert_path: '<this-node-local-certificate-path>'
external_key_path: '<this-node-local-private-key-path>'
ca_cert_path: '<local-ca-certificate-path>'
external_ca_cert_path: '<local-ca-certificate-path>'
postgresql_client_cert_path: '<this-node-local-certificate-path>'
postgresql_client_key_path: '<this-node-local-private-key-path>'
# For monitoring service(status reporter)
prometheus:
blackbox_exporter:
    ca_cert_path: '<local-ca-certificate-path>'
credentials:
username: '<username>'
password: '<secure-password-like-string>'
cert_path: '<this-node-local-certificate-path>'
key_path: '<this-node-local-private-key-path>'
ca_path: '<local-ca-certificate-path>'
services_to_install:
- manager_service
- monitoring_service
On each manager node, enter the following to run the installation process.
cfy_manager install -c /etc/cloudify/manager_config.yaml
Example manager_config.yaml for Node 1:
# /etc/cloudify/manager_config.yaml
manager:
private_ip: '192.0.2.1'
public_ip: '203.0.113.1'
security:
ssl_enabled: true
    admin_password: 'strongadminpassword'
cloudify_license_path: '/root/cloudify/license.yaml'
monitoring:
username: 'adminusername'
password: 'strongadminpassword'
rabbitmq:
  username: 'rabbitmqadminusername'
password: 'strongadminpassword'
ca_path: '/root/.cloudify-test-ca/ca.crt'
cluster_members:
hostname4:
networks:
default: '192.0.2.4'
hostname5:
networks:
default: '192.0.2.5'
hostname6:
networks:
default: '192.0.2.6'
monitoring:
username: 'adminusername'
password: 'strongadminpassword'
postgresql_server:
postgres_password: 'strongadminpassword'
ca_path: '/root/.cloudify-test-ca/ca.crt'
cluster:
nodes:
hostname7:
ip: '192.0.2.7'
hostname8:
ip: '192.0.2.8'
hostname9:
ip: '192.0.2.9'
postgresql_client:
ssl_enabled: true
server_password: 'strongserverpassword'
ssl_client_verification: true
monitoring:
username: 'adminusername'
password: 'strongadminpassword'
ssl_inputs:
internal_cert_path: '/root/.cloudify-test-ca/hostname1.example.com.crt'
internal_key_path: '/root/.cloudify-test-ca/hostname1.example.com.key'
external_cert_path: '/root/.cloudify-test-ca/hostname1.example.com.crt'
external_key_path: '/root/.cloudify-test-ca/hostname1.example.com.key'
ca_cert_path: '/root/.cloudify-test-ca/ca.crt'
external_ca_cert_path: '/root/.cloudify-test-ca/ca.crt'
postgresql_client_cert_path: '/root/.cloudify-test-ca/hostname1.example.com.crt'
postgresql_client_key_path: '/root/.cloudify-test-ca/hostname1.example.com.key'
# For monitoring service(status reporter)
prometheus:
blackbox_exporter:
ca_cert_path: '/root/.cloudify-test-ca/ca.crt'
credentials:
username: 'adminusername'
password: 'strongadminpassword'
cert_path: '/root/.cloudify-test-ca/hostname1.example.com.crt'
key_path: '/root/.cloudify-test-ca/hostname1.example.com.key'
ca_path: '/root/.cloudify-test-ca/ca.crt'
services_to_install:
- manager_service
- monitoring_service
cfy_manager install -c /etc/cloudify/manager_config.yaml
Example manager_config.yaml for Node 2:
# /etc/cloudify/manager_config.yaml
manager:
private_ip: '192.0.2.2'
public_ip: '203.0.113.2'
security:
ssl_enabled: true
    admin_password: 'strongadminpassword'
cloudify_license_path: '/root/cloudify/license.yaml'
monitoring:
username: 'adminusername'
password: 'strongadminpassword'
rabbitmq:
  username: 'rabbitmqadminusername'
password: 'strongadminpassword'
ca_path: '/root/.cloudify-test-ca/ca.crt'
cluster_members:
hostname4:
networks:
default: '192.0.2.4'
hostname5:
networks:
default: '192.0.2.5'
hostname6:
networks:
default: '192.0.2.6'
monitoring:
username: 'adminusername'
password: 'strongadminpassword'
postgresql_server:
postgres_password: 'strongadminpassword'
ca_path: '/root/.cloudify-test-ca/ca.crt'
cluster:
nodes:
hostname7:
ip: '192.0.2.7'
hostname8:
ip: '192.0.2.8'
hostname9:
ip: '192.0.2.9'
postgresql_client:
ssl_enabled: true
server_password: 'strongserverpassword'
ssl_client_verification: true
monitoring:
username: 'adminusername'
password: 'strongadminpassword'
ssl_inputs:
internal_cert_path: '/root/.cloudify-test-ca/hostname2.example.com.crt'
internal_key_path: '/root/.cloudify-test-ca/hostname2.example.com.key'
external_cert_path: '/root/.cloudify-test-ca/hostname2.example.com.crt'
external_key_path: '/root/.cloudify-test-ca/hostname2.example.com.key'
ca_cert_path: '/root/.cloudify-test-ca/ca.crt'
external_ca_cert_path: '/root/.cloudify-test-ca/ca.crt'
postgresql_client_cert_path: '/root/.cloudify-test-ca/hostname2.example.com.crt'
  postgresql_client_key_path: '/root/.cloudify-test-ca/hostname2.example.com.key'
# For monitoring service(status reporter)
prometheus:
blackbox_exporter:
ca_cert_path: '/root/.cloudify-test-ca/ca.crt'
credentials:
username: 'adminusername'
password: 'strongadminpassword'
cert_path: '/root/.cloudify-test-ca/hostname2.example.com.crt'
key_path: '/root/.cloudify-test-ca/hostname2.example.com.key'
ca_path: '/root/.cloudify-test-ca/ca.crt'
services_to_install:
- manager_service
- monitoring_service
cfy_manager install -c /etc/cloudify/manager_config.yaml
Example manager_config.yaml for Node 3:
# /etc/cloudify/manager_config.yaml
manager:
private_ip: '192.0.2.3'
public_ip: '203.0.113.3'
security:
ssl_enabled: true
    admin_password: 'strongadminpassword'
cloudify_license_path: '/root/cloudify/license.yaml'
monitoring:
username: 'adminusername'
password: 'strongadminpassword'
rabbitmq:
  username: 'rabbitmqadminusername'
password: 'strongadminpassword'
ca_path: '/root/.cloudify-test-ca/ca.crt'
cluster_members:
hostname4:
networks:
default: '192.0.2.4'
hostname5:
networks:
default: '192.0.2.5'
hostname6:
networks:
default: '192.0.2.6'
monitoring:
username: 'adminusername'
password: 'strongadminpassword'
postgresql_server:
postgres_password: 'strongadminpassword'
ca_path: '/root/.cloudify-test-ca/ca.crt'
cluster:
nodes:
hostname7:
ip: '192.0.2.7'
hostname8:
ip: '192.0.2.8'
hostname9:
ip: '192.0.2.9'
postgresql_client:
ssl_enabled: true
server_password: 'strongserverpassword'
ssl_client_verification: true
monitoring:
username: 'adminusername'
password: 'strongadminpassword'
ssl_inputs:
internal_cert_path: '/root/.cloudify-test-ca/hostname3.example.com.crt'
internal_key_path: '/root/.cloudify-test-ca/hostname3.example.com.key'
external_cert_path: '/root/.cloudify-test-ca/hostname3.example.com.crt'
external_key_path: '/root/.cloudify-test-ca/hostname3.example.com.key'
ca_cert_path: '/root/.cloudify-test-ca/ca.crt'
external_ca_cert_path: '/root/.cloudify-test-ca/ca.crt'
postgresql_client_cert_path: '/root/.cloudify-test-ca/hostname3.example.com.crt'
postgresql_client_key_path: '/root/.cloudify-test-ca/hostname3.example.com.key'
# For monitoring service(status reporter)
prometheus:
blackbox_exporter:
ca_cert_path: '/root/.cloudify-test-ca/ca.crt'
credentials:
username: 'adminusername'
password: 'strongadminpassword'
cert_path: '/root/.cloudify-test-ca/hostname3.example.com.crt'
key_path: '/root/.cloudify-test-ca/hostname3.example.com.key'
ca_path: '/root/.cloudify-test-ca/ca.crt'
services_to_install:
- manager_service
- monitoring_service
cfy_manager install -c /etc/cloudify/manager_config.yaml
Post Installation
Once the database, broker, and manager are installed on each node, log in to one of the manager nodes and run the following commands to verify that all nodes have joined the cluster.
cfy cluster db-nodes list
cfy cluster brokers list
cfy cluster managers list
cfy cluster status
Manual Installation Requirements for Air-Gapped Operation
Use these steps if air-gapped operation of the site map is required after manual installation.
Changing the map before installation
- Install the database and RabbitMQ broker as described above.
- On each manager host, add the stage > maps section at the top of the “/etc/cloudify/manager_config.yaml” file. For example:
stage:
  # If set to true, Cloudify UI will not be installed
  skip_installation: false
  # Additional environment variables to add to stage's service file.
  extra_env: {}
  # LeafletJS map configuration, see http://leaflet-extras.github.io/leaflet-providers/preview/
  # for allowed TILES URL templates and Attribution values
  maps:
    # Template map tiles provider URL, in format:
    # 'https://tiles.stadiamaps.com/tiles/osm_bright/${z}/${x}/${y}${r}.png'
    tilesUrlTemplate: 'http://127.0.0.1:8080/styles/basic-preview/${z}/${x}/${y}.png'
    # Attribution data to be displayed as small text box on a map, HTML allowed, it is required
    # by some map providers, check https://leaflet-extras.github.io/leaflet-providers/preview/
    attribution: 'My custom map'
    # API key to be passed to map tiles provider
    accessToken: null
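Before installing the manager, it may be worth confirming that the local tile server referenced by tilesUrlTemplate is reachable; 0/0/0 here is just an arbitrary sample tile:
# Expect an HTTP 200 response from the local tile server.
curl -I http://127.0.0.1:8080/styles/basic-preview/0/0/0.png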
- After changing the file on all hosts, run the following command to install the manager:
cfy_manager install -c /etc/cloudify/manager_config.yaml --verbose
- Open the Conductor UI and verify that the correct map is displayed.
Changing the map after installation
Option 1: Persistent change
- Install the cluster as described above
- Edit the “/etc/cloudify/manager_config.yaml” file as described in Changing the map before installation. On each host, run the following command to apply the changes:
cfy_manager install -c /etc/cloudify/manager_config.yaml --verbose
Known issues
- If you are changing the map after installation, it is not possible to revert to the default map configuration by removing the stage section. If a new change is needed, the map has to be configured again as described above.
- If the maps are not showing the same information on all hosts, make sure the stage > maps section is identical in all “/etc/cloudify/manager_config.yaml” files, and run the following command on all hosts again:
cfy_manager install -c /etc/cloudify/manager_config.yaml --verbose
Option 2: Ephemeral change, lost after cluster upgrade
- Follow the same steps as for the AIO installation.
- Notes:
- Even when changing the map on a single host, it is possible to create sites and deploy a blueprint using that site. The deployment is shown correctly on both updated and out-of-date maps.
- If one of the map servers (updated or out-of-date) is down, it affects only the related host, not all hosts.
- If you change just one of the “/opt/cloudify-stage/dist/userData/userConfig.json” files, it is synchronized to all hosts. To complete the changes in the Conductor UI on all hosts, run “supervisorctl restart cloudify-stage” on each host, as shown in the sketch after this list.
- The sites and deployments created before the site update are unchanged.
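A minimal sketch of the restart step, assuming root SSH access to the three manager hosts named as in the earlier examples:
# Restart the UI service on each manager host to pick up the synchronized changes.
for host in hostname1.example.com hostname2.example.com hostname3.example.com; do
  ssh root@${host} 'supervisorctl restart cloudify-stage'
done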
Other known issues
- ERROR: Cannot fetch map tiles. Error: certificate is not yet valid
If the map is not shown, check the log with “tail -f /var/log/cloudify/stage/server*”. If the error above appears, it is possible that the date and time of the Conductor host are not correct. A possible solution is to either fix the date and time or restart the Conductor host.