Conductor Documentation

Installing Containerized Wind River Conductor (Helm)

Overview

Use the following steps to deploy a Conductor Manager to Kubernetes with a Helm chart.

Configure Conductor Helm values

  1. Get the wind-river-conductor-23.9.0.tgz chart file from the licensed Wind River registry.
  2. Export the Helm chart values to find the customizable fields: helm show values wind-river-conductor-23.9.0.tgz > values.yaml (see the sketch after this list).
  3. Create an override-values.yaml as described in the next sections and customize the fields available in the values.yaml exported above.
    3.1. For example, to run on a WRCP host, you need to set cephfs as the value of all occurrences of storageClass and storageClassName in that file.
  4. To access the Conductor UI and external CLI, configure the Ingress service.
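
As a quick sketch, assuming the chart file has already been downloaded to the working directory, the following commands export the default values and list the storage fields mentioned in step 3.1:

# Export the default chart values, then locate the storage fields
helm show values wind-river-conductor-23.9.0.tgz > values.yaml
grep -n storageClass values.yaml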

Configure Ingress service

Ingress without hostname (Nginx example)

To access the Conductor UI and CLI without a hostname, using the IP address of the host where Conductor is installed, use the following:

# override-values.yaml
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
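
Once Conductor is installed, you can confirm that the Ingress resource exists and note its address with a generic check (the resource name is defined by the chart):

kubectl get ingress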

Ingress with hostname (Nginx example)

To expose Conductor through a domain hostname, make sure the hostname is registered on the network's DNS server or add it to the system's hosts file.

# override-values.yaml
ingress:
  enabled: true
  host: mydomain-conductor.com
  annotations:
    kubernetes.io/ingress.class: nginx
# /etc/hosts Linux example
10.10.100.50  mydomain-conductor.com
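
With the hosts entry in place, a quick way to verify the mapping after installation (an illustrative check, assuming the address and hostname from the example above) is:

# Confirm the hostname reaches the Ingress and the UI responds
curl -i http://mydomain-conductor.com/console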

Expose Ingress for WRCP Systems (Nginx)

After following the previous section, you must create a Global Network Policy to allow traffic on HTTP port 80. For HTTPS access, you must set up production-ready certificates and customize the Conductor Helm values.

Create the following global-networkpolicy.yaml on the WRCP controller host and run kubectl apply -f global-networkpolicy.yaml. This allows ingress traffic through the OAM IP address.

# This rule opens up default HTTP port 80
# To apply use:
# kubectl apply -f global-networkpolicy.yaml
apiVersion: crd.projectcalico.org/v1
kind: GlobalNetworkPolicy
metadata:
  name: gnp-oam-overrides
spec:
  ingress:
  - action: Allow
    destination:
      ports:
      - 80
    protocol: TCP
  order: 500
  selector: has(iftype) && iftype == 'oam'
  types:
  - Ingress
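
After applying the file, you can confirm the policy was created. GlobalNetworkPolicy is a cluster-scoped Calico resource, so no namespace flag is needed (sketch; the exact resource listing depends on the Calico version):

kubectl get globalnetworkpolicies gnp-oam-overrides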

Example of override-values.yaml

# override-values.yaml

composer_backend:
  image: <wind-river-registry-url>/conductor/composer-backend:<tag>

composer_frontend:
  image: <wind-river-registry-url>/conductor/composer-frontend:<tag>

execution_scheduler:
  image: <wind-river-registry-url>/conductor/execution-scheduler:<tag>

mgmtworker:
  image: <wind-river-registry-url>/conductor/mgmtworker:<tag>

rabbitmq:
  image: <wind-river-registry-url>/conductor/rabbitmq:<tag>
  pvc:
    storageClassName: "cephfs"

rest_service:
  image: <wind-river-registry-url>/conductor/restservice:<tag>

api_service:
  image: <wind-river-registry-url>/conductor/apiservice:<tag>

stage_backend:
  image: <wind-river-registry-url>/conductor/stage-backend:<tag>

stage_frontend:
  image: <wind-river-registry-url>/conductor/stage-frontend:<tag>

seaweedfs:
  master:
    data:
      storageClass: "cephfs"
  filer:
    data:
      storageClass: "cephfs"
  volume:
    data:
      storageClass: "cephfs"

postgresql:
  pvc:
    storageClassName: "cephfs"

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
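
Before installing, the overrides can optionally be validated against the chart with a dry run, which renders the manifests without creating any resources:

helm install wind-river-conductor -f ./override-values.yaml ./wind-river-conductor-23.9.0.tgz --dry-run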

Install Conductor Helm application

After configuring the override-values.yaml file, run the following command to install:

helm install wind-river-conductor -f ./override-values.yaml ./wind-river-conductor-23.9.0.tgz

The installation is complete when all pods are running, as shown in the example below.
If the Ingress service is configured correctly, the Conductor UI should be available at http://<helm-host-ip>/console

kubectl get pods
NAME                                  READY   STATUS    RESTARTS        AGE
api-service-6594cc9f9c-lmnvs          1/1     Running   0               39m
composer-backend-749dc5c96-n27wt      1/1     Running   0               39m
composer-frontend-8654f4df4-6p8hq     1/1     Running   0               39m
execution-scheduler-987f86d84-f5lww   1/1     Running   0               39m
mgmtworker-68b99d7b57-lcxjc           1/1     Running   2 (4m15s ago)   39m
nginx-9df7449b-4tv4x                  1/1     Running   7 (20m ago)     39m
postgresql-7b8dccc6b8-ck5fx           1/1     Running   0               39s
prometheus-85f7986b75-72dqz           2/2     Running   0               39m
rabbitmq-dcfbf65fb-gtl6x              1/1     Running   0               39m
rest-service-77c76d4d49-4d9tn         1/1     Running   0               39m
seaweedfs-filer-0                     1/1     Running   0               39m
seaweedfs-master-0                    1/1     Running   0               39m
seaweedfs-s3-558f4b67f5-svjvp         1/1     Running   0               39m
seaweedfs-volume-0                    1/1     Running   0               39m
stage-backend-7f8d78c865-xnjxc        1/1     Running   0               39m
stage-frontend-865fb6b5b9-wglkd       1/1     Running   0               39m
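
The release state can also be confirmed with Helm itself:

helm status wind-river-conductor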

Accessing Conductor external CLI

Install the package using the Python package manager (pip), optionally in a virtualenv.
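
If you opt for a virtualenv, create and activate it first (sketch for Linux; assumes python3 is available on the host):

python3 -m venv venv
source venv/bin/activate

Then install the CLI package: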

# virtualenv Python 3.6.15
(venv)> pip install --upgrade pip
(venv)> pip install cloudify
(venv)> cfy --version
Cloudify CLI 7.0.2

(venv)> cfy profiles use 10.10.100.50 -u admin -p admin
Attempting to connect to 10.10.100.50 through port 80, using http (SSL mode: False)...
Using manager 10.10.100.50 with port 80
Initializing profile 10.10.100.50...
Initialization completed successfully
It is highly recommended to have more than one manager in a Cloudify cluster
Adding cluster node localhost to local profile manager cluster
Adding cluster node rabbitmq to local profile broker cluster
Profile is up to date with 2 nodes

(venv)> cfy profiles list
Listing all profiles...

Profiles:
+----------------+---------------+------------------+----------------+----------+---------+----------+--------------+-----------+---------------+------------------+
|      name      |   manager_ip  | manager_username | manager_tenant | ssh_user | ssh_key | ssh_port | kerberos_env | rest_port | rest_protocol | rest_certificate |
+----------------+---------------+------------------+----------------+----------+---------+----------+--------------+-----------+---------------+------------------+
| *10.10.100.50  | 10.10.100.50  |      admin       |                |          |         |    22    |    False     |     80    |      http     |                  |
+----------------+---------------+------------------+----------------+----------+---------+----------+--------------+-----------+---------------+------------------+

(venv)> cfy status
Retrieving manager services status... [ip=10.10.100.50]

Services:
+--------------------------------+--------+
|            service             | status |
+--------------------------------+--------+
| Webserver                      | Active |
| Management Worker              | Active |
| Manager Rest-Service           | Active |
| Cloudify API                   | Active |
| Cloudify Execution Scheduler   | Active |
| Cloudify Console               | Active |
| PostgreSQL                     | Active |
| RabbitMQ                       | Active |
| Cloudify Composer              | Active |
| Monitoring Service             | Active |
+--------------------------------+--------+

Uninstalling Helm

To uninstall the Conductor Helm release, use the following command:

helm uninstall wind-river-conductor

The uninstall is complete when all Conductor pods have been deleted.

Note:

After uninstalling the Helm release, the PersistentVolumeClaims (PVCs) and PersistentVolumes (PVs) related to SeaweedFS are not deleted.

kubectl get pvc
NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-default-seaweedfs-master-0   Bound    pvc-58567e68-f026-4106-bd16-5bde940093a0   1Gi        RWO            standard       42m
data-filer-seaweedfs-filer-0      Bound    pvc-8b14b136-af4f-4746-ae54-76c104a7e62f   1Gi        RWO            standard       42m
data-seaweedfs-volume-0           Bound    pvc-3d9fb8dd-9a56-4e0c-bca1-df89de082ef8   10Gi       RWO            standard       42m

The SeaweedFS PVs/PVCs are created from the StatefulSet volumeClaimTemplates, which default to a reclaim policy of “Retain”. The policy would need to be “Delete” for the PVs and PVCs to be cleaned up automatically, but SeaweedFS’s Helm chart does not currently expose a parameter to change it. By contrast, Conductor’s PVCs for PostgreSQL and RabbitMQ are created and managed by Helm, so Helm removes them on uninstall, which explains the difference in behavior.
The SeaweedFS PVCs/PVs can be reused if the Helm deployment is performed again, or they can be cleaned up manually as a post-removal step if Conductor is to be removed permanently or a full cleanup is necessary.
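
For a manual cleanup, the leftover resources can be deleted explicitly, using the PVC names from the listing above. With a “Retain” reclaim policy the bound PVs remain after PVC deletion, so they may need to be removed separately (<pv-name> is a placeholder for the VOLUME value shown by kubectl get pvc):

kubectl delete pvc data-default-seaweedfs-master-0 data-filer-seaweedfs-filer-0 data-seaweedfs-volume-0
kubectl delete pv <pv-name>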