WRCP Plugin
Introduction
WRCP Plugin enables users to orchestrate WRCP workflows from the manager.
NOTE:
- The WRCP Plugin is designed to work with the WRCP blueprint, which can be downloaded from the WRC marketplace.
- Do not load the WRCP plugin using cfy plugins bundle-upload. The correct WRCP plugin must be downloaded from delivers.windriver.com under WRCP2409-Modules; refer to Download from WINDSHARE for further instructions. The download includes:
- windriver-WRCP-plugin.zip
- conductor-redfish-plugin.zip
- conductor-wrcp-tester-plugin.zip
- conductor-goldenconfig-plugin.zip
WRCP Management
Each item in the following list is materialized as a workflow in the plugin.
Prerequisites
All of the scenarios below require that you provide secrets for the following values:
- user_secret: The default secret name is wrcp_username. This is the username for the WRCP system APIs.
- password_secret: The default secret name is wrcp_api_key. This is the password for the WRCP system APIs.
- cacert: The default secret name is wrcp_cacert. This is the content of the certificate authority file.
- ssh_username: The default secret name is wrcp_ssh_username. The SSH username configured for connection.
- ssh_password: The default secret name is wrcp_ssh_password. The SSH password configured for connection.
You are free to change the names of the secrets; however, you must provide the secret names in the deployment inputs when enrolling a new system.
- Enroll WRCP System Controller: You will need the auth_url of the controller, for example, https://123.123.123.123:5000/v3.
- Enroll WRCP Subcloud: You will need the auth_url of the controller, for example, https://123.123.123.123:5000/v3. You will also need the subcloud’s region name to provide it as the region_name input.
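For illustration, the default secrets can be created from the CLI before enrolling a system; the values below are placeholders and must be adjusted to your environment:
cfy secrets create wrcp_username -s admin
cfy secrets create wrcp_api_key -s <wrcp-password>
cfy secrets create wrcp_cacert -f ./wrcp-ca.pem
cfy secrets create wrcp_ssh_username -s sysadmin
cfy secrets create wrcp_ssh_password -s <ssh-password>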
Node Types
cloudify.nodes.WRCP.WRCP This node represents a WRCP System. A system can be a System Controller, Standalone system, or a Subcloud.
Properties
- use_external_resource: Always true. This parameter indicates that the resource already exists.
- client_config: A dictionary containing the following keys:
- auth_url: The WRCP system API endpoint, including the protocol, IP, port, and path, e.g. https://123.4.5.6:5000/v3. For IPv6 systems, the auth_url should be provided with the IP surrounded by square brackets, e.g. https://[1234:abcd:efgh:1a2b:0000:test:fake:4321]:5000/v3.
- username: The WRCP username.
- api_key: The WRCP password.
- project_name: The client project name, default: admin.
- user_domain_name: The client user domain name, default: Default.
- project_domain_name: The client project domain name, default: Default.
- region_name: The region name, either RegionOne for a system controller, or the subcloud’s region name.
- insecure: Whether to ignore certificate validation when using https as the protocol.
- cacert: The content of a Certificate Authority when using https as the protocol.
- resource_config: Parameters that describe the system in the WRCP API. Currently, there is no need to provide these parameters.
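For illustration only, a minimal node template using these properties might look like the sketch below; the node name wrcp_system, the IP address, and the secret names are placeholders, and the bundled WRCP blueprint may structure its inputs differently:
  wrcp_system:
    type: cloudify.nodes.WRCP.WRCP
    properties:
      use_external_resource: true
      client_config:
        auth_url: https://123.123.123.123:5000/v3
        region_name: RegionOne
        project_name: admin
        user_domain_name: Default
        project_domain_name: Default
        username: { get_secret: wrcp_username }
        api_key: { get_secret: wrcp_api_key }
        insecure: false
        cacert: { get_secret: wrcp_cacert }
      resource_config: {}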
Runtime Properties:
- subclouds: A dictionary of subclouds if the node is a system controller that has subclouds.
- A key containing a number representing the subcloud number:
- external_id: The number representing the subcloud number.
- name: The subcloud name.
- description: The subcloud description.
- location: The subcloud location, for example an address.
- group_id: The subcloud’s group ID.
- group_name: The subcloud’s group name.
- oam_floating_ip: The IP of the subcloud.
- management_state: The subcloud’s management state.
- resource_config:
- external_id: The system’s ID.
- name: The system’s name.
- description: The system’s description.
- location: The system’s location, for example an address.
- system_type: The system type, for example, “all-in-one”.
- system_mode: The system mode, for example, “simplex”.
- region_name: The region name, for example “RegionOne”, or if a subcloud, the subcloud’s region name.
- latitude: The latitude of the system.
- longitude: The longitude of the system.
- distributed_cloud_role: The distributed_cloud_role, for example “subcloud”.
- hosts:
- A key containing a UUID representing the host ID:
- hostname: The host’s hostname.
- personality: The host’s personality, for example “controller”.
- capabilities: The host’s capabilities.
- subfunctions: The host’s subfunctions.
- kube_clusters: The API response for the system’s kube_cluster object.
- A key containing a UUID representing the host ID.
- WRA:
- status: Status of the WRA application in WRCP, e.g. uploading, uploaded, applying, applied, updating, apply-failed or upload-failed.
- app_version: The last version of WRA installed on WRCP, e.g. 22.12-1.
- last_update: The last action made by WRC, including the date, e.g. “uninstall on 2023-06-20 12:34:56”.
- subcloud_names: A list containing only the subclouds’ names.
- k8s_cluster_name: If the system has a Kubernetes cluster, the kube cluster’s name.
- k8s_admin_user:
- k8s_ip: The IP of the Kubernetes cluster, if the system has one.
- k8s_service_account_token: The service account token, if the system has a Kubernetes Cluster.
- k8s_cacert: The Kubernetes Cluster certificate authority if the system has a Kubernetes Cluster.
- k8s_admin_client_cert: The Kubernetes Cluster client certificate if the system has a Kubernetes Cluster.
- k8s_admin_client_key: The Kubernetes Cluster client key if the system has a Kubernetes Cluster.
- openstack_ip: The system’s Openstack IP if it hosts an Openstack system.
- openstack_key: The system’s Openstack key if it hosts an Openstack system.
- __cert_audit: The list of certificates retrieved from the WRCP host. Each entry contains the ID, name, expiry date, and other details.
- __cert_summary: Groups the certificates by “regions”, “expiry_dates” and “types”; used to build the Certificates Chart.
- patches_to_upload: A list of patch_ids to be uploaded.
- load_id:
Labels
The following labels are possible for a WRCP system deployment:
- csys-obj-type
- csys-env-type
- csys-obj-parent
- csys-location-name
- csys-location-lat
- csys-location-long
- csys-wrcp-services
Sites
During installation, the plugin requests a system’s location (latitude and longitude) from the WRCP API and creates a Site in Conductor. You can see the sites’ locations on the map on the Dashboard screen.
check_update_status
Checks the status of strategy steps and updates labels when they fail or complete.
Parameters:
type_names:
default: [ ]
description: |
Type names for which the workflow will execute the update
check_update_status operation.
By default the operation will execute for nodes of type
cloudify.nodes.WRCP.WRCP
node_ids:
default: [ ]
description: |
Node template IDs for which the workflow will execute the
check_update_status operation
node_instance_ids:
default: [ ]
description: |
Node instance IDs for which the workflow will execute the
check_update_status operation
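For example, a status check can be started from the CLI against an enrolled system (the deployment ID below is a placeholder):
cfy executions start check_update_status -d wrcp-system-controller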
refresh_status
This workflow starts check_update_status on each subcloud.
Parameters:
type_names:
default: []
description: |
Type names for which the workflow will execute the refresh_status operation.
By default the operation will execute for nodes of type
cloudify.nodes.WRCP.WRCP
node_ids:
default: []
description: |
Node template IDs for which the workflow will execute the refresh_status
operation
node_instance_ids:
default: []
description: |
Node instance IDs for which the workflow will execute the refresh_status
operation
prestage
ATTENTION: Make sure that every path is accessible through USM, SSH, or a simple request.
The prestage workflow aims to prepare the target system for an upgrade/update. Part of the prestage is to load the software, which can be the ISO or a patch, and to perform any other required steps prior to the actual upgrade/update of the system.
Before running the prestage workflow on subclouds, make sure that the upgrade was executed on the system controller (windriver.nodes.controller), because no subcloud can have a version number higher than the system controller’s. There is a validation step in the prestage workflow to ensure the upload of dependency images (those needed for subclouds) when the target is a system controller.
This workflow wraps all operations needed to prepare for an upgrade or update that previously happened in the upgrade workflow.
The workflow is also designed to handle the prestage of minor version updates (apply patches):
- If software_version is a minor version (e.g. 24.09.1), the prestage will expect the patch_dir parameter so that every patch file in that directory is uploaded to the WRCP system.
- If software_version is a major version (e.g. 24.09.0, or simply 24.09), the prestage will expect the license_file_path, iso_path and sig_path parameters so that the major version ISO is uploaded to the WRCP system.
- If the system is a system controller, the user can also enable the subcloud_upgrade flag and pass the prestage_images so that the required prestaging images are loaded into the system controller and served to the subclouds when they start their own upgrade.
Parameters:
software_version:
type: string
default: ""
description: |
The software version to which the system will be updated or upgraded
constraints:
- pattern: '^\d+.\d+.\d+$'
type_names:
default: []
description: |
Type names for which the workflow will execute the prestaging operation.
By default the operation will execute for nodes of type
cloudify.nodes.WRCP.WRCP
node_ids:
default: []
description: |
Node template IDs for which the workflow will execute the prestaging
operation
node_instance_ids:
default: []
description: |
Node instance IDs for which the workflow will execute the prestaging
operation
license_file_path:
type: string
default: ''
description: |
File path where the license file is located.
This license file will be applied as a part of upgrade process.
E.g: /opt/mgmtworker/persistent/25.03/license.lic
cfy_user must be able to access this file.
iso_path:
type: string
default: ''
description: |
File path where the ISO with new SW version is located in the manager host.
E.g.: /opt/mgmtworker/25.03/bootimage.iso
Or the URL to the ISO image file with the new SW version. E.g.:
https://build.server:/loadbuild/25.03/bootimage.iso
sig_path:
type: string
default: ''
description: |
File path where the ISO signature is located in the manager host.
E.g.: /opt/mgmtworker/25.03/bootimage.sig
Or the URL to the ISO image signature file. E.g.:
https://build.server:/loadbuild/25.03/bootimage.sig
prestage_images:
type: string
default: ''
description: |
File path where the list of images to be prestaged is located.
E.g.: /opt/mgmtworker/25.03/image_list.lst
cfy_user must be able to access this file.
subcloud_upgrade:
type: boolean
default: True
description: |
Flag to indicate if subclouds should be upgraded.
patch_dir:
default: ''
description: |
The path to a directory on the manager where the patches are located.
The patches will be uploaded and applied.
max_retries:
description: |
The maximum number of retries allowed for operations.
type: integer
default: 60
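As an illustrative sketch (deployment IDs and file paths are placeholders), a major-version prestage of a system controller and a minor-version prestage of a subcloud could be started as follows:
cfy executions start prestage -d wrcp-system-controller -p '{"software_version": "25.03.0", "license_file_path": "/opt/mgmtworker/persistent/25.03/license.lic", "iso_path": "/opt/mgmtworker/25.03/bootimage.iso", "sig_path": "/opt/mgmtworker/25.03/bootimage.sig", "prestage_images": "/opt/mgmtworker/25.03/image_list.lst", "subcloud_upgrade": true}'
cfy executions start prestage -d wrcp-subcloud-1 -p '{"software_version": "25.03.1", "patch_dir": "/opt/mgmtworker/25.03/patches"}'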
Upgrade
IMPORTANT: Before starting the workflow, make sure that the files below do not exist on controller-0. They will be automatically copied from the load installed during the upgrade.
- ~/wind-river-cloud-platform-deployment-manager-overrides.yaml
- ~/wind-river-cloud-platform-deployment-manager.tgz
- ~/wind-river-cloud-platform-deployment-manager.yaml
The subclouds must have been provisioned with the Redfish BMC password. If not, you need to update the subclouds with install-values.yaml before starting the upgrade.
The prestage workflow needs to be executed prior to the upgrade.
Workflow to upgrade WRCP system components. It contains the following steps:
upgrade platform:
- create upgrade strategy
- apply the created upgrade strategy
- wait for the applied strategy to complete
upgrade storage:
- check: check whether Trident is being used and also checks whether it is up to date
- health check (pre upgrade): checks whether Trident is in a working state just to make sure it can be safely upgraded
- upgrade: reinstalls Trident to perform an upgrade¹
- health check (post upgrade): verifies whether Trident is in a working state just to make sure the upgrade worked as intended
upgrade kubernetes:
- version check: checks whether the provided version is valid or available, and skips the upgrade in the case Kubernetes is already up to date
- create upgrade kubernetes strategy
- apply the created upgrade strategy
- wait for the applied strategy to complete
[1] When you uninstall Trident, the Persistent Volume Claim (PVC) and Persistent Volume (PV) used by the Astra Trident deployment are not deleted. PVs that have already been provisioned will remain available while Astra Trident is offline, and Astra Trident will provision volumes for any PVCs that are created in the interim once it is back online.
Parameters:
type_names:
default: []
description: |
Type names for which the workflow will execute the upgrade operation.
By default the operation will execute for nodes of a type that
implements the `cloudify.interfaces.upgrade` interface.
node_ids:
default: []
description: |
Node template IDs for which the workflow will execute the upgrade
operation
node_instance_ids:
default: []
description: |
Node instance IDs for which the workflow will execute the upgrade
operation
sw_version:
default: ''
description: |
SW version to upgrade to. Example: 21.12
kubernetes_version:
default: ''
description: |
Kubernetes version to upgrade to. Example: 1.21.8
force_flag:
default: True
description: |
Force upgrade to run. Required if the workflow needs to run while
there are active alarms.
controller_apply_type:
default: 'serial'
description: |
The apply type for controller hosts: serial or ignore.
storage_apply_type:
description: |
The apply type for storage hosts: serial, parallel or ignore.
default: 'serial'
swift_apply_type:
description: |
The apply type for swift hosts: serial, parallel or ignore.
default: 'serial'
worker_apply_type:
description: |
The apply type for worker hosts: serial, parallel or ignore.
default: 'serial'
max_parallel_worker_hosts:
description: |
The maximum number of worker hosts to patch in parallel; only applicable if worker-apply-type = parallel. Default value is 2.
default: 2
default_instance_action:
description: |
The default instance action: stop-start or migrate.
default: 'migrate'
alarm_restrictions:
description: |
The strictness of alarm checks: strict or relaxed.
default: 'strict'
max_retries:
description: |
The maximum number of retries allowed for operations.
type: integer
default: 60
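For example (the deployment ID is a placeholder; the versions reuse the examples above), a full upgrade could be started with:
cfy executions start upgrade -d wrcp-system-controller -p '{"sw_version": "21.12", "kubernetes_version": "1.21.8", "alarm_restrictions": "relaxed"}'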
Running upgrade workflow on selected components
The upgrade workflow runs on any node whose type implements the cloudify.interfaces.upgrade interface.
Node Types
windriver.nodes.wrcp.Infrastructure This node represents the WRCP platform. For the upgrade workflow, this node is equivalent to previous versions of cloudify.nodes.WRCP.WRCP. The cloudify.nodes.WRCP.WRCP node will no longer be able to run the “upgrade” workflow.
windriver.nodes.wrcp.Storage This node represents K8s storage.
windriver.nodes.wrcp.Kubernetes This node represents K8s cluster.
Note: These new node types were introduced to allow the upgrade of WRCP components in sequence respecting inter-component dependencies.
By default, the workflow performs the upgrade following the order in this list. The user can choose which components to upgrade by providing the target node types in the type_names execution parameter. For example, to run the platform upgrade only, the input must be:
type_names: ["windriver.nodes.wrcp.Infrastructure"]
To run storage and kubernetes upgrades only:
type_names: ["windriver.nodes.wrcp.Storage", "windriver.nodes.wrcp.Kubernetes"]
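For example, a platform-only upgrade could be started from the CLI as shown below (the deployment ID is a placeholder):
cfy executions start upgrade -d wrcp-system-controller -p '{"type_names": ["windriver.nodes.wrcp.Infrastructure"], "sw_version": "21.12"}'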
run_on_subclouds
The run_on_subclouds workflow creates a temporary deployment group based on the provided parameters, starts a batch execution on the created group (running the workflow named workflow_id on all matched deployments), and deletes the group after the workflow finishes (all executions are completed).
Currently, the workflow looks for matches in all deployments (including environments and unrelated deployments), so the user must use labels and filters carefully.
Parameters:
workflow_id:
description: |
The name of the workflow to execute on all matched deployments (deployments group)
default: ''
workflow_inputs:
description: The workflow parameters required during workflow execution
default: {}
labels:
description: The labels on the basis of which deployments are selected to create
a deployment group and start a batch execution of the workflow on each matched subcloud.
The labels are optional and can be provided in parallel with filter_ids.
default: {}
filter_ids:
description: |
List of filter IDs on the basis of which deployments are selected to create
a deployment group and start a batch execution of the workflow on each matched subcloud.
The filter_ids can be provided in parallel with labels.
Each provided ID must be an actual filter ID.
default: []
run_on_all_subclouds:
type: boolean
description: |
Run the workflow on all sub-environments.
When this parameter is True, the deployments will be selected based on the rule:
{"csys-obj-parent": "<parent_deployment_id>"}
In this case, labels and filter_ids will be ignored.
default: False
type_names:
default: [ ]
description: |
Type names for which the workflow will execute the workflow,
especially start_subclouds_executions and
wait_for_execution_end_and_delete_deployment_group operation.
By default the operation will execute for nodes of type
cloudify.nodes.WRCP.WRCP
node_ids:
default: [ ]
description: |
Node template IDs for which the workflow will execute the workflow,
especially start_subclouds_executions and
wait_for_execution_end_and_delete_deployment_group operation.
node_instance_ids:
default: [ ]
description: |
Node instance IDs for which the workflow will execute the workflow,
especially start_subclouds_executions and
wait_for_execution_end_and_delete_deployment_group operation.
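As a sketch (the deployment ID is a placeholder), refresh_status could be run on every subcloud of a system controller with:
cfy executions start run_on_subclouds -d wrcp-system-controller -p '{"workflow_id": "refresh_status", "run_on_all_subclouds": true}'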
Kubernetes Cluster Upgrade Automation (upgrade_kubernetes)
Overview
The Kubernetes upgrade is integrated as a new workflow in the WRCP plugin. This workflow can be broken into a series of tasks that need to be performed in order:
- Kubernetes Health Check - pre upgrade: Checks whether all the requirements are met and ensures that no management-affecting alarms are present prior to upgrading. The user may optionally choose to relax these checks using the respective input.
- Kubernetes Version Check: Checks whether the provided version is valid or available, and skips the upgrade in the case Kubernetes is already up to date.
- Kubernetes Workflow Strategy Creation: Creates the respective strategy based on all the inputs given by the user.
- Kubernetes Workflow Strategy Application: Applies the respective strategy with “sw-manager kube-upgrade-strategy apply”.
- Kubernetes Health Check - post upgrade: The second health check is performed to assure the upgrade worked as intended and no issues are present afterwards.
Parameters:
type_names:
default: ["windriver.nodes.wrcp.Kubernetes"]
description: |
Do not change!
This workflow runs on nodes of type 'windriver.nodes.wrcp.Kubernetes';
changing this parameter can cause unexpected behavior.
kubernetes_version:
description: 'Specify a target version for Kubernetes Orchestration. Example: v1.21.8'
type: string
default: ''
constraints:
- pattern: '^\d+.\d+.\d+$'
worker_apply_type:
description: >
This option specifies the host concurrency of the Kubernetes version upgrade strategy.
--SERIAL: worker hosts will be upgraded one at a time;
--PARALLEL: worker hosts will be upgraded in parallel;
--IGNORE: worker hosts will not be upgraded; strategy creation will fail
default: serial
constraints:
- valid_values:
- serial
- parallel
- ignore
max_parallel_worker_hosts:
description: 'This option applies to the parallel worker apply type selection to specify the maximum worker hosts to upgrade in parallel'
default: 2
constraints:
- in_range:
- 2
- 10
default_instance_action:
description: >
This option only has significance when the wr-openstack application is loaded and there are instances running on worker hosts.
It specifies how the strategy deals with worker host instances over the strategy execution.
--STOP-START: Instances will be stopped before the host lock operation following the upgrade and then started again following the host unlock;
--MIGRATE: Instances will be migrated off the worker host before the host lock operation following the upgrade.
default: stop-start
constraints:
- valid_values:
- stop-start
- migrate
alarm_restrictions:
description: >
This option sets how the Kubernetes version upgrade orchestration behaves when alarms are present.
--STRICT: some basic alarms are ignored;
--RELAXED: non-management-affecting alarms are ignored.
default: strict
constraints:
- valid_values:
- relaxed
- strict
Note: most of these parameters are directly inserted into WRCP commands, so additional details about them can be found in the platform documentation.
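For example (the deployment ID is a placeholder), a Kubernetes upgrade could be started with:
cfy executions start upgrade_kubernetes -d wrcp-system-controller -p '{"kubernetes_version": "1.21.8", "worker_apply_type": "serial", "alarm_restrictions": "relaxed"}'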
Analytics Deployment Automation (install_wra, upgrade_wra, uninstall_wra)
Install WRA:
The install_wra workflow performs the following steps:
- upload_wra: Retrieves the WRA tarball and uploads it to the system.
- update_oidc: Sets up OIDC DEX.
- apply_security_overrides: (Subcloud only) Copies and applies the security overrides from the system controller.
- set_labels_to_hosts: Assigns host labels to controllers and workers.
- allocate_resources_and_apply: Performs resource allocations based on the provided helm overrides.
Parameters:
wra_tgz_url:
description: URL containing the path to a WRA .tar.gz file.
type: string
default: ''
oidc:
description: Whether the OIDC DEX login setup is updated.
type: boolean
default: False
storage_resource_allocations:
description: >
The list of helm overrides for storage resources. For example:
[
{
"file_url": "elasticsearch-data.yaml",
"helm_chart_name": "elasticsearch-data"
},
{
"file_url": "arbitrary-overrides.yaml",
"helm_chart_name": "placeholder"
}
]
type: list
default: []
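As an illustrative sketch (the deployment ID and tarball URL are placeholders; the overrides list reuses the example above), an installation could be started with:
cfy executions start install_wra -d wrcp-subcloud-1 -p '{"wra_tgz_url": "https://example.com/wind-river-analytics.tgz", "oidc": true, "storage_resource_allocations": [{"file_url": "elasticsearch-data.yaml", "helm_chart_name": "elasticsearch-data"}]}'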
Upgrade WRA:
The upgrade_wra workflow supports both the “update” and “upgrade” capabilities. An “update” is defined by an increase of the minor version number, from 21.12-0 to 21.12-1 for example, while an “upgrade” is defined by an increase of the major version number, like from 21.12 to 22.06.
The operation is chosen based on the provided WRA tarball, relative to the currently installed WRA version.
Note that in the case of failure in either of these operations, a configuration file inside the WRA tarball (metadata.yaml) will determine if an automatic rollback to the previously installed version occurs or not. If the rollback happens, the previous version will be installed with the “applied” status. If not, the new version will be installed with the “apply-failed” status.
These are the steps to update WRA:
- Retrieve the Wind River Studio Analytics application tarball.
- Update the application.
- Verify that the update process was successful.
These are the steps to upgrade WRA:
- Retrieve the Wind River Studio Analytics application tarball.
- Check if the installed WRA application is up to date with the latest minor version.
- Apply helm overrides.
- If upgrading a subcloud, copy the security overrides from the System Controller and apply them to the subcloud.
- Upgrade the application.
- Monitor the upgrade process.
- Run applicable post-requisites.
WRA upgrades require the same inputs as a clean install (install_wra); updates only require the URL of the WRA tarball to be installed.
Parameters:
wra_tgz_url:
description: URL containing the path to a WRA .tar.gz file.
type: string
default: ''
oidc:
description: Whether the OIDC DEX login setup is updated.
type: boolean
default: False
storage_resource_allocations:
description: >
The list of helm overrides for storage resources. For example:
[
{
"file_url": "elasticsearch-data.yaml",
"helm_chart_name": "elasticsearch-data"
},
{
"file_url": "arbitrary-overrides.yaml",
"helm_chart_name": "placeholder"
}
]
type: list
default: []
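For example, a minor-version update needs only the tarball URL, while an upgrade takes the same inputs as install_wra (the deployment ID and URL below are placeholders):
cfy executions start upgrade_wra -d wrcp-subcloud-1 -p '{"wra_tgz_url": "https://example.com/wind-river-analytics.tgz"}'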
Uninstall WRA:
The uninstall_wra workflow performs the following steps, which apply to both system controllers and subclouds:
- Removes the application.
- Deletes the application after the removal is finished.
- Removes the labels from the controllers.
- Cleans up unused docker containers.
The workflow does not require any parameters.
NetApp Trident Storage Upgrade Automation (upgrade_trident)
Overview
Astra Trident deploys in Kubernetes clusters as pods and provides dynamic storage orchestration services for your Kubernetes workloads. It enables your containerized applications to quickly and easily consume persistent storage from NetApp’s broad portfolio, which includes ONTAP (AFF/FAS/Select/Cloud/Amazon FSx for NetApp ONTAP), Element software (NetApp HCI/SolidFire), as well as the Azure NetApp Files service and Cloud Volumes Service on Google Cloud.
The NetApp Trident upgrade process is integrated as a new workflow. This workflow can be broken into a series of relatively independent tasks that are called in order:
- Trident Check: Checks whether Trident is being used and also checks whether it is up to date. This step decides whether the workflow should continue.
- Trident Health Check (pre upgrade): Checks whether Trident is in a working state just to make sure it can be safely upgraded.
- Trident Upgrade: Reinstalls Trident to perform an upgrade. When you uninstall Trident, the Persistent Volume Claim (PVC) and Persistent Volume (PV) used by the Astra Trident deployment are not deleted. PVs that have already been provisioned will remain available while Astra Trident is offline, and Astra Trident will provision volumes for any PVCs that are created in the interim once it is back online.
- Trident Health Check (post upgrade): Checks whether Trident is in a working state just to make sure the upgrade worked as intended.
Parameters:
type_names:
default: ["windriver.nodes.wrcp.Storage"]
description: |
Do not change!
This workflow runs on nodes of type 'windriver.nodes.wrcp.Storage';
changing this parameter can cause unexpected behavior.
How to run the upgrade workflow
Upgrades can be performed via the “upgrade_trident” workflow; the user only needs to provide the target’s deployment ID:
cfy executions start upgrade_trident --deployment-id <DEPLOYMENT_ID>
When a platform upgrade is performed, the newer WRCP version already comes with the newer version of tridentctl which is then used to upgrade Trident. The workflow does not come with any rollback functionality as that would require downgrading tridentctl, which might make it incompatible with the Kubernetes version being used in the system.
Certificate Audit (audit_certificates)
Summary
This workflow will scan for all certificates it can find in a WRCP system. Some of them are stored in Kubernetes and can be read using its API, while others are only present in the system as files and have to be read using SSH.
- SSH Certificates: the workflow searches hard-coded folders and files, specified below. The specific implementation for reading these certificates is in the windriver_wrcp/sdk/certificates_ssh.py and windriver_wrcp/tasks.py files.
/etc/ssl/private/server-cert.pem
/etc/ssl/private/registry-cert.crt
/etc/openldap/certs/openldap-cert.crt
/etc/pki/ca-trust/source/anchors/dc-adminep-root-ca.crt
/etc/ssl/private/admin-ep-cert.pem
/etc/etcd/ca.crt
/etc/etcd/etcd-client.crt
/etc/etcd/etcd-server.crt
/etc/etcd/apiserver-etcd-client.crt
/etc/ssl/private/openstack/cert.pem
/etc/ssl/private/openstack/ca-cert.pem
/opt/platform/config/*/ssl_ca/*
- K8s Certificates are stored in the cluster’s secrets, so the plugin uses the Kubernetes API to list all the secrets in the cluster and fetch the certificates inside them. The specific implementation for reading these certificates is in the windriver_wrcp/sdk/certificates_k8s.py and windriver_wrcp/tasks.py files.
Note: the best way to list all of the certificates in the platform is to use the show-certs.sh script. If any changes are required in this workflow to improve certificate detection, check the script’s code (/usr/local/bin/show-certs.sh in WRCP) and adapt any new changes in it to the plugin. Also note that avoiding SSH usage is encouraged.
Runtime Properties
This workflow saves and uses two keys in the node’s runtime properties:
- __cert_audit: this key will hold the actual certificate data fetched from WRCP,
- __cert_summary: this key is used by the web GUI to draw the tables in the Certificate Dashboard page.
Usage
To run this workflow, simply choose the desired environment(s) and run the audit_certificates
workflow from either the environment’s page or the environments list page (bulk action).
The workflow does not require any parameters.
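The workflow can also be started from the CLI, for example (the deployment ID is a placeholder):
cfy executions start audit_certificates -d wrcp-subcloud-1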
WRCP Provisioning (install_wrcp)
This workflow can provision a new WRCP installation on servers with Hewlett Packard Enterprise (HPE) or Dell hardware. The supported WRCP configurations are AIO-SX and AIO-DX. The requirements are:
- HPE Server with full support for Redfish API (Gen9 or above) with ILO 4 (starting on 2.30) or ILO 5, or Dell Server with full support for Redfish API (Gen9 or above) with Dell iDRAC8 or above.
- A WRCP base ISO to be customized
- An instance of CFS (FileServer, CustomISO API) accessible by the target server
This workflow includes the following 4 steps:
- Custom ISO: Using the files input, creates a new custom WRCP ISO to be used in the installation.
- Redfish: Inserts the custom ISO into the virtual DVD drive and reboots the server.
- Bootstrap and DM: After the initial boot, starts the WRCP bootstrapping and then starts the Deployment Manager.
- Custom ISO cleanup: After all the steps are complete, the ISO created in the first step is deleted.
Node Types
windriver.nodes.CustomISO This node represents the step that builds the custom ISO using a base ISO together with the information provided in the CIQ files. Since this is the step that builds the ISO, it should be the first node in the System Install blueprint.
windriver.nodes.RedfishBMC This node is responsible for communicating with the BMC. Using the information present in the CIQ files, this step inserts the ISO generated by the CustomISO step into the Virtual Drive and then reboots the server.
windriver.nodes.controller.DM This node defines the operations that will run the initial bootstrap in WRCP, and then run the Deployment Manager. This step still uses the information contained in the CIQ files.
windriver.nodes.controller.SystemController This node will define the operations necessary for subcloud installation in a future release.
windriver.nodes.CustomISOCleaner This node deletes the ISO created by the CustomISO node. It should be the last node in the blueprint.
Runtime Properties:
- installer_image: WRCP ISO image url
- client_config: It contains client properties like auth_url, region_name, username, api_key, etc.
- client_config_https: Almost the same as above, but treats https endpoints specially.
- bootstrap_config: The information necessary for WRCP bootstrapping process. E.g. ssh_config, bootstrap_values, deployment_config, etc…
- bmc_config: BMC’s access information and credentials.
- iso_config: Data necessary to find and customize the WRCP ISO.
- configure_config: Special information about the custom ISO.
- ciq_file_path: Path to find CIQ files inside CFS server.
- golden_templates_config: Path to find Golden Templates files inside the CFS server
- dm_state: Stores DM’s last state.
- bootstrap_state: Stores the current state of the bootstrapping process.
- bmc_state: Relevant information about BMC state like power state, redfish state, virtual media state, etc…
- ssh_state: Information about SSH state.
How to use
- Upload the blueprint to Conductor
- Fill up the CIQ files
- Provide valid localhost.yaml and deployment-config.yaml files and rename them, respectively, to bootstrap_values.yaml.v1.jinja2 and deployment_config.yaml.v2.jinja2. Place them in the golden_configs directory.
- Upload the CIQ files and golden configs to CFS (FileServer and CustomISO API). Note that all the CIQ files should have the same extension, either xlsx or yaml.
- Upload Credentials
- Create a secret named wrcp_license containing a full copy of a compatible WRCP license (the secret must be named wrcp_license).
- Create a secret named ‘global_secrets’, filling in the following fields:
# This CIQ secrets Template applies to a single National Datacenter (NDC)
site_name: "global" # (String) required, Name of the NDC site for this secrets file
registry_username: username # (String) required, username for access to docker images registry
registry_password: password # (String) required, password for access to docker images registry
# This section lists BMC secrets per server
server_list:
- service_tag: "ndc_server_1" # (String) required with quotes, value should be lower case only,e.g '1a2b3c4'
bmc_user: user1 # (String) required, username of the BMC account to use
bmc_password: password1 # (String) required, password of the BMC account to use
esxi_root_password: password # (String) required, Root password of the ESXi host
- service_tag: "ndc_server_2"
bmc_user: user1
bmc_password: password1
esxi_root_password: password
- service_tag: "ndc_server_3"
bmc_user: user1
bmc_password: password1
esxi_root_password: password
- Create a secret named ‘CUSTOM_NAME_secrets’, filling in the following fields, where CUSTOM_NAME is the same name as the yaml filename in the ciq_file_list blueprint input. Using the example ciq_file_list below, it should be wr-duplex-1_2_secrets.
# This CIQ secrets Template applies to a single Regional Datacenter (RDC)
# RDC CIQ schema: schemas/regional_datacenter_ciq_secrets.spec.v1.json
site_name: wr-duplex-1 # (String) required, Name of the RDC site for this secrets file
server_list:
- service_tag: '' # (String) required with quotes, value should be lower case only,e.g '1a2b3c4'
bmc_user: user # (String) required, username of the BMC account to use
bmc_password: password # (String) required, password of the BMC account to use
- service_tag: ''
bmc_user: user
bmc_password: password
initial_sysadmin_password: syspwd* # (String) required, the initial password of controller-0 of this cell site
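For illustration, the credential secrets can be uploaded from files with the CLI; the file names below are placeholders, while the secret names follow the instructions above:
cfy secrets create wrcp_license -f ./wrcp-license.lic
cfy secrets create global_secrets -f ./global_secrets.yaml
cfy secrets create wr-duplex-1_2_secrets -f ./wr-duplex-1_2_secrets.yaml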
- Create the deployment. While filling in the inputs, pay special attention to the next two fields.
Example of yaml based deployment:
ciq_file_list:
[
{
"file_name": "wr-duplex-1_2_ndc.yaml",
"file_type": "NDC"
},
{
"file_name": "wr-duplex-1_2.yaml",
"file_type": "RDC"
}
]
golden_config_file_list:
[
{
"file_name":"install_values.yaml.v1.jinja2",
"file_type":"RDCinstallValues"
},
{
"file_name":"bootstrap_values.yaml.v1.jinja2",
"file_type":"RDCboostrapValues"
},
{
"file_name":"deployment_config.yaml.v2.jinja2",
"file_type":"RDCdeploymentConfig"
},
{
"file_name":"server_hwprofile_std_config.yaml.v1.jinja2",
"file_type":"ServerHwProfileStd"
},
{
"file_name":"server_hwprofile_lowlatency_config.yaml.v1.jinja2",
"file_type":"ServerHwProfileLowlatency"
},
{
"file_name":"fqdd_map_rules.yaml.v1.jinja2",
"file_type":"FqddMapRules"
}
]
Example of xlsx based deployment:
ciq_file_list:
[
{
"file_name": "wr-duplex-1_2_ndc.xlsx,
"file_type": "NDC"
},
{
"file_name": "wr-duplex-1_2_rdc.xlsx",
"file_type": "RDC"
},
{
"file_name": "wr-duplex-1_2_external.xlsx",
"file_type": "EXTERNAL"
}
]
golden_config_file_list:
[
{
"file_name":"bootstrap_values.yaml.v1.jinja2",
"file_type":"RDCboostrapValues"
},
{
"file_name":"deployment_config.yaml.v2.jinja2",
"file_type":"RDCdeploymentConfig"
},
{
"file_name":"install_values.yaml.v2.jinja2",
"file_type":"RDCinstallValues"
}
]
- Then, run the install_wrcp workflow.
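For example (the deployment ID is a placeholder):
cfy executions start install_wrcp -d wrcp-aio-sx-site1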
Cloud Service Archive (CSAR) Operations
This feature allows the user to install and uninstall Cloud Service Archive (CSAR) files for specific vendors defined by Wind River. The two new workflows, install_csar and uninstall_csar, are described below.
Installing CSAR
The workflow install_csar handles the installation of the file. The CSAR is retrieved from files previously copied into WRCP or from public URLs. Basic verifications are done to check the available size of the target folder (/scratch) where the CSAR file is going to be extracted. If there is not enough space in the target folder, the installation is aborted and an error is reported to the user. The workflow is optimized to save space in the target folder, but as CSAR files might be very large, it is recommended to size the target folder based on the file size.
When the file is available in the target folder, the CSAR vendor is checked. If it is a valid vendor (specified by Wind River), then the package is extracted, imported into critools, and installed in the configured namespace. The target folder is then cleaned after the installation.
Use the following steps to install CSAR:
- Copy the CSAR file into the WRCP or provide a URL
- Define the namespace where the Pod will be deployed
- Run the workflow
Parameters:
parameters:
csar_url:
description: >
URL to CSAR package, or location inside wrcp.
- csar_url: file:///scratch/package.zip
- csar_url: https://example.com/package.zip
type: string
namespace:
description: >
Namespace used for k8s installation. If not specified, the namespace csar will be used.
default: csar
type: string
required: True
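For example (the deployment ID is a placeholder; the URL reuses the example above):
cfy executions start install_csar -d wrcp-subcloud-1 -p '{"csar_url": "https://example.com/package.zip", "namespace": "csar"}'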
uninstall_csar
With the workflow uninstall_csar, the user can also uninstall the CSAR Pods.
Use the following steps to uninstall CSAR:
- Provide the namespace where the Pod is deployed
- Check the delete_namespace flag if you want to remove the namespace at the end of the uninstall
- Run the workflow
Parameters:
parameters:
namespace:
description: >
Namespace used for k8s installation. If not specified, the namespace csar will be used.
default: csar
type: string
required: True
delete_namespace:
description: If toggled, will delete the namespace at the end of the uninstall.
default: False
type: boolean
required: True
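For example (the deployment ID is a placeholder):
cfy executions start uninstall_csar -d wrcp-subcloud-1 -p '{"namespace": "csar", "delete_namespace": true}'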