Conductor Documentation

WRCP Plugin

Introduction

WRCP Plugin enables users to orchestrate WRCP workflows from the manager.

NOTE:

WRCP Management

Each item on the following list is materialized as a workflow in the plugin.

Prerequisites

All stories require that you provide secrets for the following values:

You are free to change the names of the secrets; however, you must provide the secret names in the deployment inputs when enrolling a new system.
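
For example, the secrets can be created with the cfy CLI before enrollment. The secret names below (wrcp_admin_password, wrcp_ssh_password) are placeholders only; use whichever names you later pass in the deployment inputs:

  # Placeholder secret names; reuse the same names in the deployment inputs
  cfy secrets create wrcp_admin_password --secret-string '<ADMIN_PASSWORD>'
  cfy secrets create wrcp_ssh_password --secret-string '<SSH_PASSWORD>'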

Node Types

cloudify.nodes.WRCP.WRCP This node represents a WRCP System. A system can be a System Controller, a Standalone system, or a Subcloud.

Properties

Runtime Properties:

Labels

The following labels are possible for a WRCP system deployment:

Sites

During installation, the plugin requests a system’s location (latitude and longitude) from the WRCP API and creates a Site in Conductor. You can see the sites’ locations on the map on the Dashboard screen.

check_update_status

Checks the status of strategy steps and updates labels when they fail or complete.

Parameters:

      type_names:
        default: [ ]
        description: |
          Type names for which the workflow will execute the update
          check_update_status operation.
          By default the operation will execute for nodes of type
          cloudify.nodes.WRCP.WRCP
      node_ids:
        default: [ ]
        description: |
          Node template IDs for which the workflow will execute the
          check_update_status operation
      node_instance_ids:
        default: [ ]
        description: |
          Node instance IDs for which the workflow will execute the
          check_update_status operation
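
For example, the workflow can be started against a single deployment with the cfy CLI, optionally restricted to specific node instances (all values below are placeholders):

  # Check strategy status only on the listed node instances
  cfy executions start check_update_status -d <DEPLOYMENT_ID> -p '{"node_instance_ids": ["wrcp_abc123"]}'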

refresh_status

This workflow starts check_update_status on each subcloud.

Parameters:

      type_names:
        default: []
        description: |
          Type names for which the workflow will execute the refresh_status operation.
          By default the operation will execute for nodes of type
          cloudify.nodes.WRCP.WRCP
      node_ids:
        default: []
        description: |
          Node template IDs for which the workflow will execute the refresh_status
          operation
      node_instance_ids:
        default: []
        description: |
          Node instance IDs for which the workflow will execute the refresh_status
          operation

prestage

ATTENTION: Make sure that every path is accessible through USM, SSH, or a simple request.

The prestage workflow prepares the target system for an upgrade or update. Part of the prestage is loading the software, which can be the ISO or a patch, and performing any other steps required prior to the actual upgrade/update of the system.

Before running the prestage workflow on subclouds, make sure the upgrade has already been executed on the system controller, because no subcloud can have a version number higher than the system controller (windriver.nodes.controller). The prestage workflow includes a validation step that, when run on a system controller, ensures the upload of the dependency images needed by subclouds.

This workflow wraps all the operations needed to prepare for an upgrade or update, which previously happened in the upgrade workflow.

The workflow is also designed to handle the prestage of minor version updates (applying patches).

Parameters:

      software_version:
        type: string
        default: ""
        description: |
          The software version to which will be updated or upgraded
        constraints:
          - pattern: '^\d+.\d+.\d+$'
      type_names:
        default: []
        description: |
          Type names for which the workflow will execute the prestaging operation.
          By default the operation will execute for nodes of type
          cloudify.nodes.wrcp.WRCP
      node_ids:
        default: []
        description: |
          Node template IDs for which the workflow will execute the prestaging
          operation
      node_instance_ids:
        default: []
        description: |
          Node instance IDs for which the workflow will execute the prestaging
          operation
      license_file_path:
        type: string
        default: ''
        description: |
          File path where the license file is located.
          This license file will be applied as a part of upgrade process.
          E.g: /opt/mgmtworker/persistent/25.03/license.lic
          cfy_user must be able to access this file.
      iso_path:
        type: string
        default: ''
        description: |
          File path where the ISO with new SW version is located in the manager host.
          E.g.: /opt/mgmtworker/25.03/bootimage.iso
          Or the URL to the ISO image file with the new SW version. E.g.:
          https://build.server:/loadbuild/25.03/bootimage.iso
      sig_path:
        type: string
        default: ''
        description: |
          File path where the ISO signature is located in the manager host.
          E.g.: /opt/mgmtworker/25.03/bootimage.sig
          Or the URL to the ISO image signature file. E.g.:
          https://build.server:/loadbuild/25.03/bootimage.sig
      prestage_images:
        type: string
        default: ''
        description: |
          File path where the list of images to be prestaged is located.
          E.g.: /opt/mgmtworker/25.03/image_list.lst
          cfy_user must be able to access this file.
      subcloud_upgrade:
        type: boolean
        default: True
        description: |
          Flag to indicate if subclouds should be upgraded.
      patch_dir:
        default: ''
        description: |
          The path to a directory on the manager where the patches are located.
          The patches will be uploaded and applied.
      max_retries:
        description: |
          The maximum number of retries allowed for operations.
        type: integer
        default: 60

Upgrade

IMPORTANT: Before starting the workflow, make sure that the files below do not exist on controller-0. They will be copied automatically from the load installed during the upgrade.

The subclouds must have been provisioned with the Redfish BMC password. If not, you need to update the subcloud with install-values.yaml before starting the upgrade.

The prestage workflow needs to be executed prior to the upgrade.

Workflow to upgrade WRCP system components. It contains the following steps:

upgrade platform:

upgrade storage:

upgrade kubernetes:

[1] When you uninstall Trident, the Persistent Volume Claim (PVC) and Persistent Volume (PV) used by the Astra Trident deployment are not deleted. PVs that have already been provisioned will remain available while Astra Trident is offline, and Astra Trident will provision volumes for any PVCs that are created in the interim once it is back online.

Parameters:

      type_names:
        default: []
        description: |
          Type names for which the workflow will execute the upgrade operation.
          By default the operation will execute for nodes of a type that
          implements the `cloudify.interfaces.upgrade` interface.
      node_ids:
        default: []
        description: |
          Node template IDs for which the workflow will execute the upgrade
          operation
      node_instance_ids:
        default: []
        description: |
          Node instance IDs for which the workflow will execute the upgrade
          operation
      sw_version:
        default: ''
        description: |
          SW version to upgrade to. Example: 21.12
      kubernetes_version:
        default: ''
        description: |
          Kubernetes version to upgrade to. Example: 1.21.8
      force_flag:
        default: True
        description: |
          Force upgrade to run. Required if the workflow needs to run while
          there are active alarms.
      controller_apply_type:
        default: 'serial'
        description: |
          The apply type for controller hosts: serial or ignore.
      storage_apply_type:
        description: |
          The apply type for storage hosts: serial, parallel or ignore.
        default: 'serial'
      swift_apply_type:
        description: |
          The apply type for swift hosts: serial, parallel or ignore.
        default: 'serial'
      worker_apply_type:
        description: |
          The apply type for worker hosts: serial, parallel or ignore.
        default: 'serial'
      max_parallel_worker_hosts:
        description: |
          The maximum number of worker hosts to patch in parallel; only applicable if worker-apply-type = parallel. Default value is 2.
        default: 2
      default_instance_action:
        description: |
          The default instance action: stop-start or migrate.
        default: 'migrate'
      alarm_restrictions:
        description: |
          The strictness of alarm checks: strict or relaxed.
        default: 'strict'
      max_retries:
        description: |
          The maximum number of retries allowed for operations.
        type: integer
        default: 60

Running upgrade workflow on selected components

The upgrade workflow runs on any node whose type implements the cloudify.interfaces.upgrade interface.

Node Types

windriver.nodes.wrcp.Infrastructure This node represents the WRCP platform. For the upgrade workflow, this node is equivalent to the cloudify.nodes.WRCP.WRCP node in previous versions. The cloudify.nodes.WRCP.WRCP node is no longer able to run the “upgrade” workflows.

windriver.nodes.wrcp.Storage This node represents K8s storage.

windriver.nodes.wrcp.Kubernetes This node represents the K8s cluster.

Note: These new node types were introduced to allow the upgrade of WRCP components in sequence respecting inter-component dependencies.

By default, the workflow will perform the upgrade following the order in this list. The user can choose which components to upgrade by providing the target node types in the type_names execution parameter. For example, to run the platform upgrade only, the input must be:

type_names: ["windriver.nodes.wrcp.Infrastructure"]

To run storage and kubernetes upgrades only:

type_names: ["windriver.nodes.wrcp.Storage", "windriver.nodes.wrcp.Kubernetes"]
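
For example, a platform-only upgrade can be started from the cfy CLI by passing type_names inline (the deployment ID and version are placeholders):

  # Upgrade only the platform nodes, leaving storage and Kubernetes untouched
  cfy executions start upgrade -d <DEPLOYMENT_ID> -p '{"type_names": ["windriver.nodes.wrcp.Infrastructure"], "sw_version": "21.12"}'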

run_on_subclouds

The run_on_subclouds workflow creates a temporary deployment group based on the provided parameters, starts a batch execution on that group (running the workflow named workflow_id on all matched deployments), and deletes the group once the workflow finishes (all executions are completed).

Currently, the workflow looks for matches among all deployments (including environments and unrelated deployments), so labels and filters must be used carefully.

Parameters:

      workflow_id:
        description: |
          The name of the workflow to execute on all matched deployments (deployments group)
        default: ''
      workflow_inputs:
        description: The workflow parameters required during workflow execution
        default: {}
      labels:
        description: The labels on the basis of which deployments are selected to create
          a deployment group and start a batch execution of the workflow on each matched
          subcloud. The labels are optional and can be provided in parallel with filter_ids.
        default: {}
      filter_ids:
        description: |
          List of filter IDs on the basis of which deployments are selected to create
          a deployment group and start a batch execution of the workflow on each matched
          subcloud. The filter_ids can be provided in parallel with labels.
          Each provided ID must be an actual filter ID.
        default: []
      run_on_all_subclouds:
        type: boolean
        description: |
          You can run a workflow on all sub-environments.
          When the parameter is True, the deployments will be selected based on the rule:
          {"csys-obj-parent": "<parent_deployment_id>"}
          In this case, labels and filter_ids will be skipped.
        default: False
      type_names:
        default: [ ]
        description: |
          Type names for which the workflow will execute the
          start_subclouds_executions and
          wait_for_execution_end_and_delete_deployment_group operations.
          By default the operation will execute for nodes of type
          cloudify.nodes.WRCP.WRCP
      node_ids:
        default: [ ]
        description: |
          Node template IDs for which the workflow will execute the
          start_subclouds_executions and
          wait_for_execution_end_and_delete_deployment_group operations.
      node_instance_ids:
        default: [ ]
        description: |
          Node instance IDs for which the workflow will execute the
          start_subclouds_executions and
          wait_for_execution_end_and_delete_deployment_group operations.
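
For example, check_update_status could be started on every subcloud of a system controller deployment (the deployment ID is a placeholder):

  # Build a temporary group of all subclouds and run check_update_status on each of them
  cfy executions start run_on_subclouds -d <SYSTEM_CONTROLLER_DEPLOYMENT_ID> -p '{"workflow_id": "check_update_status", "run_on_all_subclouds": true}'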

Kubernetes Cluster Upgrade Automation (upgrade_kubernetes)

Overview

The Kubernetes upgrade is integrated as a new workflow in the WRCP plugin. This workflow can be broken into a series of tasks that need to be performed in order:

Parameters:

      type_names:
        default: ["windriver.nodes.wrcp.Kubernetes"]
        description: |
          Do not change!
          This workflow runs on nodes of type 'windriver.nodes.wrcp.Kubernetes';
          changing this parameter can cause unexpected behavior.
      kubernetes_version:
        description: 'Specify a target version for Kubernetes Orchestration. Example: v1.21.8'
        type: string
        default: ''
        constraints:
          - pattern: '^\d+.\d+.\d+$'
      worker_apply_type:
        description: >
          This option specifies the host concurrency of the Kubernetes version upgrade strategy.
          --SERIAL:worker hosts will be patched one at a time;
          --PARALLEL:worker hosts will be upgraded in parallel;
          --IGNORE:worker hosts will not be upgraded; strategy create will fail
        default: serial
        constraints:
          - valid_values:
            - serial
            - parallel
            - ignore
      max_parallel_worker_hosts:
        description: 'This option applies to the parallel worker apply type selection to specify the maximum worker hosts to upgrade in parallel'
        default: 2
        constraints:
          - in_range:
            - 2
            - 10
      default_instance_action:
        description: >
          This option only has significance when the wr-openstack application is loaded and there are instances running on worker hosts.
          It specifies how the strategy deals with worker host instances over the strategy execution.
            --STOP-START:Instances will be stopped before the host lock operation following the upgrade and then started again following the host unlock;
            --MIGRATE:Instances will be migrated off a worker host before the host lock operation following the upgrade.
        default: stop-start
        constraints:
          - valid_values:
            - stop-start
            - migrate
      alarm_restrictions:
        description: >
          This option sets how the Kubernetes version upgrade orchestration behaves when alarms are present.
            --STRICT:some basic alarms are ignored;
            --RELAXED:Non-management-affecting alarms are ignored.
        default: strict
        constraints:
          - valid_values:
            - relaxed
            - strict

Note: most of these parameters are directly inserted into WRCP commands, so additional details about them can be found in the platform documentation.
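
For example (placeholder deployment ID and version), a Kubernetes upgrade with relaxed alarm checks could be started as follows:

  # Upgrade Kubernetes to 1.21.8, ignoring non-management-affecting alarms
  cfy executions start upgrade_kubernetes -d <DEPLOYMENT_ID> -p '{"kubernetes_version": "1.21.8", "alarm_restrictions": "relaxed"}'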

Analytics Deployment Automation (install_wra, upgrade_wra, uninstall_wra)

Install WRA:

The install_wra workflow performs the following steps:

Parameters:

      wra_tgz_url:
        description: URL containing the path to a WRA .tar.gz file.
        type: string
        default: ''
      oidc:
        description: Whether the OIDC DEX login setup is updated.
        type: boolean
        default: False
      storage_resource_allocations:
        description: >
          The list of helm overrides for storage resources. For example:
          [
            {
                "file_url": "elasticsearch-data.yaml",
                "helm_chart_name": "elasticsearch-data"
            },
            {
                "file_url": "arbitrary-overrides.yaml",
                "helm_chart_name": "placeholder"
            }
          ]
        type: list
        default: []
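
For example, an installation could be started from the cfy CLI; the tarball URL below is a placeholder:

  # Install WRA from a tarball URL without touching the OIDC DEX setup
  cfy executions start install_wra -d <DEPLOYMENT_ID> -p '{"wra_tgz_url": "https://build.server/loadbuild/wra/wr-analytics.tar.gz", "oidc": false}'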

Upgrade WRA:

The upgrade_wra workflow supports both the “update” and “upgrade” capabilities. An “update” is defined by an increase of the minor version number, from 21.12-0 to 21.12-1 for example, while an “upgrade” is defined by an increase of the major version number, like from 21.12 to 22.06.

The operation is chosen based on the provided WRA tarball, relative to the currently installed WRA version.

Note that in the case of failure in either of these operations, a configuration file inside the WRA tarball (metadata.yaml) will determine if an automatic rollback to the previously installed version occurs or not. If the rollback happens, the previous version will be installed with the “applied” status. If not, the new version will be installed with the “apply-failed” status.

These are the steps to update WRA:

These are the steps to upgrade WRA:

WRA upgrades require the same inputs as a clean install (install_wra); updates only require the URL to the WRA tarball to be installed.

Parameters:

      wra_tgz_url:
        description: URL containing the path to a WRA .tar.gz file.
        type: string
        default: ''
      oidc:
        description: Whether the OIDC DEX login setup is updated.
        type: boolean
        default: False
      storage_resource_allocations:
        description: >
          The list of helm overrides for storage resources. For example:
          [
            {
                "file_url": "elasticsearch-data.yaml",
                "helm_chart_name": "elasticsearch-data"
            },
            {
                "file_url": "arbitrary-overrides.yaml",
                "helm_chart_name": "placeholder"
            }
          ]
        type: list
        default: []

Uninstall WRA:

The uninstall_wra workflow performs the following steps, which apply to both system controllers and subclouds:

The workflow does not require any parameters.

NetApp Trident Storage Upgrade Automation (upgrade_trident)

Overview

Astra Trident deploys in Kubernetes clusters as pods and provides dynamic storage orchestration services for your Kubernetes workloads. It enables your containerized applications to quickly and easily consume persistent storage from NetApp’s broad portfolio, which includes ONTAP (AFF/FAS/Select/Cloud/Amazon FSx for NetApp ONTAP), Element software (NetApp HCI/SolidFire), the Azure NetApp Files service, and Cloud Volumes Service on Google Cloud.

The NetApp Trident upgrade process is integrated as a new workflow. This workflow can be broken into a series of relatively independent tasks that are called in order:

  1. Trident Check: Checks whether Trident is being used and whether it is up to date. This step decides whether the workflow should continue.
  2. Trident Health Check: Checks whether Trident is in a working state, just to make sure it can be safely upgraded.
  3. Trident Upgrade: Reinstalls Trident to perform an upgrade. When you uninstall Trident, the Persistent Volume Claim (PVC) and Persistent Volume (PV) used by the Astra Trident deployment are not deleted. PVs that have already been provisioned will remain available while Astra Trident is offline, and Astra Trident will provision volumes for any PVCs that are created in the interim once it is back online.
  4. Trident Health Check: Checks whether Trident is in a working state just to make sure the upgrade worked as intended.

Parameters:

      type_names:
        default: ["windriver.nodes.wrcp.Storage"]
        description: |
          Do not change!
          This workflow runs on nodes of type 'windriver.nodes.wrcp.Storage'
          changing this parameter can cause unexpected behavior.

These steps are shown in the diagram below:

NetApp Trident Update

How to run the upgrade workflow

Upgrades can be performed via the “upgrade_trident” workflow; the user only needs to provide the target deployment ID:

cfy executions start upgrade_trident --deployment-id <DEPLOYMENT_ID> 

When a platform upgrade is performed, the newer WRCP version already comes with the newer version of tridentctl which is then used to upgrade Trident. The workflow does not come with any rollback functionality as that would require downgrading tridentctl, which might make it incompatible with the Kubernetes version being used in the system.

Certificate Audit (audit_certificates)

Summary

This workflow scans for all certificates it can find in a WRCP system. Some of them are stored in Kubernetes and can be read using its API, while others are only present in the system as files and have to be read over SSH:

  /etc/ssl/private/server-cert.pem
  /etc/ssl/private/registry-cert.crt
  /etc/openldap/certs/openldap-cert.crt
  /etc/pki/ca-trust/source/anchors/dc-adminep-root-ca.crt
  /etc/ssl/private/admin-ep-cert.pem
  /etc/etcd/ca.crt
  /etc/etcd/etcd-client.crt
  /etc/etcd/etcd-server.crt
  /etc/etcd/apiserver-etcd-client.crt
  /etc/ssl/private/openstack/cert.pem
  /etc/ssl/private/openstack/ca-cert.pem
  /opt/platform/config/*/ssl_ca/*

Note: The best way to list all of the certificates in the platform is to use the show-certs.sh script. If any changes are required in this workflow to improve certificate detection, check the script’s code (/usr/local/bin/show-certs.sh in WRCP) and adapt any new changes in it to the plugin. Also note that avoiding SSH usage is encouraged.
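
As a manual spot-check of one of the file-based certificates listed above (assuming the default sysadmin account with sudo access), the expiry of a single certificate can be read over SSH with openssl:

  # Print the expiry date of the platform HTTPS certificate (path taken from the list above)
  ssh sysadmin@<CONTROLLER_IP> "sudo openssl x509 -in /etc/ssl/private/server-cert.pem -noout -enddate"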

Runtime Properties

This workflow saves and uses two keys in the node’s runtime properties. They are:

Usage

To run this workflow, simply choose the desired environment(s) and run the audit_certificates workflow from either the environment’s page or the environments list page (bulk action).

The workflow does not require any parameters.

WRCP Provisioning (install_wrcp)

This workflow provisions a new WRCP installation on servers with Hewlett Packard Enterprise (HPE) or Dell hardware. The supported WRCP configurations are AIO-SX and AIO-DX. The requirements are:

This workflow includes the following 4 steps:

  1. Custom ISO: Uses the input files to create a new custom WRCP ISO to be used in the installation.
  2. Redfish: Inserts the custom ISO into the virtual DVD drive and reboots the server.
  3. Bootstrap and DM: After the initial boot, starts the WRCP bootstrapping and then the Deployment Manager.
  4. Custom ISO cleanup: After all the steps are complete, the ISO created in the first step is deleted.

Node Types

windriver.nodes.CustomISO This node represents the step that builds the custom ISO from a base ISO together with the information entered in the CIQ files. Since this is the step that builds the ISO, it is supposed to be the first one in the System Install blueprint.

windriver.nodes.RedfishBMC This node is responsible for communicating with the BMC. Using the information present in the CIQ files, this step inserts the ISO generated by the CustomISO step into the Virtual Drive and then reboots the server.

windriver.nodes.controller.DM This node defines the operations that will run the initial bootstrap in WRCP, and then run the Deployment Manager. This step still uses the information contained in the CIQ files.

windriver.nodes.controller.SystemController This node will define the operations necessary for subcloud installation in a future release.

windriver.nodes.CustomISOCleaner This node deletes the ISO created by the CustomISO node. It is supposed to be the last node in the blueprint.

Runtime Properties:

How to use

  1. Upload the blueprint to Conductor
  2. Fill in the CIQ files
  3. Provide valid localhost.yaml and deployment-config.yaml files and rename them, respectively, to bootstrap_values.yaml.v1.jinja2 and deployment_config.yaml.v2.jinja2. Place them in the golden_configs directory.
  4. Upload the CIQ files and golden configs to CFS (FileServer and CustomISO API). Note that all the CIQ files should have the same extension, either xlsx or yaml.
  5. Upload Credentials
# This CIQ secrets Template applies to a single National Datacenter (NDC)
site_name: "global"               # (String) required, Name of the NDC site for this secrets file

registry_username: username       # (String) required, username for access to docker images registry
registry_password: password       # (String) required, password for access to docker images registry

# This section lists BMC secrets per server
server_list:
- service_tag: "ndc_server_1"     # (String) required with quotes, value should be lower case only, e.g. '1a2b3c4'
  bmc_user: user1                 # (String) required, username of the BMC account to use
  bmc_password: password1         # (String) required, password of the BMC account to use
  esxi_root_password: password    # (String) required, Root password of the ESXi host

- service_tag: "ndc_server_2"
  bmc_user: user1
  bmc_password: password1
  esxi_root_password: password

- service_tag: "ndc_server_3"
  bmc_user: user1
  bmc_password: password1
  esxi_root_password: password

# This CIQ secrets Template applies to a single Regional Datacenter (RDC)
# RDC CIQ schema: schemas/regional_datacenter_ciq_secrets.spec.v1.json

site_name: wr-duplex-1      # (String) required, Name of the RDC site for this secrets file

server_list:
- service_tag: ''                   # (String) required with quotes, value should be lower case only, e.g. '1a2b3c4'
  bmc_user: user                    # (String) required, username of the BMC account to use
  bmc_password: password            # (String) required, password of the BMC account to use

- service_tag: ''
  bmc_user: user
  bmc_password: password

initial_sysadmin_password: syspwd*  # (String) required, the initial password of controller-0 of this cell site
  6. Create Deployment: While filling in the inputs, pay special attention to the next two fields.

Example of a YAML-based deployment:

ciq_file_list:
 [
     {
         "file_name": "wr-duplex-1_2_ndc.yaml",    
         "file_type": "NDC"
     },
     {
         "file_name": "wr-duplex-1_2.yaml",
         "file_type": "RDC"
     }
 ]
 golden_config_file_list:
 [
     {
         "file_name":"install_values.yaml.v1.jinja2",
         "file_type":"RDCinstallValues"
     },
     {
         "file_name":"bootstrap_values.yaml.v1.jinja2",
         "file_type":"RDCboostrapValues"
     },
     {
         "file_name":"deployment_config.yaml.v2.jinja2",
         "file_type":"RDCdeploymentConfig"
     },
     {
         "file_name":"server_hwprofile_std_config.yaml.v1.jinja2",
         "file_type":"ServerHwProfileStd"
     },
     {
         "file_name":"server_hwprofile_lowlatency_config.yaml.v1.jinja2",
         "file_type":"ServerHwProfileLowlatency"
     },
     {
         "file_name":"fqdd_map_rules.yaml.v1.jinja2",
         "file_type":"FqddMapRules"
     }
 ]

Example of an xlsx-based deployment:

ciq_file_list: 
 [ 
     { 
         "file_name": "wr-duplex-1_2_ndc.xlsx,     
         "file_type": "NDC" 
     }, 
     { 
         "file_name": "wr-duplex-1_2_rdc.xlsx", 
         "file_type": "RDC" 
     }, 
     { 
         "file_name": "wr-duplex-1_2_external.xlsx", 
         "file_type": "EXTERNAL" 
     } 
 ] 
 
golden_config_file_list: 
 [ 
     { 
         "file_name":"bootstrap_values.yaml.v1.jinja2", 
         "file_type":"RDCboostrapValues" 
     }, 
     { 
         "file_name":"deployment_config.yaml.v2.jinja2", 
         "file_type":"RDCdeploymentConfig" 
     }, 
     { 
         "file_name":"install_values.yaml.v2.jinja2", 
         "file_type":"RDCinstallValues" 
     } 
 ]

  7. Then, run the “install_wrcp” workflow.
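
For example, using the cfy CLI (the deployment ID is a placeholder):

  # Start provisioning once the CIQ files, golden configs and secrets are in place
  cfy executions start install_wrcp -d <DEPLOYMENT_ID>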

Cloud Service Archive (CSAR) Operations

This feature allows the user to install and uninstall Cloud Service Archive (CSAR) files for specific vendors defined by Wind River. The two new workflows are:

Installing CSAR

The workflow install_csar handles the installation of the file. The CSAR is retrieved from files previously copied into WRCP or from public URLs. Basic verifications are done to check the available size of the target folder (/scratch) where the CSAR file is going to be extracted. If there is not enough space in the target folder, the installation is aborted and an error is reported to the user. The workflow is optimized to save space in the target folder, but as CSAR files might be very large, it is recommended to size the target folder based on the file size. When the file is available in the target folder, the CSAR vendor is checked. If it is a valid vendor (specified by Wind River), then the package is extracted, imported into critools and installed in the configured namespace. The target folder is then cleaned after the installation.

Use the following steps to install CSAR:

  1. Copy the CSAR file into WRCP or provide a URL
  2. Define the namespace where the Pod will be deployed
  3. Run the workflow

Parameters:

    parameters:
      csar_url:
        description: >
          URL to CSAR package, or location inside wrcp.
          - csar_url: file:///scratch/package.zip
          - csar_url: https://example.com/package.zip
        type: string

      namespace:
        description: >
          Namespace used for k8s installation. If not specified, the namespace csar will be used.
        default: csar
        type: string
        required: True
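
For example, a CSAR hosted on a web server could be installed into the default namespace as follows (the URL is a placeholder):

    # Install a CSAR from a URL into the "csar" namespace
    cfy executions start install_csar -d <DEPLOYMENT_ID> -p '{"csar_url": "https://example.com/package.zip", "namespace": "csar"}'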

uninstall_csar

With the uninstall_csar workflow, the user is also able to uninstall the CSAR Pods.

Use the following steps to uninstall CSAR:

  1. Inform the namespace where the Pod is deployed
  2. Check delete_namespace if you want to remove the namespace at the end of the uninstall
  3. Run the workflow

Parameters:

    parameters:
      namespace:
        description: >
          Namespace used for k8s installation. If not specified, the namespace csar will be used.
        default: csar
        type: string
        required: True
      delete_namespace:
        description: If toggled, will delete the namespace at the end of the uninstall.
        default: False
        type: boolean
        required: True