Conductor Documentation

Terraform Plugin

Overview

The Terraform plugin enables you to perform the following tasks from Studio Conductor by using its node types inside your blueprints:

  1. Install and uninstall the Terraform binary and its plugins [if you are not using an existing system installation].
  2. Manage the lifecycle of Terraform modules and sources [ init, plan, apply, refresh, state, import, outputs, destroy ].
  3. Run linters and security checks [ tfsec, tflint, terratag ].
  4. Estimate the cost of the source via [ infracost ].

The plugin manages Terraform state by storing it both as a runtime property and as a file inside the deployment directory.
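
As a minimal sketch of how the pieces fit together (the plugin import name and archive path below are illustrative assumptions; full examples follow in the sections below):

  imports:
    # assumed import name for the plugin
    - plugin:cloudify-terraform-plugin

  node_templates:
    terraform:
      type: cloudify.nodes.terraform

    module:
      type: cloudify.nodes.terraform.Module
      properties:
        resource_config:
          source:
            # hypothetical module archive inside the blueprint
            location: resources/template.zip
      relationships:
        - target: terraform
          type: cloudify.terraform.relationships.run_on_host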

Requirements

Node Types

cloudify.nodes.terraform

This is the base node type, which represents a Terraform installation.

Properties

Example

In the following example, we deploy a Terraform installation, with the Terraform executable saved under the deployment directory:

  inputs:
    terraform_plugins:
      default:
        registry.terraform.io/hashicorp/azurerm/2.52.0/linux_amd64/: 'https://releases.hashicorp.com/terraform-provider-azurerm/2.52.0/terraform-provider-azurerm_2.52.0_linux_amd64.zip'

  node_templates:
    terraform:
      type: cloudify.nodes.terraform
      properties:
        resource_config:
          plugins: { get_input: terraform_plugins }
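
To pin a specific Terraform version rather than using the plugin default, resource_config also accepts an installation source. This property name is an assumption here, reusing the URL format of the update_terraform_binary workflow shown later:

  node_templates:
    terraform:
      type: cloudify.nodes.terraform
      properties:
        resource_config:
          # assumed property, mirroring the update_terraform_binary parameter
          installation_source: https://releases.hashicorp.com/terraform/1.3.4/terraform_1.3.4_linux_amd64.zip
          plugins: { get_input: terraform_plugins }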

cloudify.nodes.terraform.Module

This refers to a Terraform module.

Properties

OPA

OPA support was introduced in version 0.19.14 of the Terraform plugin.

The terraform.opa interface operation evaluates an Open Policy Agent (OPA) decision against a Terraform plan. Calling this interface operation will initialize Terraform (if it has not already been initialized), generate a Terraform plan, and then evaluate the decision against the provided OPA policies.

The operation provides a thin wrapper around running opa exec against the Terraform plan in JSON format.

OPA is configured by setting the desired parameters in cloudify.nodes.terraform.Module:properties.opa_config:

A policy bundle is a ZIP archive that can be passed to the --bundle flag for opa exec. To create a policy bundle in the format required by Conductor, simply zip up the contents of an OPA directory containing one or more Rego files. For example:

$ ls
main.rego  security_groups.rego

$ zip -r policy.zip *
  adding: main.rego (deflated 21%)
  adding: security_groups.rego (deflated 47%)

The policy_bundles parameter accepts a list of bundles in the same format used by the source parameter for the Terraform module. Each policy bundle must have a name, which is used to name the directory on the Conductor Manager when the bundle is extracted.

The example below shows a single policy bundle named my-policy. This bundle is located in resources/policy.zip, which is within the blueprint archive:

  module:
    type: cloudify.nodes.terraform.Module
    properties:
      opa_config:
        policy_bundles:
          - name: my-policy
            location: resources/policy.zip
    relationships:
      - target: terraform
        type: cloudify.terraform.relationships.run_on_host

The terraform.opa operation also requires that the decision parameter be set. See the “Operations” section below for more information.
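
For example, assuming a deployment named tf and a module node template named module, you could evaluate a decision on demand through the built-in execute_operation workflow (the decision value data.terraform.deny is a hypothetical Rego rule path):

$ cfy executions start execute_operation -d tf \
    -p operation=terraform.opa \
    -p 'node_ids=["module"]' \
    -p 'operation_kwargs={"decision": "data.terraform.deny"}'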

Operations

For more information about the import command, refer to the Terraform documentation.

Runtime Properties

Example

In the following example, we deploy a Terraform plan:

  cloud_resources:
    type: cloudify.nodes.terraform.Module
    properties:
      resource_config:
        source:
          location: https://github.com/cloudify-community/blueprint-examples/archive/master.zip
        source_path: virtual-machine/resources/terraform/template
        variables:
          access_key: { get_secret: aws_access_key_id }
          secret_key: { get_secret: aws_secret_access_key }
          aws_region: { get_input: aws_region_name }
          aws_zone: { get_input: aws_zone_name }
          admin_user: { get_input: agent_user }
          admin_key_public: { get_attribute: [agent_key, public_key_export] }
      tflint_config:
        installation_source: https://github.com/terraform-linters/tflint/releases/download/v0.34.1/tflint_linux_amd64.zip
        config:
          - type_name: config
            option_value:
              module: "true"
          - type_name: plugin
            option_name: aws
            option_value:
              enabled: "true"
          - type_name: rule
            option_name: terraform_unused_declarations
            option_value:
              enabled: "true"
    relationships:
      - target: terraform
        type: cloudify.terraform.relationships.run_on_host
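
For reference, assuming tflint's standard configuration format, the tflint_config entries above correspond roughly to the following hand-written .tflint.hcl file:

# sketch of the equivalent tflint configuration
config {
  module = true
}

plugin "aws" {
  enabled = true
}

rule "terraform_unused_declarations" {
  enabled = true
}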

Relationships

Workflows

refresh_terraform_resources

The refresh_terraform_resources workflow pulls the remote state and updates the resources runtime property of the cloudify.nodes.terraform.Module node instance with that state.

To execute the refresh_terraform_resources workflow on node instances of a specific node template:

Example command:

$ cfy executions start refresh_terraform_resources -d tf -p node_instance_ids=cloud_resources_j9l2y3
Executing workflow `refresh_terraform_resources` on deployment `tf` [timeout=900 seconds]
2021-10-10 16:24:32.278  CFY <tf> Starting 'refresh_terraform_resources' workflow execution

terraform_plan

The Terraform plan workflow enables you to run the Terraform plan command against your Terraform module and to store the results in the node instances’ plan and plain_text_plan runtime properties.

NOTE: Remember that if your Terraform module depends on runtime data, then that data must exist. For example, if it requires a zip file created by a different node template, then the Terraform plan cannot run unless the zip node has already been installed. For this reason, the terraform_plan workflow is executed primarily for day two operations (after install).

Parameters

Example command:

# list the node instances in a deployment:
$ cfy node-inst list -d tf
Listing instances for deployment tf...

Node-instances:
+------------------------+---------------+---------+-----------------+---------+------------+----------------+------------+
|           id           | deployment_id | host_id |     node_id     |  state  | visibility |  tenant_name   | created_by |
+------------------------+---------------+---------+-----------------+---------+------------+----------------+------------+
|    agent_key_cp18tq    |       tf      |         |    agent_key    | started |   tenant   | default_tenant |   admin    |
| cloud_resources_j9l2y3 |       tf      |         | cloud_resources | started |   tenant   | default_tenant |   admin    |
|    terraform_p4e4zy    |       tf      |         |    terraform    | started |   tenant   | default_tenant |   admin    |
+------------------------+---------------+---------+-----------------+---------+------------+----------------+------------+


# Execute the workflow for the cloud resources node instance:
$ cfy exec start terraform_plan -d tf -p node_instance_ids=cloud_resources_j9l2y3
Executing workflow `terraform_plan` on deployment `tf` [timeout=900 seconds]
2021-10-10 16:18:30.155  CFY <tf> Starting 'terraform_plan' workflow execution...


# Execute the workflow for a new source path (different module in the same zip).
$ cfy exec start terraform_plan -d tf -p node_instance_ids=cloud_resources_j9l2y3 -p source_path=template/modules/private_vm
Executing workflow `terraform_plan` on deployment `tf` [timeout=900 seconds]
2021-10-10 16:21:03.689  CFY <tf> Starting 'terraform_plan' workflow execution

reload_terraform_template

The reload_terraform_template workflow updates the remote state with new changes in source and/or source_path, or attempts to reset the remote state to the original state if source or source_path are not provided.

To execute the reload_terraform_template workflow on node instances of a specific node template:

Example command:

$ cfy executions start reload_terraform_template -d tf -p node_instance_ids=cloud_resources_j9l2y3 -p source_path=template/modules/private_vm
Executing workflow `reload_terraform_template` on deployment `tf` [timeout=900 seconds]
2021-10-10 16:30:34.523  CFY <tf> Starting 'reload_terraform_template' workflow execution

update_terraform_binary

The update_terraform_binary workflow executes the delete and create operations, in that order, on the cloudify.nodes.terraform node instance.

To execute the update_terraform_binary workflow on node instances of a specific node template:

Example command:

$ cfy executions start update_terraform_binary -d tf -p node_instance_ids=terraform_j2g1y2 -p installation_source='https://releases.hashicorp.com/terraform/1.3.4/terraform_1.3.4_linux_amd64.zip'
Executing workflow `update_terraform_binary` on deployment `tf` [timeout=900 seconds]
2021-10-10 16:24:32.278  CFY <tf> Starting 'update_terraform_binary' workflow execution

import_terraform_resource

The import_terraform_resource workflow imports the remote resources with new changes in source and/or source_path, or uses the original values if source or source_path are not provided.

To execute the import_terraform_resource workflow on node instances of a specific node template:

Example command:

$ cfy executions start import_terraform_resource -d tf -p node_instance_ids=cloud_resources_j9l2y3 -p source_path=template/modules/private_vm -p resource_address=aws_instance.example_vm -p resource_id=i-0be712xxxxb437
Executing workflow `import_terraform_resource` on deployment `tf` [timeout=900 seconds]
2021-10-10 16:30:34.523  CFY <tf> Starting 'import_terraform_resource' workflow execution

run_infracost

The run_infracost workflow runs Infracost against the Terraform module to generate a cost estimate, using new changes in source and/or source_path if provided, or the original values otherwise.

Example command:

$ cfy executions start run_infracost -d tf
Executing workflow `run_infracost` on deployment `tf` [timeout=900 seconds]
2021-10-10 16:30:34.523  CFY <tf> Starting 'run_infracost' workflow execution

Workflow outputs are saved in plain_text_infracost and infracost runtime properties.
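
After the workflow completes, you can inspect the results like any other runtime property, for example (the instance ID here is illustrative):

$ cfy node-instances get cloud_resources_j9l2y3 --json | jq -r '.runtime_properties.plain_text_infracost'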

migrate_state

You can migrate from local Terraform state file to a hosted storage state, such as S3 or Azure storage account, with the migrate_state workflow. This command exposes the functionality of terraform init -migrate-state. This workflow wraps the terraform.migrate interface for the cloudify.nodes.terraform.Module node type.

That operation accepts two parameters: backend and backend_config.

The backend parameter is a dict with two static keys: name (the backend type, for example s3) and options (a dict of options rendered into the backend block).

You can invoke the migrate_state workflow from the CLI like this, using a YAML file describing the required parameters:

$ cfy executions start migrate_state -d [DEPLOYMENT_ID] -p migrate-state-params.yaml

Example migrate-state-params.yaml file:

node_ids:
  - cloud_resources
backend:
  name: s3
  options:
    bucket: foo
    key: bar
    region: var.aws_region
    access_key: var.access_key
    secret_key: var.secret_key
backend_config:
  bucket: foo
  key: bar
  region: us-east-2
  access_key: { get_secret: aws_access_key_id }
  secret_key: { get_secret: aws_secret_access_key }
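
For orientation, migrating state in plain Terraform amounts to adding a backend block to the module and re-running init with the -migrate-state flag, passing sensitive values as -backend-config options; the workflow automates these steps. A rough manual equivalent (a sketch, not the plugin's literal output):

terraform {
  backend "s3" {
    bucket = "foo"
    key    = "bar"
  }
}

$ terraform init -migrate-state \
    -backend-config="region=us-east-2" \
    -backend-config="access_key=..." \
    -backend-config="secret_key=..."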

Terraform Outputs

You can expose outputs from your Terraform template to the node instance runtime properties.

For example, you can expose a simple message by adding the outputs block to your main.tf:

output "foo" {
  value = "bar"
}

You can also expose meaningful information, such as IP addresses, subnets, and ports:

output "ip" {
  value = aws_instance.example_vm.public_ip
}

This information will be stored during the install workflow or the reload_terraform_template workflow.

[user@cloudify-manager ~]# cfy node-instances get cloud_resources_02mhg1 --json | jq -r '.runtime_properties.outputs'
{
  "foo": {
    "sensitive": false,
    "type": "string",
    "value": "bar"
  }
}

You can then use these outputs in your blueprint, for example as deployment capabilities:

capabilities:
  ip:
    value: { get_attribute: [ cloud_resources, outputs, ip, value ] }
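
Assuming a deployment named tf, you can then read the capability value from the CLI:

$ cfy deployments capabilities tf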

NOTE: You must expose the output in the main Terraform file in the source_path provided in your template or in your reload_terraform_template workflow parameters.

Notes