Installing Wind River Conductor (Helm)
Overview
Use the following steps to deploy a Conductor Manager to Kubernetes with a Helm chart.
Configure Conductor Helm values
- Get the wind-river-conductor-24.9.0.tgz chart file from the licensed Wind River registry.
- Create an override-values.yaml as described in “Configuration parameters” below and customize the fields as required.
  - To run on a WRCP host, for example, it is recommended to use the cephfs storage class for all occurrences of storageClass and for mgmtworker.volume.pvc.class in that file.
- To be able to access the Conductor UI and the external CLI, configure an Ingress service.
Configure Ingress service
Ingress without hostname (Nginx example)
To access the Conductor UI and CLI without a hostname, directly through the IP address of the host where Conductor is installed, use the following:
# override-values.yaml
ingress:
enabled: true
host:
ingressClassName: nginx
Ingress with hostname (Nginx example)
To expose Conductor through a domain hostname, make sure the hostname is registered on the network's DNS server, or add it to the system's hosts file.
# override-values.yaml
ingress:
enabled: true
host: mydomain-conductor.com
ingressClassName: nginx
# /etc/hosts Linux example
10.10.100.50 mydomain-conductor.com
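Once the hostname resolves, a quick curl check can confirm the ingress is reachable. This is a sketch using the example hostname above; adjust it to your environment:

```shell
# Expect an HTTP response (200 or a redirect) from the Conductor console
curl -I http://mydomain-conductor.com/console
```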
Expose Ingress for WRCP Systems (Nginx)
After following the previous section, create a Global Network Policy to allow traffic on HTTP port 80. For HTTPS access, set up production-ready certificates and customize the Conductor Helm values accordingly.
Create the following global-networkpolicy.yaml on the WRCP controller host and run kubectl apply -f global-networkpolicy.yaml. This allows ingress traffic through the OAM IP address.
# This rule opens up default HTTP port 80
# To apply use:
# kubectl apply -f global-networkpolicy.yaml
apiVersion: crd.projectcalico.org/v1
kind: GlobalNetworkPolicy
metadata:
name: gnp-oam-overrides
spec:
ingress:
- action: Allow
destination:
ports:
- 80
protocol: TCP
order: 500
selector: has(iftype) && iftype == 'oam'
types:
- Ingress
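After applying the manifest, you can confirm the policy was created. A sketch, assuming the Calico CRDs are present on the WRCP controller:

```shell
kubectl apply -f global-networkpolicy.yaml
# List the policy to confirm it exists
kubectl get globalnetworkpolicies.crd.projectcalico.org gnp-oam-overrides
```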
Example of override-values.yaml
# use these values override to run the wind-river-conductor chart
composer_backend:
image: (registry-link)/composer-backend:latest
affinity: {}
nodeSelector: {}
tolerations: []
composer_frontend:
image: (registry-link)/composer-frontend:latest
affinity: {}
nodeSelector: {}
tolerations: []
execution_scheduler:
image: (registry-link)/execution-scheduler:latest
affinity: {}
nodeSelector: {}
tolerations: []
mgmtworker:
image: (registry-link)/mgmtworker:latest
affinity: {}
nodeSelector: {}
tolerations: []
volume:
pvc:
name: "mgmtworker-pvc"
size: "10Gi"
modes:
- "ReadWriteOnce"
class: ""
rest_service:
image: (registry-link)/restservice:latest
s3:
clientImage: (registry-link)/aws-cli:2.15.52
config:
manager:
security:
    # When changing these credentials, you must set the same values in wrc_endpoint_secret.data.username and .password,
    # or make sure to use these anchors
admin_username: &wrc_admin_user admin
admin_password: &wrc_admin_pass admin
curl_image: (registry-link)/alpine/curl:8.5.0
bind_host: "[::]"
affinity: {}
nodeSelector: {}
tolerations: []
api_service:
image: (registry-link)/apiservice:latest
bind_host: "[::]"
affinity: {}
nodeSelector: {}
tolerations: []
stage_backend:
image: (registry-link)/stage-backend:latest
affinity: {}
nodeSelector: {}
tolerations: []
stage_frontend:
image: (registry-link)/stage-frontend:latest
affinity: {}
nodeSelector: {}
tolerations: []
wrc_endpoint_secret:
# Must match rest_service.config.manager.security.admin_username and .admin_password values or aliases
data:
username: *wrc_admin_user
password: *wrc_admin_pass
system_inventory_manager:
image: (registry-link)/system-inventory-manager:latest
affinity: {}
nodeSelector: {}
tolerations: []
upgrade_policy_manager:
image: (registry-link)/upgrade-policy-manager:latest
affinity: {}
nodeSelector: {}
tolerations: []
upgrade_group_manager:
image: (registry-link)/upgrade-group-manager:latest
affinity: {}
nodeSelector: {}
tolerations: []
wrc_secret:
image: (registry-link)/wrc-secret-operator:latest
affinity: {}
nodeSelector: {}
tolerations: []
backup_group_manager:
image: (registry-link)/backup-group-manager:latest
affinity: {}
nodeSelector: {}
tolerations: []
rest_api_server:
image: (registry-link)/rest-api-app:latest
affinity: {}
nodeSelector: {}
tolerations: []
bind_host: "::"
service:
name: "rest-api-server"
type: ClusterIP
nginx:
image: (registry-link)/nginxinc/nginx-unprivileged:1.25.4
replicas: 1
affinity: {}
nodeSelector: {}
tolerations: []
seaweedfs:
global:
storageClass: ""
image:
registry: (registry-link)
repository: bitnami/seaweedfs
tag: 3.72.0-debian-12-r0
master:
persistence:
enabled: true
mountPath: /data
subPath: ""
storageClass: ""
annotations: {}
accessModes:
- ReadWriteOnce
size: 8Gi
existingClaim: ""
selector: {}
dataSource: {}
volume:
dataVolumes:
-
name: data-0
mountPath: /data-0
subPath: ""
persistence:
enabled: true
storageClass: ""
annotations: {}
accessModes:
- ReadWriteOnce
size: 8Gi
existingClaim: ""
selector: {}
dataSource: {}
mariadb:
image:
registry: (registry-link)
repository: bitnami/mariadb
tag: 11.4.3-debian-12-r0
primary:
persistence:
enabled: true
storageClass: ""
accessModes:
- ReadWriteOnce
size: 8Gi
volumePermissions:
enabled: false
image:
registry: (registry-link)
repository: bitnami/os-shell
tag: 12-debian-12-r27
metrics:
enabled: false
image:
registry: (registry-link)
repository: bitnami/mysqld-exporter
tag: 0.15.1-debian-12-r30
volumePermissions:
enabled: false
image:
registry: (registry-link)
repository: bitnami/os-shell
tag: 12-debian-12-r27
prometheus:
alertmanager:
image:
registry: ""
repository: (registry-link)/alertmanager
tag: 0.25.0-debian-11-r62
server:
persistence:
storageClass: ""
accessModes:
- ReadWriteOnce
size: 8Gi
image:
registry: ""
repository: (registry-link)/prometheus
tag: 2.45.0-debian-11-r0
thanos:
image:
registry: ""
repository: (registry-link)/thanos
tag: 0.31.0-scratch-r8
nodeSelector: {}
tolerations: []
affinity: {}
volumePermissions:
image:
registry: ""
repository: (registry-link)/bitnami-shell
tag: 11-debian-11-r130
rabbitmq:
image:
registry: ""
repository: (registry-link)/rabbitmq
tag: 3.12.2-debian-11-r8
persistence:
storageClass: ""
accessModes:
- ReadWriteOnce
size: 8Gi
volumePermissions:
image:
registry: ""
repository: (registry-link)/os-shell
tag: 11-debian-11-r16
affinity: {}
nodeSelector: {}
tolerations: []
postgresql:
image:
registry: ""
repository: (registry-link)/postgresql
tag: 15.3.0-debian-11-r17
primary:
persistence:
storageClass: ""
accessModes:
- ReadWriteOnce
size: 8Gi
affinity: {}
nodeSelector: {}
tolerations: []
readReplicas: # ignored if architecture != "replication"; default is "standalone"
persistence:
storageClass: ""
accessModes:
- ReadWriteOnce
size: 8Gi
affinity: {}
nodeSelector: {}
tolerations: []
volumePermissions:
image:
registry: ""
repository: (registry-link)/bitnami-shell
tag: 11-debian-11-r130
metrics:
image:
registry: ""
repository: (registry-link)/postgres-exporter
tag: 0.13.1-debian-11-r0
kube-state-metrics:
image:
registry: ""
repository: (registry-link)/kube-state-metrics
tag: 2.9.2-debian-11-r14
affinity: {}
nodeSelector: {}
tolerations: []
ingress:
enabled: true
host:
ingressClassName: nginx
annotations:
nginx.ingress.kubernetes.io/proxy-body-size: 100m
# TLS settings
tls: false
secretName: cfy-secret-name
# These files should be provided by an HTTP/file server
resources:
packages:
agents:
manylinux-x86_64-agent.tar.gz: (artifactory-link)/manylinux-x86_64-agent_7.0.0-ga.tar.gz
manylinux-aarch64-agent.tar.gz: (artifactory-link)/manylinux-aarch64-agent_7.0.0-ga.tar.gz
cloudify-windows-agent.exe: (artifactory-link)/cloudify-windows-agent_7.0.0-ga.exe
metrics_job_operator:
image: (registry-link)/metrics-job-operator:latest
affinity: {}
nodeSelector: {}
tolerations: []
metrics_cron_job:
image: (registry-link)/metrics-cleanup:latest
Note: When using a non-default username/password, make sure the rest_service.config.manager.security.admin_username and .admin_password values match wrc_endpoint_secret.data.username and .password.
Install Conductor Helm application
After configuring the override-values.yaml file, run the following command to install:
helm install wind-river-conductor -f ./override-values.yaml ./wind-river-conductor-24.9.0.tgz
The installation is complete when all pods are running, as shown in the example below.
If the Ingress service is configured correctly, Conductor UI should be available at http://<helm-host-ip>/console
kubectl get pods
NAME READY STATUS RESTARTS AGE
api-service-7c44474464-dcmpp 1/1 Running 0 115m
backup-group-manager-7f9c56d869-tl9zl 1/1 Running 0 115m
composer-backend-574fd64c86-hrtpn 1/1 Running 3 (115m ago) 115m
composer-frontend-77597547dd-q245q 1/1 Running 0 115m
execution-scheduler-545b9f4c4b-wlf84 1/1 Running 0 115m
kube-state-metrics-684b8d79cf-gz9tx 1/1 Running 0 115m
metrics-job-operator-8657b654b8-6pvgq 1/1 Running 0 115m
mgmtworker-0 1/1 Running 2 (113m ago) 115m
nginx-7f846db845-bpnnc 1/1 Running 0 115m
postgresql-0 2/2 Running 0 115m
prometheus-server-7f8c78f68f-ctjnn 1/1 Running 0 115m
rabbitmq-0 1/1 Running 0 115m
rest-api-server-54b75bbf58-7scv7 1/1 Running 0 115m
rest-service-66f5bfd88c-jw942 1/1 Running 0 115m
seaweedfs-filer-0 1/1 Running 0 115m
seaweedfs-master-0 1/1 Running 0 115m
seaweedfs-s3-59f9dddd8f-ww6fj 1/1 Running 1 (114m ago) 115m
seaweedfs-volume-0 1/1 Running 0 115m
stage-backend-f96b49f7b-4zzpv 1/1 Running 3 (115m ago) 115m
stage-frontend-5b44dd7bb5-ssrpx 1/1 Running 0 115m
system-inventory-manager-7567b7c794-k9psk 1/1 Running 0 115m
upgrade-group-manager-7b86d6b5f-8t8hq 1/1 Running 0 115m
upgrade-policy-manager-ffd796596-5gcmn 1/1 Running 0 115m
wrc-secret-operator-7f85dd79c5-hz8rz 1/1 Running 0 115m
wrc-services-mariadb-0 1/1 Running 0 115m
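Rather than polling kubectl get pods manually, you can block until the pods report Ready. A sketch; adjust the namespace and timeout to your environment:

```shell
# Wait up to 10 minutes for every pod in the current namespace to become Ready
kubectl wait --for=condition=Ready pods --all --timeout=600s
```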
Accessing Conductor external CLI
Install the CLI package using the Python package manager, optionally in a virtualenv, following these steps.
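For example, the CLI can be installed into a dedicated virtualenv. This is a sketch; the actual package name is the one distributed by Wind River for your release:

```shell
# Create and activate an isolated environment for the CLI
python3 -m venv conductor-cli
source conductor-cli/bin/activate
# Substitute the CLI package provided by Wind River
pip install <conductor-cli-package>
```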
(cli)> cfy profiles use 10.10.100.50 -u admin -p admin
Attempting to connect to 10.10.100.50 through port 80, using http (SSL mode: False)...
Using manager 10.10.100.50 with port 80
Initializing profile 10.10.100.50...
Initialization completed successfully
It is highly recommended to have more than one manager in a Cloudify cluster
Adding cluster node localhost to local profile manager cluster
Adding cluster node rabbitmq to local profile broker cluster
Profile is up to date with 2 nodes
(cli)> cfy profiles list
Listing all profiles...
Profiles:
+----------------+---------------+------------------+----------------+----------+---------+----------+--------------+-----------+---------------+------------------+
| name | manager_ip | manager_username | manager_tenant | ssh_user | ssh_key | ssh_port | kerberos_env | rest_port | rest_protocol | rest_certificate |
+----------------+---------------+------------------+----------------+----------+---------+----------+--------------+-----------+---------------+------------------+
| *10.10.100.50 | 10.10.100.50 | admin | | | | 22 | False | 80 | http | |
+----------------+---------------+------------------+----------------+----------+---------+----------+--------------+-----------+---------------+------------------+
(cli)> cfy status
Retrieving manager services status... [ip=10.10.100.50]
Services:
+--------------------------------+--------+
| service | status |
+--------------------------------+--------+
| Webserver | Active |
| Management Worker | Active |
| Manager Rest-Service | Active |
| Cloudify API | Active |
| Cloudify Execution Scheduler | Active |
| PostgreSQL | Active |
| RabbitMQ | Active |
| Monitoring Service | Active |
| Kubernetes State Metrics | Active |
| SeaweedFS Master | Active |
| Cloudify Console Backend | Active |
| Cloudify Console Frontend | Active |
| Cloudify Composer Backend | Active |
| Cloudify Composer Frontend | Active |
+--------------------------------+--------+
Uninstalling the Helm release
To uninstall the Conductor Helm release, use the following command:
helm uninstall wind-river-conductor
All pods from the release will be deleted.
Note
After uninstalling the Helm release, PersistentVolumeClaims (PVCs) and PersistentVolumes (PVs) created by the SeaweedFS, RabbitMQ, PostgreSQL, and WRC mgmtworker StatefulSets will not be deleted.
> kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
data-0-seaweedfs-volume-0 Bound pvc-66bc33d2-722f-4aed-a670-f5f8884e9c38 8Gi RWO local-path <unset> 117m
data-postgresql-0 Bound pvc-5aea3f6e-0eea-48d9-a2ae-521849214a6a 8Gi RWO local-path <unset> 117m
data-rabbitmq-0 Bound pvc-2a4365a6-223e-44f2-b2e2-57caa3eaf752 8Gi RWO local-path <unset> 117m
data-seaweedfs-master-0 Bound pvc-66df504f-b80d-4161-9374-919fe3bd5def 8Gi RWO local-path <unset> 117m
data-wrc-services-mariadb-0 Bound pvc-cc8c7984-64ba-4dc1-9204-e79bc42691e9 8Gi RWO local-path <unset> 117m
mgmtworker-pvc-mgmtworker-0 Bound pvc-c291f4b0-4a62-43db-9585-c2bacd1b411d 10Gi RWO local-path <unset> 117m
prometheus-server Bound pvc-4ad19280-7067-45fb-8235-e83f45640287 8Gi RWO local-path <unset> 117m
Bitnami’s charts and WRC’s mgmtworker include StatefulSets that use volumeClaimTemplates to generate a new PersistentVolumeClaim (PVC) for each replica created, and Helm does not track those PVCs. Therefore, when uninstalling a chart release with these characteristics, the PVCs (and associated PersistentVolumes) are not removed from the cluster. This is a known issue; see https://github.com/helm/helm/issues/5156 and https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#persistence-volumes-pvs-retained-from-previous-releases
These PVCs/PVs can be reused if the Helm deployment is performed again, or they can be cleaned up manually as a post-removal step if Conductor is to be removed permanently or a full cleanup is necessary.
To delete these volumes, use: kubectl -n <pvc_namespace> delete pvc <pvc_name>
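For example, using the PVC names from the listing above (default namespace; adjust the names to match your cluster):

```shell
# Delete the PVCs left behind by the uninstalled release
kubectl delete pvc data-0-seaweedfs-volume-0 data-postgresql-0 data-rabbitmq-0 \
  data-seaweedfs-master-0 data-wrc-services-mariadb-0 \
  mgmtworker-pvc-mgmtworker-0 prometheus-server
```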
Using IPv6
When deploying on an IPv6-only cluster, additional settings must be applied to make RabbitMQ compatible. See values-ipv6.yaml for the changed parameters, or use that values file directly when installing the chart.
Note that when using an IPv6 cluster, the ingress controller must support IPv6 as well.
rabbitmq:
initContainers:
- name: ipv6-init
image: "(registry-link)/busybox:1.33.1"
imagePullPolicy: IfNotPresent
volumeMounts:
- name: ipv6-cfg
mountPath: /ipv6
command: ['sh', '-c', 'echo "{inet6, true}." > /ipv6/erl_inetrc']
extraVolumes:
- name: ipv6-cfg
emptyDir: {}
extraVolumeMounts:
- name: ipv6-cfg
mountPath: /ipv6
extraEnvVars:
- name: RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS
value: "-kernel inetrc '/ipv6/erl_inetrc' -proto_dist inet6_tcp"
- name: RABBITMQ_CTL_ERL_ARGS
value: "-proto_dist inet6_tcp"
extraConfiguration: |-
management.ssl.ip = ::
management.ssl.port = 15671
management.ssl.cacertfile = /opt/bitnami/rabbitmq/certs/ca_certificate.pem
management.ssl.certfile = /opt/bitnami/rabbitmq/certs/server_certificate.pem
management.ssl.keyfile = /opt/bitnami/rabbitmq/certs/server_key.pem
rest_service:
bind_host: "[::]"
api_service:
bind_host: "[::]"
rest_api_server:
bind_host: "::"
Scaling
There are several components you might want to scale up:
- RabbitMQ: the RabbitMQ chart provides out-of-the-box clustering, and the number of replicas can be controlled via the rabbitmq.replicaCount setting.
- PostgreSQL: the PostgreSQL chart allows primary/replica replication (an active/passive replication scheme with read replicas). To enable it, set postgresql.architecture to replication. When doing so, point the DB clients at the primary pod by setting db.host to postgresql-primary (read replicas are unused by Conductor).
- Mgmtworker: set mgmtworker.replicas to a number higher than 1 to start multiple mgmtworker instances, allowing better operation concurrency.
For an example of a scaled-up deployment, see the scaled-up-values.yaml example values file.
# This is an example values file showing how to scale up several services
# rabbitmq and mgmtworker can be scaled by just increasing the pod count
rabbitmq:
replicaCount: 2
mgmtworker:
replicas: 2
# with the bitnami postgresql chart, using architecture=replication will
# start a primary and a read replica
postgresql:
architecture: replication
readReplicas:
replicaCount: 1
# all the db-using services need to be instructed to connect to the primary pod
db:
host: "postgresql-primary"
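Scaling values can be applied to an existing release with helm upgrade. A sketch, assuming the release name and chart file from the installation section above:

```shell
# Apply the scaled-up values on top of the base overrides
helm upgrade wind-river-conductor -f ./override-values.yaml \
  -f ./scaled-up-values.yaml ./wind-river-conductor-24.9.0.tgz
```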
Configuration parameters
Key | Type | Default | Description |
---|---|---|---|
api_service | object | object |
Specify configuration for api_service component |
api_service.affinity | object |
{}
|
Optionally specify affinity rules for pod scheduling |
api_service.bind_host | string |
"[::]"
|
Optionally specify the bind host address for the service. Change to 0.0.0.0 for IPv4-only Kubernetes clusters if the default fails. `"[::]"` binds to IPv6 if available (in the case of an IPv6 cluster) or falls back to binding to IPv4. IMPORTANT: the IPv6-to-IPv4 fallback might not work on all container base systems or on misconfigured Kubernetes clusters and hosts. |
api_service.containerSecurityContext | object |
{
"allowPrivilegeEscalation": false,
"capabilities": {
"drop": [
"ALL"
]
},
"enabled": true,
"runAsNonRoot": true,
"seccompProfile": {
"type": "RuntimeDefault"
}
}
|
Optionally specify security context settings for the container |
api_service.containerSecurityContext.allowPrivilegeEscalation | bool |
false
|
Optionally specify if the container is allowed to gain additional privileges |
api_service.containerSecurityContext.capabilities | object |
{
"drop": [
"ALL"
]
}
|
Optionally specify the Linux capabilities |
api_service.containerSecurityContext.capabilities.drop | list |
[
"ALL"
]
|
Optionally specify the Linux capabilities to drop for the container |
api_service.containerSecurityContext.enabled | bool |
true
|
Optionally specify whether to apply security settings to the container |
api_service.containerSecurityContext.runAsNonRoot | bool |
true
|
Optionally specify if the container must run as a non-root user |
api_service.containerSecurityContext.seccompProfile | object |
{
"type": "RuntimeDefault"
}
|
Optionally specify the seccomp profile |
api_service.containerSecurityContext.seccompProfile.type | string |
"RuntimeDefault"
|
Optionally specify the seccomp profile type to be used for the container |
api_service.deploymentAnnotations | object |
{}
|
Optionally specify annotations to add to the deployment |
api_service.deploymentLabels | object |
{}
|
Optionally specify labels to add to the deployment |
api_service.hostAliases | object |
{}
|
Optionally specify custom host-to-IP mappings for the pod |
api_service.image | string |
"(registry-link)/cloudify-manager-apiservice:latest"
|
Specify the Docker image for the container |
api_service.imagePullPolicy | string |
"IfNotPresent"
|
Optionally specify when Kubernetes should pull the image (Always, IfNotPresent, or Never) |
api_service.lifecycle | object |
{}
|
Optionally specify the lifecycle hooks for the container |
api_service.livenessProbe | object |
{}
|
Optionally specify the liveness probe configuration |
api_service.nodeSelector | object |
{}
|
Optionally specify node selection constraints for pod scheduling |
api_service.podAnnotations | object |
{}
|
Optionally specify annotations to add to the pods |
api_service.podLabels | object |
{}
|
Optionally specify labels to add to the pods |
api_service.podSecurityContext | object |
{}
|
Optionally specify the security context for the pod |
api_service.port | int |
8101
|
Optionally specify the port number for the container to listen on |
api_service.priorityClassName | object |
{}
|
Optionally specify the priority class name for the pod |
api_service.probes | object |
{
"liveness": {
"enabled": true,
"failureThreshold": 3,
"initialDelaySeconds": 20,
"periodSeconds": 20,
"successThreshold": 1,
"timeoutSeconds": 10
}
}
|
Optionally specify probe configurations for the container |
api_service.probes.liveness | object |
{
"enabled": true,
"failureThreshold": 3,
"initialDelaySeconds": 20,
"periodSeconds": 20,
"successThreshold": 1,
"timeoutSeconds": 10
}
|
Optionally specify liveness probe settings for the container |
api_service.probes.liveness.enabled | bool |
true
|
Optionally specify whether the liveness probe is enabled to check container health |
api_service.probes.liveness.failureThreshold | int |
3
|
Optionally specify the number of failed liveness checks before restarting the container |
api_service.probes.liveness.initialDelaySeconds | int |
20
|
Optionally specify the initial delay before the liveness probe starts |
api_service.probes.liveness.periodSeconds | int |
20
|
Optionally specify the interval between liveness probe checks |
api_service.probes.liveness.successThreshold | int |
1
|
Optionally specify the number of successful liveness checks before considering the container healthy |
api_service.probes.liveness.timeoutSeconds | int |
10
|
Optionally specify the timeout for each liveness probe |
api_service.readinessProbe | object |
{}
|
Optionally specify the readiness probe configuration |
api_service.replicas | int |
1
|
Optionally specify the number of pod replicas to run |
api_service.resources | object |
{}
|
Optionally specify resource requests and limits for the container |
api_service.schedulerName | object |
{}
|
Optionally specify the scheduler to use for the pod |
api_service.serviceAccountName | object |
{}
|
Optionally specify the service account name to use |
api_service.serviceAnnotations | object |
{}
|
Optionally specify annotations to add to the service |
api_service.serviceLabels | object |
{}
|
Optionally specify labels to add to the service |
api_service.startupProbe | object |
{}
|
Optionally specify the startup probe configuration |
api_service.terminationGracePeriodSeconds | object |
{}
|
Optionally specify the grace period for pod termination |
api_service.tolerations | list |
[]
|
Optionally specify tolerations for pod scheduling on tainted nodes |
api_service.topologySpreadConstraints | object |
{}
|
Optionally specify topology spread constraints for pod distribution |
api_service.updateStrategy | object |
{}
|
Optionally specify the update strategy for the deployment |
backup_group_manager | object | object |
Specify configuration for backup_group_manager component |
backup_group_manager.affinity | object |
{}
|
Optionally specify the affinity rules for scheduling the pods. Defines rules for pod placement based on node characteristics |
backup_group_manager.containerSecurityContext | object |
{
"allowPrivilegeEscalation": false,
"capabilities": {
"drop": [
"ALL"
]
},
"enabled": true,
"runAsNonRoot": true,
"seccompProfile": {
"type": "RuntimeDefault"
}
}
|
Optionally specify the security context for the container. Defines security-related settings for the container |
backup_group_manager.containerSecurityContext.allowPrivilegeEscalation | bool |
false
|
Optionally specify if privilege escalation is allowed. If false, privilege escalation is not allowed |
backup_group_manager.containerSecurityContext.capabilities | object |
{
"drop": [
"ALL"
]
}
|
Optionally specify the Linux capabilities |
backup_group_manager.containerSecurityContext.capabilities.drop | list |
[
"ALL"
]
|
Optionally specify the Linux capabilities to drop for the container |
backup_group_manager.containerSecurityContext.enabled | bool |
true
|
Optionally specify if the security context is enabled. If true, security settings will be applied |
backup_group_manager.containerSecurityContext.runAsNonRoot | bool |
true
|
Optionally specify if the container should run as a non-root user. If true, the container will not run as the root user |
backup_group_manager.containerSecurityContext.seccompProfile | object |
{
"type": "RuntimeDefault"
}
|
Optionally specify the seccomp profile |
backup_group_manager.containerSecurityContext.seccompProfile.type | string |
"RuntimeDefault"
|
Optionally specify the seccomp profile type to be used for the container |
backup_group_manager.deploymentAnnotations | object |
{}
|
Optionally specify annotations for the deployment. Key-value pairs that can be used to attach metadata to the deployment |
backup_group_manager.deploymentLabels | object |
{}
|
Optionally specify labels for the deployment. Key-value pairs that can be used to categorize and select deployments |
backup_group_manager.hostAliases | object |
{}
|
Optionally specify host aliases for the pod. Defines additional hostnames and IP addresses to be added to the pod’s /etc/hosts file |
backup_group_manager.image | string |
"(registry-link)/backup-group-manager:latest"
|
Specify the container image for the deployment. This is the image that will be used to create the container |
backup_group_manager.imagePullPolicy | string |
"IfNotPresent"
|
Optionally specify the image pull policy for the deployment. Determines when the container image should be pulled |
backup_group_manager.lifecycle | object |
{}
|
Optionally specify lifecycle hooks for the container. Defines actions to be taken at specific points in the container lifecycle |
backup_group_manager.livenessProbe | object |
{}
|
Optionally specify liveness probes for the container. Defines the probe to check if the container is alive |
backup_group_manager.nodeSelector | object |
{}
|
Optionally specify node selectors for scheduling the pods. Defines node labels that the pods must match |
backup_group_manager.podAnnotations | object |
{}
|
Optionally specify annotations for the pod. Key-value pairs that can be used to attach metadata to the pod |
backup_group_manager.podLabels | object |
{}
|
Optionally specify labels for the pod. Key-value pairs that can be used to categorize and select pods |
backup_group_manager.podSecurityContext | object |
{}
|
Optionally specify the pod security context. Defines security-related settings for the pod |
backup_group_manager.port | int |
8080
|
Optionally specify the port that the container will listen on |
backup_group_manager.priorityClassName | object |
{}
|
Optionally specify the priority class for the pod. Defines the priority of the pod relative to other pods |
backup_group_manager.readinessProbe | object |
{}
|
Optionally specify readiness probes for the container. Defines the probe to check if the container is ready to accept traffic |
backup_group_manager.replicas | int |
1
|
Optionally specify the number of replicas for the deployment. Defines how many instances of the application will be run |
backup_group_manager.resources | object |
{
"limits": {
"memory": "512Mi"
},
"requests": {
"memory": "256Mi"
}
}
|
Optionally specify the resource requests and limits for the container. Defines the amount of CPU and memory the container is guaranteed and allowed to use |
backup_group_manager.resources.limits | object |
{
"memory": "512Mi"
}
|
Optionally specify the memory limits for the container. Defines the maximum amount of memory the container is allowed to use |
backup_group_manager.resources.requests | object |
{
"memory": "256Mi"
}
|
Optionally specify the memory requests for the container. Defines the minimum amount of memory the container needs |
backup_group_manager.schedulerName | object |
{}
|
Optionally specify the scheduler name for the pod. Defines which scheduler will be used to schedule the pod |
backup_group_manager.serviceAccountName | object |
{}
|
Optionally specify the name of the service account for the deployment. Defines which service account will be used by the pods |
backup_group_manager.startupProbe | object |
{}
|
Optionally specify startup probes for the container. Defines the probe to check if the container has started successfully |
backup_group_manager.terminationGracePeriodSeconds | object |
{}
|
Optionally specify the termination grace period for the pod. Defines the time to wait before forcefully terminating the pod |
backup_group_manager.tolerations | list |
[]
|
Optionally specify tolerations for scheduling the pods. Defines how the pods tolerate node taints |
backup_group_manager.topologySpreadConstraints | object |
{}
|
Optionally specify topology spread constraints for the pod. Defines constraints for spreading pods across nodes or other topological domains |
backup_group_manager.updateStrategy | object |
{}
|
Optionally specify the update strategy for the deployment. Defines how updates to the deployment are applied |
certs.ca_cert | string |
""
|
Optionally specify the certificate authority certificate. When customizing it is mandatory to set the 'ca_key' as well |
certs.ca_key | string |
""
|
Optionally specify the Certificate Authority private key. When customizing it is mandatory to set the 'ca_cert' as well |
certs.external_cert | string |
""
|
Optionally specify the certificate for external communication. When customizing it is mandatory to set the 'external_key' as well |
certs.external_key | string |
""
|
Optionally specify the certificate private key for external communication. When customizing it is mandatory to set the 'external_cert' as well |
certs.internal_cert | string |
""
|
Optionally specify the certificate for internal communication. When customizing it is mandatory to set the 'internal_key' as well |
certs.internal_key | string |
""
|
Optionally specify the certificate private key for internal communication. When customizing it is mandatory to set the 'internal_cert' as well |
certs.rabbitmq_cert | string |
""
|
Optionally specify the certificate for communication with rabbitmq. When customizing it is mandatory to set the 'rabbitmq_key' as well |
certs.rabbitmq_key | string |
""
|
Optionally specify the certificate for communication with rabbitmq. When customizing it is mandatory to set the 'rabbitmq_cert' as well |
composer_backend | object | object |
Specify configuration for composer_backend component |
composer_backend.affinity | object |
{}
|
Optionally specify affinity rules for pod scheduling |
composer_backend.containerSecurityContext | object |
{
"allowPrivilegeEscalation": false,
"capabilities": {
"drop": [
"ALL"
]
},
"enabled": true,
"runAsNonRoot": true,
"seccompProfile": {
"type": "RuntimeDefault"
}
}
|
Optionally specify security context settings for the container |
composer_backend.containerSecurityContext.allowPrivilegeEscalation | bool |
false
|
Optionally specify if the container is allowed to gain additional privileges |
composer_backend.containerSecurityContext.capabilities | object |
{
"drop": [
"ALL"
]
}
|
Optionally specify the Linux capabilities |
composer_backend.containerSecurityContext.capabilities.drop | list |
[
"ALL"
]
|
Optionally specify the Linux capabilities to drop for the container |
composer_backend.containerSecurityContext.enabled | bool |
true
|
Optionally specify whether to apply security settings to the container |
composer_backend.containerSecurityContext.runAsNonRoot | bool |
true
|
Optionally specify if the container must run as a non-root user |
composer_backend.containerSecurityContext.seccompProfile | object |
{
"type": "RuntimeDefault"
}
|
Optionally specify the seccomp profile |
composer_backend.containerSecurityContext.seccompProfile.type | string |
"RuntimeDefault"
|
Optionally specify the seccomp profile type to be used for the container |
composer_backend.deploymentAnnotations | object |
{}
|
Optionally specify annotations to add to the deployment |
composer_backend.deploymentLabels | object |
{}
|
Optionally specify labels to add to the deployment |
composer_backend.hostAliases | object |
{}
|
Optionally specify custom host-to-IP mappings for the pod |
composer_backend.image | string |
"(registry-link)/cloudify-manager-composer-backend:latest"
|
Specify the Docker image for the container |
composer_backend.imagePullPolicy | string |
"IfNotPresent"
|
Optionally specify when Kubernetes should pull the image (Always, IfNotPresent, or Never) |
composer_backend.lifecycle | object |
{}
|
Optionally specify the lifecycle hooks for the container |
composer_backend.nodeSelector | object |
{}
|
Optionally specify node selection constraints for pod scheduling |
composer_backend.podAnnotations | object |
{}
|
Optionally specify annotations to add to the pods |
composer_backend.podLabels | object |
{}
|
Optionally specify labels to add to the pods |
composer_backend.podSecurityContext | object |
{}
|
Optionally specify the security context for the pod |
composer_backend.port | int |
3000
|
Optionally specify the port number for the container to listen on |
composer_backend.priorityClassName | object |
{}
|
Optionally specify the priority class name for the pod |
composer_backend.probes | object |
{
"liveness": {
"enabled": false,
"failureThreshold": 3,
"initialDelaySeconds": 20,
"periodSeconds": 20,
"successThreshold": 1,
"timeoutSeconds": 10
}
}
|
Optionally specify probe configurations for the container |
composer_backend.probes.liveness | object |
{
"enabled": false,
"failureThreshold": 3,
"initialDelaySeconds": 20,
"periodSeconds": 20,
"successThreshold": 1,
"timeoutSeconds": 10
}
|
Optionally specify liveness probe settings for the container |
composer_backend.probes.liveness.enabled | bool |
false
|
Optionally specify whether the liveness probe is enabled to check container health |
composer_backend.probes.liveness.failureThreshold | int |
3
|
Optionally specify the number of failed liveness checks before restarting the container |
composer_backend.probes.liveness.initialDelaySeconds | int |
20
|
Optionally specify the initial delay before the liveness probe starts |
composer_backend.probes.liveness.periodSeconds | int |
20
|
Optionally specify the interval between liveness probe checks |
composer_backend.probes.liveness.successThreshold | int |
1
|
Optionally specify the number of successful liveness checks before considering the container healthy |
composer_backend.probes.liveness.timeoutSeconds | int |
10
|
Optionally specify the timeout for each liveness probe |
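As an example, the `composer_backend` liveness probe, disabled by default above, can be enabled with an override fragment such as the following (the timing values shown are illustrative, not recommendations):

```yaml
# override-values.yaml
composer_backend:
  probes:
    liveness:
      enabled: true            # turn the probe on
      initialDelaySeconds: 30  # wait before the first check
      periodSeconds: 20        # interval between checks
      timeoutSeconds: 10       # per-check timeout
      failureThreshold: 3      # restarts after 3 consecutive failures
      successThreshold: 1      # one success marks the container healthy
```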
composer_backend.readinessProbe | object |
{}
|
Optionally specify the readiness probe configuration |
composer_backend.replicas | int |
1
|
Optionally specify the number of pod replicas to run |
composer_backend.resources | object |
{}
|
Optionally specify resource requests and limits for the container |
composer_backend.schedulerName | object |
{}
|
Optionally specify the scheduler to use for the pod |
composer_backend.service | object |
{
"clusterIP": "",
"externalTrafficPolicy": "",
"loadBalancerIP": "",
"loadBalancerSourceRanges": "",
"sessionAffinity": "",
"sessionAffinityConfig": {},
"type": "ClusterIP"
}
|
Optionally specify the service settings |
composer_backend.service.clusterIP | string |
""
|
Optionally specify the cluster IP address for the service |
composer_backend.service.externalTrafficPolicy | string |
""
|
Optionally specify the external traffic policy for the service |
composer_backend.service.loadBalancerIP | string |
""
|
Optionally specify a static IP address for the load balancer |
composer_backend.service.loadBalancerSourceRanges | string |
""
|
Optionally specify allowed source ranges for the load balancer |
composer_backend.service.sessionAffinity | string |
""
|
Optionally specify the session affinity configuration for the service |
composer_backend.service.sessionAffinityConfig | object |
{}
|
Optionally specify additional session affinity settings |
composer_backend.service.type | string |
"ClusterIP"
|
Optionally specify the type of service (e.g., ClusterIP, NodePort, LoadBalancer) |
composer_backend.serviceAccountName | object |
{}
|
Optionally specify the service account name to use |
composer_backend.serviceAnnotations | object |
{}
|
Optionally specify annotations to add to the service |
composer_backend.serviceLabels | object |
{}
|
Optionally specify labels to add to the service |
composer_backend.startupProbe | object |
{}
|
Optionally specify the startup probe configuration |
composer_backend.terminationGracePeriodSeconds | object |
{}
|
Optionally specify the grace period for pod termination |
composer_backend.tolerations | list |
[]
|
Optionally specify tolerations for pod scheduling on tainted nodes |
composer_backend.topologySpreadConstraints | object |
{}
|
Optionally specify topology spread constraints for pod distribution |
composer_backend.updateStrategy | object |
{}
|
Optionally specify the update strategy for the deployment |
composer_frontend | object | object |
Specify configuration for composer_frontend component |
composer_frontend.affinity | object |
{}
|
Optionally specify affinity rules for pod scheduling |
composer_frontend.containerSecurityContext | object |
{
"allowPrivilegeEscalation": false,
"capabilities": {
"drop": [
"ALL"
]
},
"enabled": true,
"readOnlyRootFilesystem": true,
"runAsNonRoot": true,
"seccompProfile": {
"type": "RuntimeDefault"
}
}
|
Optionally specify security context settings for the container |
composer_frontend.containerSecurityContext.allowPrivilegeEscalation | bool |
false
|
Optionally specify if the container is allowed to gain additional privileges |
composer_frontend.containerSecurityContext.capabilities | object |
{
"drop": [
"ALL"
]
}
|
Optionally specify the Linux capabilities |
composer_frontend.containerSecurityContext.capabilities.drop | list |
[
"ALL"
]
|
Optionally specify the Linux capabilities to drop for the container |
composer_frontend.containerSecurityContext.enabled | bool |
true
|
Optionally specify whether to apply security settings to the container |
composer_frontend.containerSecurityContext.readOnlyRootFilesystem | bool |
true
|
Optionally specify if the container's root filesystem should be read-only |
composer_frontend.containerSecurityContext.runAsNonRoot | bool |
true
|
Optionally specify if the container must run as a non-root user |
composer_frontend.containerSecurityContext.seccompProfile | object |
{
"type": "RuntimeDefault"
}
|
Optionally specify the seccomp profile |
composer_frontend.containerSecurityContext.seccompProfile.type | string |
"RuntimeDefault"
|
Optionally specify the seccomp profile type to be used for the container |
composer_frontend.deploymentAnnotations | object |
{}
|
Optionally specify annotations to add to the deployment |
composer_frontend.deploymentLabels | object |
{}
|
Optionally specify labels to add to the deployment |
composer_frontend.hostAliases | object |
{}
|
Optionally specify custom host-to-IP mappings for the pod |
composer_frontend.image | string |
"(registry-link)/cloudify-manager-composer-frontend:latest"
|
Specify the Docker image for the container |
composer_frontend.imagePullPolicy | string |
"IfNotPresent"
|
Optionally specify when Kubernetes should pull the image (Always, IfNotPresent, or Never) |
composer_frontend.lifecycle | object |
{}
|
Optionally specify the lifecycle hooks for the container |
composer_frontend.nodeSelector | object |
{}
|
Optionally specify node selection constraints for pod scheduling |
composer_frontend.podAnnotations | object |
{}
|
Optionally specify annotations to add to the pods |
composer_frontend.podLabels | object |
{}
|
Optionally specify labels to add to the pods |
composer_frontend.podSecurityContext | object |
{}
|
Optionally specify the security context for the pod |
composer_frontend.port | int |
8188
|
Optionally specify the port number for the container to listen on |
composer_frontend.priorityClassName | object |
{}
|
Optionally specify the priority class name for the pod |
composer_frontend.probes | object |
{
"liveness": {
"enabled": true,
"failureThreshold": 3,
"initialDelaySeconds": 20,
"periodSeconds": 20,
"successThreshold": 1,
"timeoutSeconds": 10
}
}
|
Optionally specify probe configurations for the container |
composer_frontend.probes.liveness | object |
{
"enabled": true,
"failureThreshold": 3,
"initialDelaySeconds": 20,
"periodSeconds": 20,
"successThreshold": 1,
"timeoutSeconds": 10
}
|
Optionally specify liveness probe settings for the container |
composer_frontend.probes.liveness.enabled | bool |
true
|
Optionally specify whether the liveness probe is enabled to check container health |
composer_frontend.probes.liveness.failureThreshold | int |
3
|
Optionally specify the number of failed liveness checks before restarting the container |
composer_frontend.probes.liveness.initialDelaySeconds | int |
20
|
Optionally specify the initial delay before the liveness probe starts |
composer_frontend.probes.liveness.periodSeconds | int |
20
|
Optionally specify the interval between liveness probe checks |
composer_frontend.probes.liveness.successThreshold | int |
1
|
Optionally specify the number of successful liveness checks before considering the container healthy |
composer_frontend.probes.liveness.timeoutSeconds | int |
10
|
Optionally specify the timeout for each liveness probe |
composer_frontend.readinessProbe | object |
{}
|
Optionally specify the readiness probe configuration |
composer_frontend.replicas | int |
1
|
Optionally specify the number of pod replicas to run |
composer_frontend.resources | object |
{}
|
Optionally specify resource requests and limits for the container |
composer_frontend.schedulerName | object |
{}
|
Optionally specify the scheduler to use for the pod |
composer_frontend.service | object |
{
"clusterIP": "",
"externalTrafficPolicy": "",
"loadBalancerIP": "",
"loadBalancerSourceRanges": "",
"sessionAffinity": "",
"sessionAffinityConfig": {},
"type": "ClusterIP"
}
|
Optionally specify the service settings |
composer_frontend.service.clusterIP | string |
""
|
Optionally specify the cluster IP address for the service |
composer_frontend.service.externalTrafficPolicy | string |
""
|
Optionally specify the external traffic policy for the service |
composer_frontend.service.loadBalancerIP | string |
""
|
Optionally specify a static IP address for the load balancer |
composer_frontend.service.loadBalancerSourceRanges | string |
""
|
Optionally specify allowed source ranges for the load balancer |
composer_frontend.service.sessionAffinity | string |
""
|
Optionally specify the session affinity configuration for the service |
composer_frontend.service.sessionAffinityConfig | object |
{}
|
Optionally specify additional session affinity settings |
composer_frontend.service.type | string |
"ClusterIP"
|
Optionally specify the type of service (e.g., ClusterIP, NodePort, LoadBalancer) |
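To expose the `composer_frontend` service outside the cluster without an ingress, its type can be changed from the default `ClusterIP`, for example (a sketch; `externalTrafficPolicy` only applies to NodePort and LoadBalancer services):

```yaml
# override-values.yaml
composer_frontend:
  service:
    type: NodePort              # reachable on every node's IP at an allocated port
    externalTrafficPolicy: Local  # preserve the client source IP
```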
composer_frontend.serviceAccountName | object |
{}
|
Optionally specify the service account name to use |
composer_frontend.serviceAnnotations | object |
{}
|
Optionally specify annotations to add to the service |
composer_frontend.serviceLabels | object |
{}
|
Optionally specify labels to add to the service |
composer_frontend.startupProbe | object |
{}
|
Optionally specify the startup probe configuration |
composer_frontend.terminationGracePeriodSeconds | object |
{}
|
Optionally specify the grace period for pod termination |
composer_frontend.tolerations | list |
[]
|
Optionally specify tolerations for pod scheduling on tainted nodes |
composer_frontend.topologySpreadConstraints | object |
{}
|
Optionally specify topology spread constraints for pod distribution |
composer_frontend.updateStrategy | object |
{}
|
Optionally specify the update strategy for the deployment |
db | object | object |
Specify configuration for db component |
db.dbName | string |
"cloudify_db"
|
Optionally specify the name of the database to use |
db.host | string |
"postgresql"
|
Optionally specify the hostname or IP address of the database server |
db.k8sSecret | object |
{
"key": "password",
"name": "cloudify-db-creds"
}
|
Optionally specify a Kubernetes Secret to use for the database credentials |
db.k8sSecret.key | string |
"password"
|
Optionally specify the key in the Kubernetes Secret that holds the password value |
db.k8sSecret.name | string |
"cloudify-db-creds"
|
Optionally specify the name of the Kubernetes Secret that contains the database credentials |
db.password | string |
"cloudify"
|
Optionally specify the password for accessing the database |
db.user | string |
"cloudify"
|
Optionally specify the username for accessing the database |
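Rather than setting `db.password` in plain text, the credentials can be taken from a Kubernetes Secret via `db.k8sSecret`. A minimal fragment using the default names from the table above (the secret itself must be created separately; the password placeholder is illustrative):

```yaml
# override-values.yaml
db:
  host: postgresql
  dbName: cloudify_db
  user: cloudify
  k8sSecret:
    name: cloudify-db-creds   # Secret holding the password
    key: password             # key inside the Secret
```

The referenced secret can be created beforehand, e.g. with `kubectl create secret generic cloudify-db-creds --from-literal=password=<db-password>`.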
execution_scheduler | object | object |
Specify configuration for execution_scheduler component |
execution_scheduler.affinity | object |
{}
|
Optionally specify affinity rules for pod scheduling |
execution_scheduler.containerSecurityContext | object |
{
"allowPrivilegeEscalation": false,
"capabilities": {
"drop": [
"ALL"
]
},
"enabled": true,
"runAsNonRoot": true,
"seccompProfile": {
"type": "RuntimeDefault"
}
}
|
Optionally specify security context settings for the container |
execution_scheduler.containerSecurityContext.allowPrivilegeEscalation | bool |
false
|
Optionally specify if the container is allowed to gain additional privileges |
execution_scheduler.containerSecurityContext.capabilities | object |
{
"drop": [
"ALL"
]
}
|
Optionally specify the Linux capabilities |
execution_scheduler.containerSecurityContext.capabilities.drop | list |
[
"ALL"
]
|
Optionally specify the Linux capabilities to drop for the container |
execution_scheduler.containerSecurityContext.enabled | bool |
true
|
Optionally specify whether to apply security settings to the container |
execution_scheduler.containerSecurityContext.runAsNonRoot | bool |
true
|
Optionally specify if the container must run as a non-root user |
execution_scheduler.containerSecurityContext.seccompProfile | object |
{
"type": "RuntimeDefault"
}
|
Optionally specify the seccomp profile |
execution_scheduler.containerSecurityContext.seccompProfile.type | string |
"RuntimeDefault"
|
Optionally specify the seccomp profile type to be used for the container |
execution_scheduler.deploymentAnnotations | object |
{}
|
Optionally specify annotations to add to the deployment |
execution_scheduler.deploymentLabels | object |
{}
|
Optionally specify labels to add to the deployment |
execution_scheduler.hostAliases | object |
{}
|
Optionally specify custom host-to-IP mappings for the pod |
execution_scheduler.image | string |
"(registry-link)/cloudify-manager-execution-scheduler:latest"
|
Specify the Docker image for the container |
execution_scheduler.imagePullPolicy | string |
"IfNotPresent"
|
Optionally specify when Kubernetes should pull the image (Always, IfNotPresent, or Never) |
execution_scheduler.lifecycle | object |
{}
|
Optionally specify the lifecycle hooks for the container |
execution_scheduler.livenessProbe | object |
{}
|
Optionally specify the liveness probe configuration |
execution_scheduler.nodeSelector | object |
{}
|
Optionally specify node selection constraints for pod scheduling |
execution_scheduler.podAnnotations | object |
{}
|
Optionally specify annotations to add to the pods |
execution_scheduler.podLabels | object |
{}
|
Optionally specify labels to add to the pods |
execution_scheduler.podSecurityContext | object |
{}
|
Optionally specify the security context for the pod |
execution_scheduler.priorityClassName | object |
{}
|
Optionally specify the priority class name for the pod |
execution_scheduler.readinessProbe | object |
{}
|
Optionally specify the readiness probe configuration |
execution_scheduler.replicas | int |
1
|
Optionally specify the number of pod replicas to run |
execution_scheduler.resources | object |
{}
|
Optionally specify resource requests and limits for the container |
execution_scheduler.schedulerName | object |
{}
|
Optionally specify the scheduler to use for the pod |
execution_scheduler.serviceAccountName | object |
{}
|
Optionally specify the service account name to use |
execution_scheduler.startupProbe | object |
{}
|
Optionally specify the startup probe configuration |
execution_scheduler.terminationGracePeriodSeconds | object |
{}
|
Optionally specify the grace period for pod termination |
execution_scheduler.tolerations | list |
[]
|
Optionally specify tolerations for pod scheduling on tainted nodes |
execution_scheduler.topologySpreadConstraints | object |
{}
|
Optionally specify topology spread constraints for pod distribution |
execution_scheduler.updateStrategy | object |
{}
|
Optionally specify the update strategy for the deployment |
fullnameOverride | string |
""
|
Optionally specify a string to fully override name template |
imagePullSecrets | list |
[]
|
Optionally specify an array of secrets. Secrets must be manually created in the namespace. Reference: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ |
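For instance, to pull images from the licensed Wind River registry, an image pull secret can be referenced like this (`wr-registry-creds` is an assumed name; create the secret in the release namespace first, e.g. with `kubectl create secret docker-registry`):

```yaml
# override-values.yaml
imagePullSecrets:
  - name: wr-registry-creds   # assumed secret name; must exist in the namespace
```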
ingress | object | object |
Kubernetes ingress configuration for Conductor |
ingress.annotations | object |
{
"nginx.ingress.kubernetes.io/proxy-body-size": "100m"
}
|
Specify metadata configuration for ingress |
ingress.annotations."nginx.ingress.kubernetes.io/proxy-body-size" | string |
"100m"
|
Specify maximum allowed size of the client request body in an NGINX Ingress. Value examples: 100k (kilobytes), 100m (megabytes) and 1g (gigabyte) |
ingress.enabled | bool |
true
|
Specify whether the ingress should be enabled |
ingress.host | string |
null
|
Specify the hostname used to expose Conductor through the ingress; leave unset to access it directly by IP address |
ingress.ingressClassName | string |
"nginx"
|
Specify the ingress class that should be used for the ingress resource. It defines which ingress controller will manage this ingress resource |
ingress.secretName | string |
""
|
Specify the Kubernetes Secret that contains the TLS certificate and private key. Only used when `tls` is `true` |
ingress.tls | bool |
false
|
Specify whether ingress TLS should be enabled |
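Putting the ingress parameters together, a TLS-enabled variant of the earlier hostname example might look as follows (`conductor-tls` is an assumed secret name holding a production-ready certificate and key):

```yaml
# override-values.yaml
ingress:
  enabled: true
  host: mydomain-conductor.com
  ingressClassName: nginx
  tls: true
  secretName: conductor-tls   # assumed TLS secret; create it before installing
```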
kube-state-metrics | object | object |
Parameter group for the bitnami/kube-state-metrics Helm chart. Details: https://github.com/bitnami/charts/tree/main/bitnami/kube-state-metrics/README.md |
metrics_cron_job.clusterRole | object |
{
"annotations": {},
"labels": {}
}
|
Optionally specify annotations and labels for the cluster role |
metrics_cron_job.clusterRole.annotations | object |
{}
|
Optionally specify annotations to add to the cluster role |
metrics_cron_job.clusterRole.labels | object |
{}
|
Optionally specify labels to add to the cluster role |
metrics_cron_job.config.metrics_cleanup | string |
"true"
|
Whether metrics should be periodically cleaned. Defaults to true. |
metrics_cron_job.config.metrics_cleanup_age | string |
"1y"
|
How old metrics must be before the cleanup service removes them. Supports time intervals in days ("1d"), months ("1m") or years ("1y"). Defaults to 1 year. |
metrics_cron_job.config.metrics_cleanup_periodicity | string |
"0 1 * * *"
|
How often the metrics are cleaned, using crontab syntax. Defaults to once a day. |
metrics_cron_job.config.metrics_collection | string |
"true"
|
Whether metrics should be collected. Defaults to true. |
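The four `metrics_cron_job.config` values above could be combined, for example, to keep six months of metrics and clean weekly (note the values are strings, including the booleans; the schedule shown is illustrative):

```yaml
# override-values.yaml
metrics_cron_job:
  config:
    metrics_collection: "true"          # keep collecting metrics
    metrics_cleanup: "true"             # enable periodic cleanup
    metrics_cleanup_age: "6m"           # remove metrics older than 6 months
    metrics_cleanup_periodicity: "0 2 * * 0"  # crontab: Sundays at 02:00
```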
metrics_cron_job.containerSecurityContext | object |
{
"allowPrivilegeEscalation": false,
"capabilities": {
"drop": [
"ALL"
]
},
"enabled": true,
"runAsNonRoot": true,
"seccompProfile": {
"type": "RuntimeDefault"
}
}
|
Optionally specify the security context for the container. Defines security-related settings for the container |
metrics_cron_job.containerSecurityContext.allowPrivilegeEscalation | bool |
false
|
Optionally specify if privilege escalation is allowed. If false, privilege escalation is not allowed |
metrics_cron_job.containerSecurityContext.capabilities | object |
{
"drop": [
"ALL"
]
}
|
Optionally specify the Linux capabilities |
metrics_cron_job.containerSecurityContext.capabilities.drop | list |
[
"ALL"
]
|
Optionally specify the Linux capabilities to drop for the container |
metrics_cron_job.containerSecurityContext.enabled | bool |
true
|
Optionally specify if the security context is enabled. If true, security settings will be applied |
metrics_cron_job.containerSecurityContext.runAsNonRoot | bool |
true
|
Optionally specify if the container should run as a non-root user. If true, the container will not run as the root user |
metrics_cron_job.containerSecurityContext.seccompProfile | object |
{
"type": "RuntimeDefault"
}
|
Optionally specify the seccomp profile |
metrics_cron_job.containerSecurityContext.seccompProfile.type | string |
"RuntimeDefault"
|
Optionally specify the seccomp profile type to be used for the container |
metrics_cron_job.image | string |
"(registry-link)/metrics-cleanup:latest"
|
Specify the container image for the deployment. This is the image that will be used to create the container |
metrics_cron_job.imagePullPolicy | string |
"IfNotPresent"
|
Optionally specify the image pull policy for the deployment. Determines when the container image should be pulled |
metrics_cron_job.resources | object |
{}
|
Optionally specify the resource requests and limits for the container. Defines the amount of CPU and memory the container is guaranteed and allowed to use |
metrics_cron_job.serviceAccount | string |
"job-operator-sa"
|
Optionally specify the name of the service account for the deployment. Defines which service account will be used by the pods |
metrics_cron_job.serviceAccountAnnotations | object |
{}
|
Optionally specify annotations to add to the service account |
metrics_cron_job.serviceAccountLabels | object |
{}
|
Optionally specify labels to add to the service account |
metrics_job_operator.affinity | object |
{}
|
Optionally specify affinity rules for pod scheduling |
metrics_job_operator.clusterRole | object |
{
"annotations": {},
"labels": {}
}
|
Optionally specify annotations and labels for the cluster role |
metrics_job_operator.clusterRole.annotations | object |
{}
|
Optionally specify annotations to add to the cluster role |
metrics_job_operator.clusterRole.labels | object |
{}
|
Optionally specify labels to add to the cluster role |
metrics_job_operator.configMapAnnotations | object |
{}
|
Optionally specify annotations for the config map. Key-value pairs that can be used to attach metadata to the config map |
metrics_job_operator.configMapLabels | object |
{}
|
Optionally specify labels for the config map. Key-value pairs that can be used to categorize and select config maps |
metrics_job_operator.containerSecurityContext | object |
{
"allowPrivilegeEscalation": false,
"capabilities": {
"drop": [
"ALL"
]
},
"enabled": true,
"runAsNonRoot": true,
"seccompProfile": {
"type": "RuntimeDefault"
}
}
|
Optionally specify the security context for the container. Defines security-related settings for the container |
metrics_job_operator.containerSecurityContext.allowPrivilegeEscalation | bool |
false
|
Optionally specify if privilege escalation is allowed. If false, privilege escalation is not allowed |
metrics_job_operator.containerSecurityContext.capabilities | object |
{
"drop": [
"ALL"
]
}
|
Optionally specify the Linux capabilities |
metrics_job_operator.containerSecurityContext.capabilities.drop | list |
[
"ALL"
]
|
Optionally specify the Linux capabilities to drop for the container |
metrics_job_operator.containerSecurityContext.enabled | bool |
true
|
Optionally specify if the security context is enabled. If true, security settings will be applied |
metrics_job_operator.containerSecurityContext.runAsNonRoot | bool |
true
|
Optionally specify if the container should run as a non-root user. If true, the container will not run as the root user |
metrics_job_operator.containerSecurityContext.seccompProfile | object |
{
"type": "RuntimeDefault"
}
|
Optionally specify the seccomp profile |
metrics_job_operator.containerSecurityContext.seccompProfile.type | string |
"RuntimeDefault"
|
Optionally specify the seccomp profile type to be used for the container |
metrics_job_operator.deploymentAnnotations | object |
{}
|
Optionally specify annotations for the deployment. Key-value pairs that can be used to attach metadata to the deployment |
metrics_job_operator.deploymentLabels | object |
{}
|
Optionally specify labels for the deployment. Key-value pairs that can be used to categorize and select deployments |
metrics_job_operator.hostAliases | object |
{}
|
Optionally specify host aliases for the pod. Defines additional hostnames and IP addresses to be added to the pod’s /etc/hosts file |
metrics_job_operator.image | string |
"(registry-link)/metrics-job-operator:latest"
|
Specify the container image for the deployment. This is the image that will be used to create the container |
metrics_job_operator.imagePullPolicy | string |
"IfNotPresent"
|
Optionally specify the image pull policy for the deployment. Determines when the container image should be pulled |
metrics_job_operator.lifecycle | object |
{}
|
Optionally specify lifecycle hooks for the container. Defines actions to be taken at specific points in the container lifecycle |
metrics_job_operator.livenessProbe | object |
{}
|
Optionally specify liveness probes for the container. Defines the probe to check if the container is alive |
metrics_job_operator.nodeSelector | object |
{}
|
Optionally specify node selectors for scheduling the pods. Defines node labels that the pods must match |
metrics_job_operator.podAnnotations | object |
{}
|
Optionally specify annotations for the pod. Key-value pairs that can be used to attach metadata to the pod |
metrics_job_operator.podLabels | object |
{}
|
Optionally specify labels for the pod. Key-value pairs that can be used to categorize and select pods |
metrics_job_operator.podSecurityContext | object |
{}
|
Optionally specify the pod security context. Defines security-related settings for the pod |
metrics_job_operator.port | int |
8080
|
Optionally specify the port that the container will listen on |
metrics_job_operator.priorityClassName | object |
{}
|
Optionally specify the priority class for the pod. Defines the priority of the pod relative to other pods |
metrics_job_operator.readinessProbe | object |
{}
|
Optionally specify readiness probes for the container. Defines the probe to check if the container is ready to accept traffic |
metrics_job_operator.replicas | int |
1
|
Optionally specify the number of pod replicas to run |
metrics_job_operator.resources | object |
{
"limits": {
"memory": "512Mi"
},
"requests": {
"memory": "256Mi"
}
}
|
Optionally specify the resource requests and limits for the container. Defines the amount of CPU and memory the container is guaranteed and allowed to use |
metrics_job_operator.resources.limits | object |
{
"memory": "512Mi"
}
|
Optionally specify the memory limits for the container. Defines the maximum amount of memory the container is allowed to use |
metrics_job_operator.resources.requests | object |
{
"memory": "256Mi"
}
|
Optionally specify the memory requests for the container. Defines the minimum amount of memory the container needs |
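The default `metrics_job_operator` memory settings shown above can be overridden, for instance to raise the limit on a busier system (the 1Gi figure is illustrative, not a sizing recommendation):

```yaml
# override-values.yaml
metrics_job_operator:
  resources:
    requests:
      memory: "256Mi"   # guaranteed minimum
    limits:
      memory: "1Gi"     # hard cap; illustrative value
```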
metrics_job_operator.schedulerName | object |
{}
|
Optionally specify the scheduler name for the pod. Defines which scheduler will be used to schedule the pod |
metrics_job_operator.serviceAccount | string |
"job-operator-sa"
|
Optionally specify the name of the service account for the deployment. Defines which service account will be used by the pods |
metrics_job_operator.serviceAccountAnnotations | object |
{}
|
Optionally specify annotations to add to the service account |
metrics_job_operator.serviceAccountLabels | object |
{}
|
Optionally specify labels to add to the service account |
metrics_job_operator.serviceAccountName | object |
{}
|
Optionally specify the name of the service account for the deployment. Defines which service account will be used by the pods |
metrics_job_operator.serviceAnnotations | object |
{}
|
Optionally specify annotations for the service. Key-value pairs that can be used to attach metadata to the service |
metrics_job_operator.serviceLabels | object |
{}
|
Optionally specify labels for the service. Key-value pairs that can be used to categorize and select services |
metrics_job_operator.startupProbe | object |
{}
|
Optionally specify startup probes for the container. Defines the probe to check if the container has started successfully |
metrics_job_operator.terminationGracePeriodSeconds | object |
{}
|
Optionally specify the termination grace period for the pod. Defines the time to wait before forcefully terminating the pod |
metrics_job_operator.tolerations | list |
[]
|
Optionally specify tolerations for scheduling the pods. Defines how the pods tolerate node taints |
metrics_job_operator.topologySpreadConstraints | object |
{}
|
Optionally specify topology spread constraints for the pod. Defines constraints for spreading pods across nodes or other topological domains |
metrics_job_operator.updateStrategy | object |
{}
|
Optionally specify the update strategy for the deployment. Defines how updates to the deployment are applied |
mgmtworker | object | object |
Specify configuration for mgmtworker component |
mgmtworker.access | object |
{
"local_cluster": true
}
|
Optionally specify access settings for the local cluster |
mgmtworker.access.local_cluster | bool |
true
|
Optionally specify whether to allow access to the local cluster |
mgmtworker.affinity | object |
{}
|
Optionally specify affinity rules for pod scheduling |
mgmtworker.clusterRole | object |
{
"annotations": {},
"labels": {}
}
|
Optionally specify annotations and labels for the cluster role |
mgmtworker.clusterRole.annotations | object |
{}
|
Optionally specify annotations to add to the cluster role |
mgmtworker.clusterRole.labels | object |
{}
|
Optionally specify labels to add to the cluster role |
mgmtworker.configMap | object |
{
"annotations": {},
"labels": {}
}
|
Optionally specify annotations and labels for the config map |
mgmtworker.configMap.annotations | object |
{}
|
Optionally specify annotations to add to the config map |
mgmtworker.configMap.labels | object |
{}
|
Optionally specify labels to add to the config map |
mgmtworker.containerSecurityContext | object |
{
"allowPrivilegeEscalation": false,
"capabilities": {
"drop": [
"ALL"
]
},
"enabled": true,
"runAsNonRoot": true,
"seccompProfile": {
"type": "RuntimeDefault"
}
}
|
Optionally specify security context settings for the container |
mgmtworker.containerSecurityContext.allowPrivilegeEscalation | bool |
false
|
Optionally specify if the container is allowed to gain additional privileges |
mgmtworker.containerSecurityContext.capabilities | object |
{
"drop": [
"ALL"
]
}
|
Optionally specify the Linux capabilities |
mgmtworker.containerSecurityContext.capabilities.drop | list |
[
"ALL"
]
|
Optionally specify the Linux capabilities to drop for the container |
mgmtworker.containerSecurityContext.enabled | bool |
true
|
Optionally specify whether to apply security settings to the container |
mgmtworker.containerSecurityContext.runAsNonRoot | bool |
true
|
Optionally specify if the container must run as a non-root user |
mgmtworker.containerSecurityContext.seccompProfile | object |
{
"type": "RuntimeDefault"
}
|
Optionally specify the seccomp profile |
mgmtworker.containerSecurityContext.seccompProfile.type | string |
"RuntimeDefault"
|
Optionally specify the seccomp profile type to be used for the container |
mgmtworker.hostAliases | object |
{}
|
Optionally specify custom host-to-IP mappings for the pod |
mgmtworker.image | string |
"(registry-link)/cloudify-manager-mgmtworker:latest"
|
Specify the Docker image for the container |
mgmtworker.imagePullPolicy | string |
"IfNotPresent"
|
Optionally specify when Kubernetes should pull the image (Always, IfNotPresent, or Never) |
mgmtworker.lifecycle | object |
{}
|
Optionally specify the lifecycle hooks for the container |
mgmtworker.livenessProbe | object |
{}
|
Optionally specify the liveness probe configuration |
mgmtworker.nodeSelector | object |
{}
|
Optionally specify node selection constraints for pod scheduling |
mgmtworker.podAnnotations | object |
{}
|
Optionally specify annotations to add to the pods |
mgmtworker.podLabels | object |
{}
|
Optionally specify labels for the pod |
mgmtworker.podManagementPolicy | object |
{}
|
Optionally specify the pod management policy for the stateful set |
mgmtworker.podSecurityContext | object |
{}
|
Optionally specify the security context for the pod |
mgmtworker.priorityClassName | object |
{}
|
Optionally specify the priority class name for the pod |
mgmtworker.readinessProbe | object |
{}
|
Optionally specify the readiness probe configuration |
mgmtworker.replicas | int |
1
|
Optionally specify the number of pod replicas to run |
mgmtworker.resources | object |
{}
|
Optionally specify resource requests and limits for the container |
mgmtworker.schedulerName | object |
{}
|
Optionally specify the scheduler to use for the pod |
mgmtworker.serviceAccount | object |
{
"annotations": {},
"automountServiceAccountToken": true,
"labels": {},
"name": "mgmtworker-serviceaccount"
}
|
Optionally specify the service account settings |
mgmtworker.serviceAccount.annotations | object |
{}
|
Optionally specify annotations to add to the service account |
mgmtworker.serviceAccount.automountServiceAccountToken | bool |
true
|
Optionally specify whether to automatically mount the service account token |
mgmtworker.serviceAccount.labels | object |
{}
|
Optionally specify labels to add to the service account |
mgmtworker.serviceAccount.name | string |
"mgmtworker-serviceaccount"
|
Optionally specify the name of the service account |
mgmtworker.serviceAnnotations | object |
{}
|
Optionally specify annotations to add to the service |
mgmtworker.serviceLabels | object |
{}
|
Optionally specify labels to add to the service |
mgmtworker.startupProbe | object |
{}
|
Optionally specify the startup probe configuration |
mgmtworker.statefulSetAnnotations | object |
{}
|
Optionally specify annotations for the stateful set |
mgmtworker.statefulSetLabels | object |
{}
|
Optionally specify labels for the stateful set |
mgmtworker.terminationGracePeriodSeconds | object |
{}
|
Optionally specify the grace period for pod termination |
mgmtworker.tolerations | list |
[]
|
Optionally specify tolerations for pod scheduling on tainted nodes |
mgmtworker.topologySpreadConstraints | object |
{}
|
Optionally specify topology spread constraints for pod distribution |
mgmtworker.updateStrategy | object |
{}
|
Optionally specify the update strategy for the stateful set |
mgmtworker.volume | object |
{
"pvc": {
"class": "",
"modes": [
"ReadWriteOnce"
],
"name": "mgmtworker-pvc",
"size": "10Gi"
}
}
|
Optionally specify the volume settings |
mgmtworker.volume.pvc | object |
{
"class": "",
"modes": [
"ReadWriteOnce"
],
"name": "mgmtworker-pvc",
"size": "10Gi"
}
|
Optionally specify the persistent volume claim settings |
mgmtworker.volume.pvc.class | string |
""
|
Optionally specify the storage class for the persistent volume claim |
mgmtworker.volume.pvc.modes | list |
[
"ReadWriteOnce"
]
|
Optionally specify the access modes for the persistent volume claim |
mgmtworker.volume.pvc.name | string |
"mgmtworker-pvc"
|
Optionally specify the name of the persistent volume claim |
mgmtworker.volume.pvc.size | string |
"10Gi"
|
Optionally specify the size of the persistent volume claim |
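As noted in the overview, on a WRCP host it is recommended to use the cephfs storage class for `mgmtworker.volume.pvc.class` and all `storageClass` occurrences. A minimal `override-values.yaml` fragment sketching this (adjust the class name to whatever `kubectl get storageclass` reports in your cluster):

```yaml
# override-values.yaml (fragment) -- WRCP example; "cephfs" is the
# recommended class on WRCP, substitute your cluster's storage class
mgmtworker:
  volume:
    pvc:
      class: "cephfs"
postgresql:
  primary:
    persistence:
      storageClass: "cephfs"
rabbitmq:
  persistence:
    storageClass: "cephfs"
```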
nameOverride | string |
""
|
Optionally specify a string to partially override the name template |
nginx | object | object |
Specify configuration for nginx component |
nginx.affinity | object |
{}
|
Optionally specify affinity rules for pod scheduling |
nginx.containerSecurityContext | object |
{
"allowPrivilegeEscalation": false,
"capabilities": {
"drop": [
"ALL"
]
},
"enabled": true,
"runAsNonRoot": true,
"seccompProfile": {
"type": "RuntimeDefault"
}
}
|
Optionally specify security context settings for the container |
nginx.containerSecurityContext.allowPrivilegeEscalation | bool |
false
|
Optionally specify if the container is allowed to gain additional privileges |
nginx.containerSecurityContext.capabilities | object |
{
"drop": [
"ALL"
]
}
|
Optionally specify the Linux capabilities |
nginx.containerSecurityContext.capabilities.drop | list |
[
"ALL"
]
|
Optionally specify the Linux capabilities to drop for the container |
nginx.containerSecurityContext.enabled | bool |
true
|
Optionally specify whether to apply security settings to the container |
nginx.containerSecurityContext.runAsNonRoot | bool |
true
|
Optionally specify if the container must run as a non-root user |
nginx.containerSecurityContext.seccompProfile | object |
{
"type": "RuntimeDefault"
}
|
Optionally specify the seccomp profile |
nginx.containerSecurityContext.seccompProfile.type | string |
"RuntimeDefault"
|
Optionally specify the seccomp profile type to be used for the container |
nginx.deploymentAnnotations | object |
{}
|
Optionally specify annotations to add to the deployment |
nginx.deploymentLabels | object |
{}
|
Optionally specify labels to add to the deployment |
nginx.envvars | object |
{
"workerConnections": 4096,
"workerProcess": "auto"
}
|
Optionally specify environment variables for the container |
nginx.envvars.workerConnections | int |
4096
|
Optionally specify the number of worker connections for NGINX |
nginx.envvars.workerProcess | string |
"auto"
|
Optionally specify the number of worker processes for NGINX |
nginx.hostAliases | object |
{}
|
Optionally specify custom host-to-IP mappings for the pod |
nginx.image | string |
"nginxinc/nginx-unprivileged"
|
Specify the Docker image for the container |
nginx.imagePullPolicy | string |
"IfNotPresent"
|
Optionally specify when Kubernetes should pull the image (Always, IfNotPresent, or Never) |
nginx.lifecycle | object |
{}
|
Optionally specify the lifecycle hooks for the container |
nginx.livenessProbe | object |
{}
|
Optionally specify the liveness probe configuration |
nginx.masterAnnotations | object |
{}
|
Optionally specify annotations to add to the master component |
nginx.masterLabels | object |
{}
|
Optionally specify labels to add to the master component |
nginx.nodeSelector | object |
{}
|
Optionally specify node selection constraints for pod scheduling |
nginx.podAnnotations | object |
{}
|
Optionally specify annotations to add to the pods |
nginx.podLabels | object |
{}
|
Optionally specify labels to add to the pods |
nginx.podSecurityContext | object |
{}
|
Optionally specify the security context for the pod |
nginx.priorityClassName | object |
{}
|
Optionally specify the priority class name for the pod |
nginx.probes | object |
{
"liveness": {
"enabled": true,
"failureThreshold": 3,
"initialDelaySeconds": 20,
"periodSeconds": 20,
"successThreshold": 1,
"timeoutSeconds": 10
}
}
|
Optionally specify probe configurations for the container |
nginx.probes.liveness | object |
{
"enabled": true,
"failureThreshold": 3,
"initialDelaySeconds": 20,
"periodSeconds": 20,
"successThreshold": 1,
"timeoutSeconds": 10
}
|
Optionally specify liveness probe settings for the container |
nginx.probes.liveness.enabled | bool |
true
|
Optionally specify whether the liveness probe is enabled to check container health |
nginx.probes.liveness.failureThreshold | int |
3
|
Optionally specify the number of failed liveness checks before restarting the container |
nginx.probes.liveness.initialDelaySeconds | int |
20
|
Optionally specify the initial delay before the liveness probe starts |
nginx.probes.liveness.periodSeconds | int |
20
|
Optionally specify the interval between liveness probe checks |
nginx.probes.liveness.successThreshold | int |
1
|
Optionally specify the number of successful liveness checks before considering the container healthy |
nginx.probes.liveness.timeoutSeconds | int |
10
|
Optionally specify the timeout for each liveness probe |
nginx.rate_limit | object |
{
"burst": 30,
"delay": 20,
"enabled": true,
"memory": "200m",
"rate": "100r/s"
}
|
Optionally specify request rate-limits. If enabled, requests are rate-limited based on the remote IP address. Requests that authenticate with a valid execution-token are never rate-limited |
nginx.rate_limit.burst | int |
30
|
Optionally specify the burst size. Burst and delay manage the request queueing mechanism. With the default settings of burst=30 and delay=20, up to 30 requests can be queued per IP before NGINX starts responding with 503; the first 20 queued requests are served without delay, subsequent requests are delayed according to the rate, and any request beyond the 30-request queue receives a 503 |
nginx.rate_limit.delay | int |
20
|
Optionally specify the delay. Burst and delay manage the request queueing mechanism; see `nginx.rate_limit.burst` for details |
nginx.rate_limit.enabled | bool |
true
|
Optionally specify whether the rate_limit is enabled |
nginx.rate_limit.memory | string |
"200m"
|
Optionally specify the amount of memory allocated for rate limiting |
nginx.rate_limit.rate | string |
"100r/s"
|
Optionally specify the rate, as a string in the form "10r/s" (10 requests per second) or "600r/m" (600 requests per minute) |
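The rate-limit parameters above combine as follows: `rate` caps sustained throughput per client IP, while `burst` and `delay` control the queue. A hedged `override-values.yaml` fragment (the numbers are illustrative examples, not tuning recommendations):

```yaml
# override-values.yaml (fragment) -- illustrative rate-limit tuning
nginx:
  rate_limit:
    enabled: true
    rate: "200r/s"   # also accepts per-minute form, e.g. "600r/m"
    burst: 50        # up to 50 requests queued per IP before 503
    delay: 30        # first 30 queued requests served without delay
    memory: "200m"   # shared memory allocated for tracking client IPs
```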
nginx.readinessProbe | object |
{}
|
Optionally specify the readiness probe configuration |
nginx.replicas | int |
1
|
Optionally specify the number of pod replicas to run |
nginx.resources | object |
{}
|
Optionally specify resource requests and limits for the container |
nginx.schedulerName | object |
{}
|
Optionally specify the scheduler to use for the pod |
nginx.serviceAccountName | object |
{}
|
Optionally specify the service account name to use |
nginx.startupProbe | object |
{}
|
Optionally specify the startup probe configuration |
nginx.templatesAnnotations | object |
{}
|
Optionally specify annotations to add to the templates |
nginx.templatesLabels | object |
{}
|
Optionally specify labels to add to the templates |
nginx.terminationGracePeriodSeconds | object |
{}
|
Optionally specify the grace period for pod termination |
nginx.tolerations | list |
[]
|
Optionally specify tolerations for pod scheduling on tainted nodes |
nginx.topologySpreadConstraints | object |
{}
|
Optionally specify topology spread constraints for pod distribution |
nginx.updateStrategy | object |
{}
|
Optionally specify the update strategy for the deployment |
oas_rbac | object | object |
Specify configuration for oas_rbac ClusterRole |
oas_rbac.annotations | object |
{}
|
Optionally specify the annotations to add to the service account |
oas_rbac.labels | object |
{}
|
Optionally specify the labels to add to the service account |
oas_service_account | object |
{
"serviceAccount": {
"name": "conductor-operator-sa"
}
}
|
Specify configuration for oas_service_account serviceAccount |
oas_service_account.serviceAccount | object |
{
"name": "conductor-operator-sa"
}
|
Optionally specify the service account configuration |
oas_service_account.serviceAccount.name | string |
"conductor-operator-sa"
|
Optionally specify the name of the service account to use |
postgresql | object | object |
Specify configuration for postgresql component |
postgresql.auth | object |
{
"database": "cloudify_db",
"password": "cloudify",
"username": "cloudify"
}
|
Optionally specify authentication settings for the PostgreSQL database |
postgresql.auth.database | string |
"cloudify_db"
|
Optionally specify the name of the PostgreSQL database to use |
postgresql.auth.password | string |
"cloudify"
|
Optionally specify the password for accessing the PostgreSQL database |
postgresql.auth.username | string |
"cloudify"
|
Optionally specify the username for accessing the PostgreSQL database |
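Since the chart ships with well-known default PostgreSQL credentials, they should be overridden before any non-test install. A sketch of the override (the password shown is a placeholder, not a suggested value):

```yaml
# override-values.yaml (fragment) -- replace the default PostgreSQL
# credentials; the password below is a placeholder
postgresql:
  auth:
    database: "cloudify_db"
    username: "cloudify"
    password: "<strong-password-here>"
```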
postgresql.containerPorts | object |
{
"postgresql": 5432
}
|
Optionally specify container ports to expose for the PostgreSQL service |
postgresql.containerPorts.postgresql | int |
5432
|
Optionally specify the port number for PostgreSQL |
postgresql.enableNetworkPolicy | bool |
true
|
Optionally enable or disable network policies for the PostgreSQL pods |
postgresql.enabled | bool |
true
|
Optionally enable or disable the PostgreSQL deployment |
postgresql.fullnameOverride | string |
"postgresql"
|
Optionally specify a custom name for the PostgreSQL deployment |
postgresql.image | object |
{
"pullPolicy": "IfNotPresent",
"tag": "15.3.0-debian-11-r17"
}
|
Optionally specify the image settings for the PostgreSQL container |
postgresql.image.pullPolicy | string |
"IfNotPresent"
|
Optionally specify when Kubernetes should pull the image (Always, IfNotPresent, or Never) |
postgresql.image.tag | string |
"15.3.0-debian-11-r17"
|
Optionally specify the tag of the PostgreSQL image to use |
postgresql.metrics | object |
{
"containerSecurityContext": {
"allowPrivilegeEscalation": false,
"capabilities": {
"drop": [
"ALL"
]
},
"enabled": true,
"runAsNonRoot": true,
"runAsUser": 1001,
"seccompProfile": {
"type": "RuntimeDefault"
}
},
"enabled": true
}
|
Optionally specify the metrics settings for monitoring PostgreSQL |
postgresql.metrics.containerSecurityContext | object |
{
"allowPrivilegeEscalation": false,
"capabilities": {
"drop": [
"ALL"
]
},
"enabled": true,
"runAsNonRoot": true,
"runAsUser": 1001,
"seccompProfile": {
"type": "RuntimeDefault"
}
}
|
Optionally specify security context settings for the metrics container |
postgresql.metrics.containerSecurityContext.allowPrivilegeEscalation | bool |
false
|
Optionally specify if the container is allowed to gain additional privileges |
postgresql.metrics.containerSecurityContext.capabilities | object |
{
"drop": [
"ALL"
]
}
|
Optionally specify the Linux capabilities |
postgresql.metrics.containerSecurityContext.capabilities.drop | list |
[
"ALL"
]
|
Optionally specify the Linux capabilities to drop for the container |
postgresql.metrics.containerSecurityContext.enabled | bool |
true
|
Optionally specify whether to apply security settings to the container |
postgresql.metrics.containerSecurityContext.runAsNonRoot | bool |
true
|
Optionally specify if the container must run as a non-root user |
postgresql.metrics.containerSecurityContext.runAsUser | int |
1001
|
Optionally specify the user ID to run the container as |
postgresql.metrics.containerSecurityContext.seccompProfile | object |
{
"type": "RuntimeDefault"
}
|
Optionally specify the seccomp profile |
postgresql.metrics.containerSecurityContext.seccompProfile.type | string |
"RuntimeDefault"
|
Optionally specify the seccomp profile type to be used for the container |
postgresql.metrics.enabled | bool |
true
|
Optionally enable or disable metrics collection |
postgresql.podLabels | object |
{}
|
Optionally specify labels to add to the pods |
postgresql.primary | object | object |
Optionally specify the configuration for the primary PostgreSQL instance |
postgresql.primary.affinity | object |
{}
|
Optionally specify affinity rules for pod scheduling |
postgresql.primary.containerSecurityContext | object |
{
"allowPrivilegeEscalation": false,
"capabilities": {
"drop": [
"ALL"
]
},
"enabled": true,
"runAsNonRoot": true,
"runAsUser": 1001,
"seccompProfile": {
"type": "RuntimeDefault"
}
}
|
Optionally specify security context settings for the primary PostgreSQL container |
postgresql.primary.containerSecurityContext.allowPrivilegeEscalation | bool |
false
|
Optionally specify if the container is allowed to gain additional privileges |
postgresql.primary.containerSecurityContext.capabilities | object |
{
"drop": [
"ALL"
]
}
|
Optionally specify the Linux capabilities |
postgresql.primary.containerSecurityContext.capabilities.drop | list |
[
"ALL"
]
|
Optionally specify the Linux capabilities to drop for the container |
postgresql.primary.containerSecurityContext.enabled | bool |
true
|
Optionally specify whether to apply security settings to the container |
postgresql.primary.containerSecurityContext.runAsNonRoot | bool |
true
|
Optionally specify if the container must run as a non-root user |
postgresql.primary.containerSecurityContext.runAsUser | int |
1001
|
Optionally specify the user ID to run the container as |
postgresql.primary.containerSecurityContext.seccompProfile | object |
{
"type": "RuntimeDefault"
}
|
Optionally specify the seccomp profile |
postgresql.primary.containerSecurityContext.seccompProfile.type | string |
"RuntimeDefault"
|
Optionally specify the seccomp profile type to be used for the container |
postgresql.primary.nodeSelector | object |
{}
|
Optionally specify node selection constraints for pod scheduling |
postgresql.primary.persistence | object |
{
"accessModes": [
"ReadWriteOnce"
],
"size": "8Gi",
"storageClass": ""
}
|
Optionally specify the persistence settings for the primary PostgreSQL instance |
postgresql.primary.persistence.accessModes | list |
[
"ReadWriteOnce"
]
|
Optionally specify the access mode for the persistent volume claim (e.g., ReadWriteOnce) |
postgresql.primary.persistence.size | string |
"8Gi"
|
Optionally specify the size of the persistent volume |
postgresql.primary.persistence.storageClass | string |
""
|
Optionally specify the storage class for the persistent volume claim |
postgresql.primary.resources | object |
{
"limits": {
"cpu": 2,
"memory": "2Gi"
},
"requests": {
"cpu": 0.5,
"memory": "256Mi"
}
}
|
Optionally specify resources for the primary PostgreSQL container |
postgresql.primary.resources.limits | object |
{
"cpu": 2,
"memory": "2Gi"
}
|
Optionally specify limits for the primary PostgreSQL container |
postgresql.primary.resources.limits.cpu | int |
2
|
Optionally specify the maximum number of CPU cores the container can use |
postgresql.primary.resources.limits.memory | string |
"2Gi"
|
Optionally specify the maximum amount of memory the container can use |
postgresql.primary.resources.requests | object |
{
"cpu": 0.5,
"memory": "256Mi"
}
|
Optionally specify requests for the primary PostgreSQL container |
postgresql.primary.resources.requests.cpu | float |
0.5
|
Optionally specify the minimum number of CPU cores the container is guaranteed |
postgresql.primary.resources.requests.memory | string |
"256Mi"
|
Optionally specify the minimum amount of memory the container is guaranteed |
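The primary PostgreSQL requests and limits above can be overridden together in one block. An example sizing only, assuming your nodes have the capacity; tune to your workload:

```yaml
# override-values.yaml (fragment) -- example sizing, not a recommendation
postgresql:
  primary:
    resources:
      requests:
        cpu: 1
        memory: "512Mi"
      limits:
        cpu: 2
        memory: "2Gi"
```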
postgresql.primary.tolerations | list |
[]
|
Optionally specify tolerations for pod scheduling on tainted nodes |
postgresql.readReplicas | object | object |
Optionally specify the configuration for read replicas (ignored if architecture is not "replication") |
postgresql.readReplicas.affinity | object |
{}
|
Optionally specify affinity rules for pod scheduling |
postgresql.readReplicas.containerSecurityContext | object |
{
"allowPrivilegeEscalation": false,
"capabilities": {
"drop": [
"ALL"
]
},
"enabled": true,
"runAsNonRoot": true,
"runAsUser": 1001,
"seccompProfile": {
"type": "RuntimeDefault"
}
}
|
Optionally specify security context settings for the read replicas |
postgresql.readReplicas.containerSecurityContext.allowPrivilegeEscalation | bool |
false
|
Optionally specify if the container is allowed to gain additional privileges |
postgresql.readReplicas.containerSecurityContext.capabilities | object |
{
"drop": [
"ALL"
]
}
|
Optionally specify the Linux capabilities |
postgresql.readReplicas.containerSecurityContext.capabilities.drop | list |
[
"ALL"
]
|
Optionally specify the Linux capabilities to drop for the container |
postgresql.readReplicas.containerSecurityContext.enabled | bool |
true
|
Optionally specify whether to apply security settings to the container |
postgresql.readReplicas.containerSecurityContext.runAsNonRoot | bool |
true
|
Optionally specify if the container must run as a non-root user |
postgresql.readReplicas.containerSecurityContext.runAsUser | int |
1001
|
Optionally specify the user ID to run the container as |
postgresql.readReplicas.containerSecurityContext.seccompProfile | object |
{
"type": "RuntimeDefault"
}
|
Optionally specify the seccomp profile |
postgresql.readReplicas.containerSecurityContext.seccompProfile.type | string |
"RuntimeDefault"
|
Optionally specify the seccomp profile type to be used for the container |
postgresql.readReplicas.nodeSelector | object |
{}
|
Optionally specify node selection constraints for pod scheduling |
postgresql.readReplicas.persistence | object |
{
"accessModes": [
"ReadWriteOnce"
],
"size": "8Gi",
"storageClass": ""
}
|
Optionally specify the persistence settings for the read replicas |
postgresql.readReplicas.persistence.accessModes | list |
[
"ReadWriteOnce"
]
|
Optionally specify the access mode for the persistent volume claim (e.g., ReadWriteOnce) |
postgresql.readReplicas.persistence.size | string |
"8Gi"
|
Optionally specify the size of the persistent volume |
postgresql.readReplicas.persistence.storageClass | string |
""
|
Optionally specify the storage class for the persistent volume claim |
postgresql.readReplicas.tolerations | list |
[]
|
Optionally specify tolerations for pod scheduling on tainted nodes |
prometheus | object | object |
Parameter group for the bitnami/prometheus Helm chart. Details: https://github.com/bitnami/charts/blob/main/bitnami/prometheus/README.md |
rabbitmq | object | object |
Specify configuration for rabbitmq component |
rabbitmq.advancedConfiguration | string |
"[\n {rabbit, [\n {consumer_timeout, undefined}\n ]}\n]."
|
Optionally specify the advanced RabbitMQ configuration |
rabbitmq.affinity | object |
{}
|
Optionally specify affinity rules for pod scheduling |
rabbitmq.auth | object |
{
"erlangCookie": "cloudify-erlang-cookie",
"password": "c10udify",
"tls": {
"enabled": true,
"existingSecret": "rabbitmq-ssl-certs",
"failIfNoPeerCert": false
},
"username": "cloudify"
}
|
Optionally specify the authentication settings for RabbitMQ |
rabbitmq.auth.erlangCookie | string |
"cloudify-erlang-cookie"
|
Optionally specify the Erlang cookie for cluster communication |
rabbitmq.auth.password | string |
"c10udify"
|
Optionally specify the password for accessing RabbitMQ |
rabbitmq.auth.tls | object |
{
"enabled": true,
"existingSecret": "rabbitmq-ssl-certs",
"failIfNoPeerCert": false
}
|
Optionally specify TLS settings for RabbitMQ |
rabbitmq.auth.tls.enabled | bool |
true
|
Optionally enable or disable TLS for RabbitMQ |
rabbitmq.auth.tls.existingSecret | string |
"rabbitmq-ssl-certs"
|
Optionally specify the name of the existing secret containing TLS certificates |
rabbitmq.auth.tls.failIfNoPeerCert | bool |
false
|
Optionally specify whether to fail if no peer certificate is provided |
rabbitmq.auth.username | string |
"cloudify"
|
Optionally specify the username for accessing RabbitMQ |
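As with PostgreSQL, the default RabbitMQ password and Erlang cookie are published defaults and should be replaced before production use. A sketch (both values below are placeholders):

```yaml
# override-values.yaml (fragment) -- replace the default RabbitMQ
# credentials and Erlang cookie; both values are placeholders
rabbitmq:
  auth:
    username: "cloudify"
    password: "<strong-password-here>"
    erlangCookie: "<random-cookie-string>"
```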
rabbitmq.containerSecurityContext | object |
{
"allowPrivilegeEscalation": false,
"capabilities": {
"drop": [
"ALL"
]
},
"enabled": true,
"runAsNonRoot": true,
"runAsUser": 1001,
"seccompProfile": {
"type": "RuntimeDefault"
}
}
|
Optionally specify security context settings for the RabbitMQ container |
rabbitmq.containerSecurityContext.allowPrivilegeEscalation | bool |
false
|
Optionally specify if the container is allowed to gain additional privileges |
rabbitmq.containerSecurityContext.capabilities | object |
{
"drop": [
"ALL"
]
}
|
Optionally specify the Linux capabilities |
rabbitmq.containerSecurityContext.capabilities.drop | list |
[
"ALL"
]
|
Optionally specify the Linux capabilities to drop for the container |
rabbitmq.containerSecurityContext.enabled | bool |
true
|
Optionally specify whether to apply security settings to the container |
rabbitmq.containerSecurityContext.runAsNonRoot | bool |
true
|
Optionally specify if the container must run as a non-root user |
rabbitmq.containerSecurityContext.runAsUser | int |
1001
|
Optionally specify the user ID to run the container as |
rabbitmq.containerSecurityContext.seccompProfile | object |
{
"type": "RuntimeDefault"
}
|
Optionally specify the seccomp profile |
rabbitmq.containerSecurityContext.seccompProfile.type | string |
"RuntimeDefault"
|
Optionally specify the seccomp profile type to be used for the container |
rabbitmq.enableNetworkPolicy | bool |
true
|
Optionally enable or disable network policies for the RabbitMQ pods |
rabbitmq.enabled | bool |
true
|
Optionally enable or disable the RabbitMQ deployment |
rabbitmq.extraConfiguration | string |
"management.ssl.port = 15671\nmanagement.ssl.cacertfile = /opt/bitnami/rabbitmq/certs/ca_certificate.pem\nmanagement.ssl.certfile = /opt/bitnami/rabbitmq/certs/server_certificate.pem\nmanagement.ssl.keyfile = /opt/bitnami/rabbitmq/certs/server_key.pem"
|
Optionally specify the extra RabbitMQ configuration |
rabbitmq.extraSecrets | object | object |
Optionally specify additional secrets for RabbitMQ |
rabbitmq.extraSecrets.rabbitmq-load-definition | object | object |
Optionally specify additional secrets for RabbitMQ, particularly for loading definitions |
rabbitmq.extraSecrets.rabbitmq-load-definition."load_definition.json" | string |
"{\n \"vhosts\": [\n {\n \"name\": \"/\"\n }\n ],\n \"users\": [\n {\n \"hashing_algorithm\": \"rabbit_password_hashing_sha256\",\n \"name\": \"{{ .Values.auth.username }}\",\n \"password\": \"{{ .Values.auth.password }}\",\n \"tags\": \"administrator\"\n }\n ],\n \"permissions\": [\n {\n \"user\": \"{{ .Values.auth.username }}\",\n \"vhost\": \"/\",\n \"configure\": \".*\",\n \"write\": \".*\",\n \"read\": \".*\"\n }\n ],\n \"policies\": [\n {\n \"name\": \"logs_queue_message_policy\",\n \"vhost\": \"/\",\n \"pattern\": \"^cloudify-log$\",\n \"priority\": 100,\n \"apply-to\": \"queues\",\n \"definition\": {\n \"message-ttl\": 1200000,\n \"max-length\": 1000000,\n \"ha-mode\": \"all\",\n \"ha-sync-mode\": \"automatic\",\n \"ha-sync-batch-size\": 50\n }\n },\n {\n \"name\": \"events_queue_message_policy\",\n \"vhost\": \"/\",\n \"pattern\": \"^cloudify-events$\",\n \"priority\": 100,\n \"apply-to\": \"queues\",\n \"definition\": {\n \"message-ttl\": 1200000,\n \"max-length\": 1000000,\n \"ha-mode\": \"all\",\n \"ha-sync-mode\": \"automatic\",\n \"ha-sync-batch-size\": 50\n }\n },\n {\n \"name\": \"default_policy\",\n \"vhost\": \"/\",\n \"pattern\": \"^\",\n \"priority\": 1,\n \"apply-to\": \"queues\",\n \"definition\": {\n \"ha-mode\": \"all\",\n \"ha-sync-mode\": \"automatic\",\n \"ha-sync-batch-size\": 50\n }\n }\n ],\n \"queues\": [\n {\n \"arguments\": {},\n \"auto_delete\": false,\n \"durable\": true,\n \"name\": \"cloudify.management_operation\",\n \"type\": \"classic\",\n \"vhost\": \"/\"\n },\n {\n \"arguments\": {},\n \"auto_delete\": false,\n \"durable\": true,\n \"name\": \"cloudify.management_workflow\",\n \"type\": \"classic\",\n \"vhost\": \"/\"\n }\n ],\n \"bindings\": [\n {\n \"arguments\": {},\n \"destination\": \"cloudify.management_operation\",\n \"destination_type\": \"queue\",\n \"routing_key\": \"operation\",\n \"source\": \"cloudify.management\",\n \"vhost\": \"/\"\n },\n {\n \"arguments\": {},\n \"destination\": 
\"cloudify.management_workflow\",\n \"destination_type\": \"queue\",\n \"routing_key\": \"workflow\",\n \"source\": \"cloudify.management\",\n \"vhost\": \"/\"\n }\n ],\n \"exchanges\": [\n {\n \"arguments\": {},\n \"auto_delete\": false,\n \"durable\": true,\n \"name\": \"cloudify.management\",\n \"type\": \"direct\",\n \"vhost\": \"/\"\n }\n ]\n}\n"
|
Optionally specify the content of the RabbitMQ load definition in JSON format |
rabbitmq.fullnameOverride | string |
"rabbitmq"
|
Optionally specify a custom name for the RabbitMQ deployment |
rabbitmq.image | object |
{
"pullPolicy": "IfNotPresent",
"tag": "3.12.2-debian-11-r8"
}
|
Optionally specify the image settings for the RabbitMQ container |
rabbitmq.image.pullPolicy | string |
"IfNotPresent"
|
Optionally specify when Kubernetes should pull the image (Always, IfNotPresent, or Never) |
rabbitmq.image.tag | string |
"3.12.2-debian-11-r8"
|
Optionally specify the tag of the RabbitMQ image to use |
rabbitmq.loadDefinition | object |
{
"enabled": true,
"existingSecret": "rabbitmq-load-definition"
}
|
Optionally specify the load definition settings for RabbitMQ |
rabbitmq.loadDefinition.enabled | bool |
true
|
Optionally enable or disable loading of definitions |
rabbitmq.loadDefinition.existingSecret | string |
"rabbitmq-load-definition"
|
Optionally specify the existing secret containing the load definition |
rabbitmq.metrics | object |
{
"enabled": true
}
|
Optionally enable or disable metrics collection for RabbitMQ |
rabbitmq.metrics.enabled | bool |
true
|
Optionally enable or disable metrics collection |
rabbitmq.nodeSelector | object |
{}
|
Optionally specify node selection constraints for pod scheduling |
rabbitmq.persistence | object |
{
"accessModes": [
"ReadWriteOnce"
],
"size": "8Gi",
"storageClass": ""
}
|
Optionally specify the persistence settings for RabbitMQ |
rabbitmq.persistence.accessModes | list |
[
"ReadWriteOnce"
]
|
Optionally specify the access mode for the persistent volume claim (e.g., ReadWriteOnce) |
rabbitmq.persistence.size | string |
"8Gi"
|
Optionally specify the size of the persistent volume |
rabbitmq.persistence.storageClass | string |
""
|
Optionally specify the storage class for the persistent volume claim |
rabbitmq.plugins | string |
"rabbitmq_management rabbitmq_prometheus rabbitmq_tracing rabbitmq_peer_discovery_k8s"
|
Optionally specify the RabbitMQ plugins to be enabled |
rabbitmq.podLabels | object |
{}
|
Optionally specify labels to add to the pods |
rabbitmq.resources | object |
{
"limits": {
"cpu": 4,
"memory": "1Gi"
},
"requests": {
"cpu": 0.5,
"memory": "512Mi"
}
}
|
Optionally specify resources for the RabbitMQ container |
rabbitmq.resources.limits | object |
{
"cpu": 4,
"memory": "1Gi"
}
|
Optionally specify resource limits for the RabbitMQ container |
rabbitmq.resources.limits.cpu | int |
4
|
Optionally specify the maximum number of CPU cores the container can use |
rabbitmq.resources.limits.memory | string |
"1Gi"
|
Optionally specify the maximum amount of memory the container can use |
rabbitmq.resources.requests | object |
{
"cpu": 0.5,
"memory": "512Mi"
}
|
Optionally specify resource requests for the RabbitMQ container |
rabbitmq.resources.requests.cpu | float |
0.5
|
Optionally specify the minimum number of CPU cores the container is guaranteed |
rabbitmq.resources.requests.memory | string |
"512Mi"
|
Optionally specify the minimum amount of memory the container is guaranteed |
rabbitmq.service | object |
{
"extraPorts": [
{
"name": "manager-ssl",
"port": 15671,
"targetPort": 15671
}
],
"ports": {
"metrics": 15692
}
}
|
Optionally specify the service configuration for RabbitMQ |
rabbitmq.service.extraPorts | list |
[
{
"name": "manager-ssl",
"port": 15671,
"targetPort": 15671
}
]
|
Optionally specify additional ports for the RabbitMQ service |
rabbitmq.service.ports | object |
{
"metrics": 15692
}
|
Optionally specify the ports for the RabbitMQ service |
rabbitmq.service.ports.metrics | int |
15692
|
Optionally specify the port for metrics |
rabbitmq.tolerations | list |
[]
|
Optionally specify tolerations for pod scheduling on tainted nodes |
resources | object | object |
Specify custom resources for Conductor |
resources.packages | object | object |
Specify custom packages to be used in Conductor |
resources.packages.agents | object |
{
"cloudify-windows-agent.exe": "https://cloudify-release-eu.s3.amazonaws.com/cloudify/7.0.0/ga-release/cloudify-windows-agent_7.0.0-ga.exe",
"manylinux-aarch64-agent.tar.gz": "https://cloudify-release-eu.s3.amazonaws.com/cloudify/7.0.0/ga-release/manylinux-aarch64-agent_7.0.0-ga.tar.gz",
"manylinux-x86_64-agent.tar.gz": "https://cloudify-release-eu.s3.amazonaws.com/cloudify/7.0.0/ga-release/manylinux-x86_64-agent_7.0.0-ga.tar.gz"
}
|
Optionally specify the agent package download link for each operating system |
rest_api_server | object | object |
Specify configuration for rest_api_server component |
rest_api_server.affinity | object |
{}
|
Optionally specify the affinity rules for scheduling the pods. Defines rules for pod placement based on node characteristics |
rest_api_server.bind_host | string |
"::"
|
Optionally specify the bind host address for the service. The default `"::"` binds to IPv6 when available (e.g. on an IPv6 cluster) and falls back to IPv4 otherwise; change it to `0.0.0.0` on IPv4-only Kubernetes clusters if the default fails. IMPORTANT: the IPv6-to-IPv4 fallback might not work on all container base systems or on misconfigured Kubernetes clusters and hosts |
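For an IPv4-only cluster where the default dual-stack bind fails, the override described above looks like this:

```yaml
# override-values.yaml (fragment) -- force IPv4-only binding when the
# default "::" bind fails on an IPv4-only cluster
rest_api_server:
  bind_host: "0.0.0.0"
```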
rest_api_server.configMapAnnotations | object |
{}
|
Optionally specify annotations for the config map. Key-value pairs that can be used to attach metadata to the config map |
rest_api_server.configMapLabels | object |
{}
|
Optionally specify labels for the config map. Key-value pairs that can be used to categorize and select config maps |
rest_api_server.containerSecurityContext | object |
{
"allowPrivilegeEscalation": false,
"capabilities": {
"drop": [
"ALL"
]
},
"enabled": true,
"runAsNonRoot": true,
"seccompProfile": {
"type": "RuntimeDefault"
}
}
|
Optionally specify the security context for the container. Defines security-related settings for the container |
rest_api_server.containerSecurityContext.allowPrivilegeEscalation | bool |
false
|
Optionally specify if privilege escalation is allowed. If false, privilege escalation is not allowed |
rest_api_server.containerSecurityContext.capabilities | object |
{
"drop": [
"ALL"
]
}
|
Optionally specify the Linux capabilities |
rest_api_server.containerSecurityContext.capabilities.drop | list |
[
"ALL"
]
|
Optionally specify the Linux capabilities to drop for the container |
rest_api_server.containerSecurityContext.enabled | bool |
true
|
Optionally specify if the security context is enabled. If true, security settings will be applied |
rest_api_server.containerSecurityContext.runAsNonRoot | bool |
true
|
Optionally specify if the container should run as a non-root user. If true, the container will not run as the root user |
rest_api_server.containerSecurityContext.seccompProfile | object |
{
"type": "RuntimeDefault"
}
|
Optionally specify the seccomp profile |
rest_api_server.containerSecurityContext.seccompProfile.type | string |
"RuntimeDefault"
|
Optionally specify the seccomp profile type to be used for the container |
rest_api_server.deploymentAnnotations | object |
{}
|
Optionally specify annotations for the deployment. Key-value pairs that can be used to attach metadata to the deployment |
rest_api_server.deploymentLabels | object |
{}
|
Optionally specify labels for the deployment. Key-value pairs that can be used to categorize and select deployments |
rest_api_server.hostAliases | object |
{}
|
Optionally specify host aliases for the pod. Defines additional hostnames and IP addresses to be added to the pod’s /etc/hosts file |
rest_api_server.image | string |
"(registry-link)/rest-api-app:latest"
|
Specify the container image for the deployment. This is the image that will be used to create the container |
rest_api_server.imagePullPolicy | string |
"IfNotPresent"
|
Optionally specify the image pull policy for the deployment. Determines when the container image should be pulled |
rest_api_server.lifecycle | object |
{}
|
Optionally specify lifecycle hooks for the container. Defines actions to be taken at specific points in the container lifecycle |
rest_api_server.livenessProbe | object |
{}
|
Optionally specify liveness probes for the container. Defines the probe to check if the container is alive |
rest_api_server.nodeSelector | object |
{}
|
Optionally specify node selectors for scheduling the pods. Defines node labels that the pods must match |
rest_api_server.podAnnotations | object |
{}
|
Optionally specify annotations for the pod. Key-value pairs that can be used to attach metadata to the pod |
rest_api_server.podLabels | object |
{}
|
Optionally specify labels for the pod. Key-value pairs that can be used to categorize and select pods |
rest_api_server.podSecurityContext | object |
{}
|
Optionally specify the pod security context. Defines security-related settings for the pod |
rest_api_server.port | int |
8000
|
Optionally specify the port that the container will listen on |
rest_api_server.priorityClassName | object |
{}
|
Optionally specify the priority class for the pod. Defines the priority of the pod relative to other pods |
rest_api_server.probes | object |
{
"liveness": {
"enabled": true,
"failureThreshold": 3,
"initialDelaySeconds": 20,
"periodSeconds": 20,
"successThreshold": 1,
"timeoutSeconds": 10
}
}
|
Optionally specify probe configurations for the container. Defines how the system checks if the container is alive |
rest_api_server.probes.liveness | object |
{
"enabled": true,
"failureThreshold": 3,
"initialDelaySeconds": 20,
"periodSeconds": 20,
"successThreshold": 1,
"timeoutSeconds": 10
}
|
Optionally specify liveness probe settings for the container |
rest_api_server.probes.liveness.enabled | bool |
true
|
Optionally enable or disable the liveness probe |
rest_api_server.probes.liveness.failureThreshold | int |
3
|
Optionally specify the failure threshold for the liveness probe |
rest_api_server.probes.liveness.initialDelaySeconds | int |
20
|
Optionally specify the initial delay before starting the liveness probe |
rest_api_server.probes.liveness.periodSeconds | int |
20
|
Optionally specify the period for performing the liveness probe |
rest_api_server.probes.liveness.successThreshold | int |
1
|
Optionally specify the success threshold for the liveness probe |
rest_api_server.probes.liveness.timeoutSeconds | int |
10
|
Optionally specify the timeout for the liveness probe |
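The liveness probe defaults above can be relaxed for slow-starting environments. A sketch that keeps the probe enabled but extends the initial delay and check interval (values chosen for illustration):

```yaml
# override-values.yaml
rest_api_server:
  probes:
    liveness:
      enabled: true
      initialDelaySeconds: 60   # default is 20
      periodSeconds: 30         # default is 20
      failureThreshold: 3
      timeoutSeconds: 10
```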
rest_api_server.readinessProbe | object |
{}
|
Optionally specify readiness probes for the container. Defines the probe to check if the container is ready to accept traffic |
rest_api_server.replicas | int |
1
|
Optionally specify the number of replicas for the deployment. Defines how many instances of the application will be run |
rest_api_server.resources | object |
{
"limits": {
"memory": "512Mi"
},
"requests": {
"memory": "256Mi"
}
}
|
Optionally specify the resource requests and limits for the container. Defines the amount of CPU and memory the container is guaranteed and allowed to use |
rest_api_server.resources.limits | object |
{
"memory": "512Mi"
}
|
Optionally specify the memory limits for the container. Defines the maximum amount of memory the container is allowed to use |
rest_api_server.resources.requests | object |
{
"memory": "256Mi"
}
|
Optionally specify the memory requests for the container. Defines the minimum amount of memory the container needs |
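To raise the memory allocation beyond the defaults (256Mi request, 512Mi limit), override the `resources` block. The sizes below are illustrative, not a recommendation:

```yaml
# override-values.yaml
rest_api_server:
  resources:
    requests:
      memory: "512Mi"   # minimum guaranteed to the container
    limits:
      memory: "1Gi"     # maximum the container may use
```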
rest_api_server.schedulerName | object |
{}
|
Optionally specify the scheduler name for the pod. Defines which scheduler will be used to schedule the pod |
rest_api_server.service | object |
{
"name": "rest-api-service",
"protocol": "TCP",
"type": "ClusterIP"
}
|
Optionally specify the service details for the deployment. Defines how the service will be exposed within the cluster |
rest_api_server.service.name | string |
"rest-api-service"
|
Optionally specify the name of the service |
rest_api_server.service.protocol | string |
"TCP"
|
Optionally specify the protocol for the service. "TCP" is the default protocol |
rest_api_server.service.type | string |
"ClusterIP"
|
Optionally specify the type of the service. "ClusterIP" is the default type that exposes the service on a cluster-internal IP |
rest_api_server.serviceAccountName | object |
{}
|
Optionally specify the name of the service account for the deployment. Defines which service account will be used by the pods |
rest_api_server.serviceAnnotations | object |
{}
|
Optionally specify annotations for the service. Key-value pairs that can be used to attach metadata to the service |
rest_api_server.serviceLabels | object |
{}
|
Optionally specify labels for the service. Key-value pairs that can be used to categorize and select services |
rest_api_server.startupProbe | object |
{}
|
Optionally specify startup probes for the container. Defines the probe to check if the container has started successfully |
rest_api_server.terminationGracePeriodSeconds | object |
{}
|
Optionally specify the termination grace period for the pod. Defines the time to wait before forcefully terminating the pod |
rest_api_server.tolerations | list |
[]
|
Optionally specify tolerations for scheduling the pods. Defines how the pods tolerate node taints |
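Node placement can be constrained by combining `nodeSelector` and `tolerations`. A sketch assuming a hypothetical node label `disktype: ssd` and a hypothetical `dedicated=conductor:NoSchedule` taint on the target nodes:

```yaml
# override-values.yaml
rest_api_server:
  nodeSelector:
    disktype: ssd            # hypothetical node label
  tolerations:
    - key: "dedicated"       # hypothetical taint key
      operator: "Equal"
      value: "conductor"
      effect: "NoSchedule"
```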
rest_api_server.topologySpreadConstraints | object |
{}
|
Optionally specify topology spread constraints for the pod. Defines constraints for spreading pods across nodes or other topological domains |
rest_api_server.updateStrategy | object |
{}
|
Optionally specify the update strategy for the deployment. Defines how updates to the deployment are applied |
rest_service | object | object |
Specify configuration for rest_service component |
rest_service.affinity | object |
{}
|
Optionally specify affinity rules for pod scheduling |
rest_service.annotations | object |
{}
|
Optionally specify additional annotations |
rest_service.bind_host | string |
"[::]"
|
Optionally specify the bind host address for the service. Change to 0.0.0.0 for IPv4-only Kubernetes clusters if the default fails. `"[::]"` binds to IPv6 if available (in the case of an IPv6 cluster) or falls back to binding to IPv4. IMPORTANT: IPv4-mapped IPv6 addresses might not work on all container base images or on misconfigured Kubernetes clusters and hosts. |
rest_service.clusterRole | object |
{
"annotations": {},
"labels": {}
}
|
Optionally specify annotations and labels for the cluster role |
rest_service.clusterRole.annotations | object |
{}
|
Optionally specify annotations to add to the cluster role |
rest_service.clusterRole.labels | object |
{}
|
Optionally specify labels to add to the cluster role |
rest_service.config | object |
{
"manager": {
"file_server_type": "s3",
"hostname": "cloudify-manager",
"private_ip": "localhost",
"prometheus_url": "http://prometheus-server:9090",
"public_ip": "localhost",
"s3_resources_bucket": "resources",
"s3_server_url": "",
"security": {
"admin_password": "admin",
"admin_username": "admin"
}
}
}
|
Optionally specify the configuration for the service manager |
rest_service.config.manager | object |
{
"file_server_type": "s3",
"hostname": "cloudify-manager",
"private_ip": "localhost",
"prometheus_url": "http://prometheus-server:9090",
"public_ip": "localhost",
"s3_resources_bucket": "resources",
"s3_server_url": "",
"security": {
"admin_password": "admin",
"admin_username": "admin"
}
}
|
Optionally specify the configuration for the service manager |
rest_service.config.manager.file_server_type | string |
"s3"
|
Optionally specify the type of file server to use (e.g., s3) |
rest_service.config.manager.hostname | string |
"cloudify-manager"
|
Optionally specify the hostname for the manager |
rest_service.config.manager.private_ip | string |
"localhost"
|
Optionally specify the private IP address for the manager |
rest_service.config.manager.prometheus_url | string |
"http://prometheus-server:9090"
|
Optionally specify the Prometheus URL for monitoring |
rest_service.config.manager.public_ip | string |
"localhost"
|
Optionally specify the public IP address for the manager |
rest_service.config.manager.s3_resources_bucket | string |
"resources"
|
Optionally specify the S3 resources bucket |
rest_service.config.manager.s3_server_url | string |
""
|
Optionally specify the S3 server URL. Ignored and auto-generated when using the built-in SeaweedFS |
rest_service.config.manager.security | object |
{
"admin_password": "admin",
"admin_username": "admin"
}
|
Optionally specify security settings, including admin username and password |
rest_service.config.manager.security.admin_password | string |
"admin"
|
Optionally specify the admin password for security |
rest_service.config.manager.security.admin_username | string |
"admin"
|
Optionally specify the admin username for security |
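The default admin credentials (`admin`/`admin`) should be changed before exposing the manager. A sketch of the override, with a placeholder password:

```yaml
# override-values.yaml
rest_service:
  config:
    manager:
      security:
        admin_username: "admin"
        admin_password: "a-strong-password"  # placeholder; set your own value
```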
rest_service.configMapAnnotations | object |
{}
|
Optionally specify annotations to add to the ConfigMap |
rest_service.configMapLabels | object |
{}
|
Optionally specify labels to add to the ConfigMap |
rest_service.configPath | string |
"/tmp/config.yaml"
|
Optionally specify the path to the configuration file |
rest_service.containerSecurityContext | object |
{
"allowPrivilegeEscalation": false,
"capabilities": {
"drop": [
"ALL"
]
},
"enabled": true,
"runAsGroup": 1000,
"runAsNonRoot": true,
"runAsUser": 1000,
"seccompProfile": {
"type": "RuntimeDefault"
}
}
|
Optionally specify security context settings for the container |
rest_service.containerSecurityContext.allowPrivilegeEscalation | bool |
false
|
Optionally specify if the container is allowed to gain additional privileges |
rest_service.containerSecurityContext.capabilities | object |
{
"drop": [
"ALL"
]
}
|
Optionally specify the Linux capabilities |
rest_service.containerSecurityContext.capabilities.drop | list |
[
"ALL"
]
|
Optionally specify the Linux capabilities to drop for the container |
rest_service.containerSecurityContext.enabled | bool |
true
|
Optionally specify whether to apply security settings to the container |
rest_service.containerSecurityContext.runAsGroup | int |
1000
|
Optionally specify the group ID the container should run as |
rest_service.containerSecurityContext.runAsNonRoot | bool |
true
|
Optionally specify if the container must run as a non-root user |
rest_service.containerSecurityContext.runAsUser | int |
1000
|
Optionally specify the user ID the container should run as |
rest_service.containerSecurityContext.seccompProfile | object |
{
"type": "RuntimeDefault"
}
|
Optionally specify the seccomp profile |
rest_service.containerSecurityContext.seccompProfile.type | string |
"RuntimeDefault"
|
Optionally specify the seccomp profile type to be used for the container |
rest_service.curl_image | string |
"alpine/curl"
|
Optionally specify the image to use for curl operations |
rest_service.deploymentAnnotations | object |
{}
|
Optionally specify annotations to add to the deployment |
rest_service.deploymentLabels | object |
{}
|
Optionally specify labels to add to the deployment |
rest_service.hostAliases | object |
{}
|
Optionally specify custom host-to-IP mappings for the pod |
rest_service.image | string |
"(registry-link)/cloudify-manager-restservice:latest"
|
Specify the Docker image for the container |
rest_service.imagePullPolicy | string |
"IfNotPresent"
|
Optionally specify when Kubernetes should pull the image (Always, IfNotPresent, or Never) |
rest_service.lifecycle | object |
{}
|
Optionally specify the lifecycle hooks for the container |
rest_service.livenessProbe | object |
{}
|
Optionally specify the liveness probe configuration |
rest_service.nodeSelector | object |
{}
|
Optionally specify node selection constraints for pod scheduling |
rest_service.podAnnotations | object |
{}
|
Optionally specify annotations to add to the pods |
rest_service.podLabels | object |
{}
|
Optionally specify labels to add to the pods |
rest_service.podSecurityContext | object |
{}
|
Optionally specify the security context for the pod |
rest_service.port | int |
8100
|
Optionally specify the port number for the container to listen on |
rest_service.priorityClassName | object |
{}
|
Optionally specify the priority class name for the pod |
rest_service.probes | object |
{
"liveness": {
"enabled": false,
"failureThreshold": 3,
"initialDelaySeconds": 20,
"periodSeconds": 20,
"successThreshold": 1,
"timeoutSeconds": 10
}
}
|
Optionally specify probe configurations for the container |
rest_service.probes.liveness | object |
{
"enabled": false,
"failureThreshold": 3,
"initialDelaySeconds": 20,
"periodSeconds": 20,
"successThreshold": 1,
"timeoutSeconds": 10
}
|
Optionally specify liveness probe settings for the container |
rest_service.probes.liveness.enabled | bool |
false
|
Optionally specify whether the liveness probe is enabled to check container health |
rest_service.probes.liveness.failureThreshold | int |
3
|
Optionally specify the number of failed liveness checks before restarting the container |
rest_service.probes.liveness.initialDelaySeconds | int |
20
|
Optionally specify the initial delay before the liveness probe starts |
rest_service.probes.liveness.periodSeconds | int |
20
|
Optionally specify the interval between liveness probe checks |
rest_service.probes.liveness.successThreshold | int |
1
|
Optionally specify the number of successful liveness checks before considering the container healthy |
rest_service.probes.liveness.timeoutSeconds | int |
10
|
Optionally specify the timeout for each liveness probe |
rest_service.readinessProbe | object |
{}
|
Optionally specify the readiness probe configuration |
rest_service.replicas | int |
1
|
Optionally specify the number of pod replicas to run |
rest_service.resources | object |
{}
|
Optionally specify resource requests and limits for the container |
rest_service.s3 | object |
{
"clientImage": "docker.io/amazon/aws-cli:2.15.52",
"credentials_secret_name": "seaweedfs-s3-auth",
"session_token_secret_name": ""
}
|
Optionally specify S3-related settings for storage and AWS CLI containers (used as init containers) |
rest_service.s3.clientImage | string |
"docker.io/amazon/aws-cli:2.15.52"
|
Optionally specify the client image for AWS CLI containers |
rest_service.s3.credentials_secret_name | string |
"seaweedfs-s3-auth"
|
Optionally specify the name of the secret containing S3 credentials |
rest_service.s3.session_token_secret_name | string |
""
|
Optionally specify the name of the secret containing the S3 session token |
rest_service.schedulerName | object |
{}
|
Optionally specify the scheduler to use for the pod |
rest_service.serviceAccount | string |
"restservice-sa"
|
Optionally specify the service account name to use |
rest_service.serviceAccountAnnotations | object |
{}
|
Optionally specify annotations to add to the service account |
rest_service.serviceAccountLabels | object |
{}
|
Optionally specify labels to add to the service account |
rest_service.serviceAccountName | object |
{}
|
Optionally specify the service account name to use |
rest_service.serviceAnnotations | object |
{}
|
Optionally specify annotations to add to the service |
rest_service.serviceLabels | object |
{}
|
Optionally specify labels to add to the service |
rest_service.startupProbe | object |
{}
|
Optionally specify the startup probe configuration |
rest_service.terminationGracePeriodSeconds | object |
{}
|
Optionally specify the grace period for pod termination |
rest_service.tolerations | list |
[]
|
Optionally specify tolerations for pod scheduling on tainted nodes |
rest_service.topologySpreadConstraints | object |
{}
|
Optionally specify topology spread constraints for pod distribution |
rest_service.type | string |
"ClusterIP"
|
Optionally specify the type of service (e.g., ClusterIP, NodePort, LoadBalancer) |
rest_service.updateStrategy | object |
{}
|
Optionally specify the update strategy for the deployment |
seaweedfs | object | object |
Parameter group for the SeaweedFS Helm chart. Details: https://github.com/bitnami/charts/tree/main/bitnami/seaweedfs |
service.port | int |
8080
|
Optionally specify the service port |
service.type | string |
"ClusterIP"
|
Optionally specify the service type (e.g., ClusterIP) |
serviceAccount | object | object |
Service account for Conductor |
serviceAccount.annotations | object |
{}
|
Specify annotations for service account. Evaluated as a template. Only used if `create` is `true` |
serviceAccount.create | bool |
true
|
Specify whether a service account should be created |
serviceAccount.name | string |
""
|
Name of the service account to use. If not set and create is true, a name is generated using the fullname template |
stage_backend | object | object |
Specify configuration for stage_backend component |
stage_backend.affinity | object |
{}
|
Optionally specify affinity rules for pod scheduling |
stage_backend.containerSecurityContext | object |
{
"allowPrivilegeEscalation": false,
"capabilities": {
"drop": [
"ALL"
]
},
"enabled": true,
"runAsNonRoot": true,
"seccompProfile": {
"type": "RuntimeDefault"
}
}
|
Optionally specify security context settings for the container |
stage_backend.containerSecurityContext.allowPrivilegeEscalation | bool |
false
|
Optionally specify if the container is allowed to gain additional privileges |
stage_backend.containerSecurityContext.capabilities | object |
{
"drop": [
"ALL"
]
}
|
Optionally specify the Linux capabilities |
stage_backend.containerSecurityContext.capabilities.drop | list |
[
"ALL"
]
|
Optionally specify the Linux capabilities to drop for the container |
stage_backend.containerSecurityContext.enabled | bool |
true
|
Optionally specify whether to apply security settings to the container |
stage_backend.containerSecurityContext.runAsNonRoot | bool |
true
|
Optionally specify if the container must run as a non-root user |
stage_backend.containerSecurityContext.seccompProfile | object |
{
"type": "RuntimeDefault"
}
|
Optionally specify the seccomp profile |
stage_backend.containerSecurityContext.seccompProfile.type | string |
"RuntimeDefault"
|
Optionally specify the seccomp profile type to be used for the container |
stage_backend.deploymentAnnotations | object |
{}
|
Optionally specify annotations to add to the deployment |
stage_backend.deploymentLabels | object |
{}
|
Optionally specify labels to add to the deployment |
stage_backend.hostAliases | object |
{}
|
Optionally specify custom host-to-IP mappings for the pod |
stage_backend.image | string |
"(registry-link)/cloudify-manager-stage-backend:latest"
|
Specify the Docker image for the container |
stage_backend.imagePullPolicy | string |
"IfNotPresent"
|
Optionally specify when Kubernetes should pull the image (Always, IfNotPresent, or Never) |
stage_backend.lifecycle | object |
{}
|
Optionally specify the lifecycle hooks for the container |
stage_backend.livenessProbe | object |
{}
|
Optionally specify the liveness probe configuration |
stage_backend.maps | object |
{
"accessToken": "",
"attribution": "",
"tilesUrlTemplate": ""
}
|
Optionally specify map configuration settings |
stage_backend.maps.accessToken | string |
""
|
Optionally specify the API key to be passed to the map tiles provider |
stage_backend.maps.attribution | string |
""
|
Optionally specify attribution data to be displayed on the map, including HTML if needed. Some providers require this; refer to https://leaflet-extras.github.io/leaflet-providers/preview/ |
stage_backend.maps.tilesUrlTemplate | string |
""
|
Optionally specify the map tiles provider URL template, in the format: 'https://tiles.stadiamaps.com/tiles/osm_bright/${z}/${x}/${y}.png' |
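The three map parameters are typically set together. A sketch using the Stadia Maps URL template format mentioned above, with a placeholder API key and illustrative attribution text (check your provider's required attribution at https://leaflet-extras.github.io/leaflet-providers/preview/):

```yaml
# override-values.yaml
stage_backend:
  maps:
    tilesUrlTemplate: "https://tiles.stadiamaps.com/tiles/osm_bright/${z}/${x}/${y}.png"
    accessToken: "YOUR_API_KEY"                  # placeholder
    attribution: "&copy; Stadia Maps, &copy; OpenStreetMap contributors"  # illustrative; verify with your provider
```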
stage_backend.nodeSelector | object |
{}
|
Optionally specify node selection constraints for pod scheduling |
stage_backend.podAnnotations | object |
{}
|
Optionally specify annotations to add to the pods |
stage_backend.podLabels | object |
{}
|
Optionally specify labels to add to the pods |
stage_backend.podSecurityContext | object |
{}
|
Optionally specify the security context for the pod |
stage_backend.port | int |
8088
|
Optionally specify the port number for the container to listen on |
stage_backend.priorityClassName | object |
{}
|
Optionally specify the priority class name for the pod |
stage_backend.probes | object |
{
"liveness": {
"enabled": true,
"failureThreshold": 3,
"initialDelaySeconds": 20,
"periodSeconds": 20,
"successThreshold": 1,
"timeoutSeconds": 10
}
}
|
Optionally specify probe configurations for the container |
stage_backend.probes.liveness | object |
{
"enabled": true,
"failureThreshold": 3,
"initialDelaySeconds": 20,
"periodSeconds": 20,
"successThreshold": 1,
"timeoutSeconds": 10
}
|
Optionally specify liveness probe settings for the container |
stage_backend.probes.liveness.enabled | bool |
true
|
Optionally specify whether the liveness probe is enabled to check container health |
stage_backend.probes.liveness.failureThreshold | int |
3
|
Optionally specify the number of failed liveness checks before restarting the container |
stage_backend.probes.liveness.initialDelaySeconds | int |
20
|
Optionally specify the initial delay before the liveness probe starts |
stage_backend.probes.liveness.periodSeconds | int |
20
|
Optionally specify the interval between liveness probe checks |
stage_backend.probes.liveness.successThreshold | int |
1
|
Optionally specify the number of successful liveness checks before considering the container healthy |
stage_backend.probes.liveness.timeoutSeconds | int |
10
|
Optionally specify the timeout for each liveness probe |
stage_backend.readinessProbe | object |
{}
|
Optionally specify the readiness probe configuration |
stage_backend.replicas | int |
1
|
Optionally specify the number of pod replicas to run |
stage_backend.resources | object |
{}
|
Optionally specify resource requests and limits for the container |
stage_backend.schedulerName | object |
{}
|
Optionally specify the scheduler to use for the pod |
stage_backend.serviceAccountName | object |
{}
|
Optionally specify the service account name to use |
stage_backend.serviceAnnotations | object |
{}
|
Optionally specify annotations to add to the service |
stage_backend.serviceLabels | object |
{}
|
Optionally specify labels to add to the service |
stage_backend.startupProbe | object |
{}
|
Optionally specify the startup probe configuration |
stage_backend.terminationGracePeriodSeconds | object |
{}
|
Optionally specify the grace period for pod termination |
stage_backend.tolerations | list |
[]
|
Optionally specify tolerations for pod scheduling on tainted nodes |
stage_backend.topologySpreadConstraints | object |
{}
|
Optionally specify topology spread constraints for pod distribution |
stage_backend.type | string |
"ClusterIP"
|
Optionally specify the service type for the Kubernetes service |
stage_backend.updateStrategy | object |
{}
|
Optionally specify the update strategy for the deployment |
stage_frontend | object | object |
Specify configuration for stage_frontend component |
stage_frontend.affinity | object |
{}
|
Optionally specify affinity rules for pod scheduling |
stage_frontend.containerSecurityContext | object |
{
"allowPrivilegeEscalation": false,
"capabilities": {
"drop": [
"ALL"
]
},
"enabled": true,
"readOnlyRootFilesystem": true,
"runAsNonRoot": true,
"seccompProfile": {
"type": "RuntimeDefault"
}
}
|
Optionally specify security context settings for the container |
stage_frontend.containerSecurityContext.allowPrivilegeEscalation | bool |
false
|
Optionally specify if the container is allowed to gain additional privileges |
stage_frontend.containerSecurityContext.capabilities | object |
{
"drop": [
"ALL"
]
}
|
Optionally specify the Linux capabilities |
stage_frontend.containerSecurityContext.capabilities.drop | list |
[
"ALL"
]
|
Optionally specify the Linux capabilities to drop for the container |
stage_frontend.containerSecurityContext.enabled | bool |
true
|
Optionally specify whether to apply security settings to the container |
stage_frontend.containerSecurityContext.readOnlyRootFilesystem | bool |
true
|
Optionally specify if the root filesystem should be mounted as read-only |
stage_frontend.containerSecurityContext.runAsNonRoot | bool |
true
|
Optionally specify if the container must run as a non-root user |
stage_frontend.containerSecurityContext.seccompProfile | object |
{
"type": "RuntimeDefault"
}
|
Optionally specify the seccomp profile |
stage_frontend.containerSecurityContext.seccompProfile.type | string |
"RuntimeDefault"
|
Optionally specify the seccomp profile type to be used for the container |
stage_frontend.deploymentAnnotations | object |
{}
|
Optionally specify annotations to add to the deployment |
stage_frontend.deploymentLabels | object |
{}
|
Optionally specify labels to add to the deployment |
stage_frontend.hostAliases | object |
{}
|
Optionally specify custom host-to-IP mappings for the pod |
stage_frontend.image | string |
"(registry-link)/cloudify-manager-stage-frontend:latest"
|
Specify the Docker image for the container |
stage_frontend.imagePullPolicy | string |
"IfNotPresent"
|
Optionally specify when Kubernetes should pull the image (Always, IfNotPresent, or Never) |
stage_frontend.lifecycle | object |
{}
|
Optionally specify the lifecycle hooks for the container |
stage_frontend.livenessProbe | object |
{}
|
Optionally specify the liveness probe configuration |
stage_frontend.nodeSelector | object |
{}
|
Optionally specify node selection constraints for pod scheduling |
stage_frontend.podAnnotations | object |
{}
|
Optionally specify annotations to add to the pods |
stage_frontend.podLabels | object |
{}
|
Optionally specify labels to add to the pods |
stage_frontend.podSecurityContext | object |
{}
|
Optionally specify the security context for the pod |
stage_frontend.port | int |
8188
|
Optionally specify the port number for the container to listen on |
stage_frontend.priorityClassName | object |
{}
|
Optionally specify the priority class name for the pod |
stage_frontend.probes | object |
{
"liveness": {
"enabled": true,
"failureThreshold": 3,
"initialDelaySeconds": 20,
"periodSeconds": 20,
"successThreshold": 1,
"timeoutSeconds": 10
}
}
|
Optionally specify probe configurations for the container |
stage_frontend.probes.liveness | object |
{
"enabled": true,
"failureThreshold": 3,
"initialDelaySeconds": 20,
"periodSeconds": 20,
"successThreshold": 1,
"timeoutSeconds": 10
}
|
Optionally specify liveness probe settings for the container |
stage_frontend.probes.liveness.enabled | bool |
true
|
Optionally specify whether the liveness probe is enabled to check container health |
stage_frontend.probes.liveness.failureThreshold | int |
3
|
Optionally specify the number of failed liveness checks before restarting the container |
stage_frontend.probes.liveness.initialDelaySeconds | int |
20
|
Optionally specify the initial delay before the liveness probe starts |
stage_frontend.probes.liveness.periodSeconds | int |
20
|
Optionally specify the interval between liveness probe checks |
stage_frontend.probes.liveness.successThreshold | int |
1
|
Optionally specify the number of successful liveness checks before considering the container healthy |
stage_frontend.probes.liveness.timeoutSeconds | int |
10
|
Optionally specify the timeout for each liveness probe |
stage_frontend.readinessProbe | object |
{}
|
Optionally specify the readiness probe configuration |
stage_frontend.replicas | int |
1
|
Optionally specify the number of pod replicas to run |
stage_frontend.resources | object |
{}
|
Optionally specify resource requests and limits for the container |
stage_frontend.schedulerName | object |
{}
|
Optionally specify the scheduler to use for the pod |
stage_frontend.serviceAccountName | object |
{}
|
Optionally specify the service account name to use |
stage_frontend.serviceAnnotations | object |
{}
|
Optionally specify annotations to add to the service |
stage_frontend.serviceLabels | object |
{}
|
Optionally specify labels to add to the service |
stage_frontend.startupProbe | object |
{}
|
Optionally specify the startup probe configuration |
stage_frontend.terminationGracePeriodSeconds | object |
{}
|
Optionally specify the grace period for pod termination |
stage_frontend.tolerations | list |
[]
|
Optionally specify tolerations for pod scheduling on tainted nodes |
stage_frontend.topologySpreadConstraints | object |
{}
|
Optionally specify topology spread constraints for pod distribution |
stage_frontend.type | string |
"ClusterIP"
|
Optionally specify the service type for the Kubernetes service |
stage_frontend.updateStrategy | object |
{}
|
Optionally specify the update strategy for the deployment |
system_inventory_manager | object | object |
Specify configuration for system_inventory_manager component |
system_inventory_manager.affinity | object |
{}
|
Optionally specify the affinity rules for scheduling the pods. Defines rules for pod placement based on node characteristics |
system_inventory_manager.config | object |
{
"rediscover_interval_seconds": 14400,
"resync_interval_seconds": 14400
}
|
Optionally specify extra configuration settings for the system inventory manager. |
system_inventory_manager.config.rediscover_interval_seconds | int |
14400
|
Optionally specify the rediscover interval in seconds. |
system_inventory_manager.config.resync_interval_seconds | int |
14400
|
Optionally specify the resync interval in seconds. |
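Both intervals default to 14400 seconds (4 hours). A sketch that shortens them to 1 hour for more frequent discovery and resync (the value is illustrative):

```yaml
# override-values.yaml
system_inventory_manager:
  config:
    rediscover_interval_seconds: 3600  # default 14400 (4 hours)
    resync_interval_seconds: 3600      # default 14400 (4 hours)
```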
system_inventory_manager.containerSecurityContext | object |
{
"allowPrivilegeEscalation": false,
"capabilities": {
"drop": [
"ALL"
]
},
"enabled": true,
"runAsNonRoot": true,
"seccompProfile": {
"type": "RuntimeDefault"
}
}
|
Optionally specify the security context for the container. Defines security-related settings for the container |
system_inventory_manager.containerSecurityContext.allowPrivilegeEscalation | bool |
false
|
Optionally specify if privilege escalation is allowed. If false, privilege escalation is not allowed |
system_inventory_manager.containerSecurityContext.capabilities | object |
{
"drop": [
"ALL"
]
}
|
Optionally specify the Linux capabilities |
system_inventory_manager.containerSecurityContext.capabilities.drop | list |
[
"ALL"
]
|
Optionally specify the Linux capabilities to drop for the container |
system_inventory_manager.containerSecurityContext.enabled | bool |
true
|
Optionally specify if the security context is enabled. If true, security settings will be applied |
system_inventory_manager.containerSecurityContext.runAsNonRoot | bool |
true
|
Optionally specify if the container should run as a non-root user. If true, the container will not run as the root user |
system_inventory_manager.containerSecurityContext.seccompProfile | object |
{
"type": "RuntimeDefault"
}
|
Optionally specify the seccomp profile |
system_inventory_manager.containerSecurityContext.seccompProfile.type | string |
"RuntimeDefault"
|
Optionally specify the seccomp profile type to be used for the container |
system_inventory_manager.deploymentAnnotations | object |
{}
|
Optionally specify annotations for the deployment. Key-value pairs that can be used to attach metadata to the deployment |
system_inventory_manager.deploymentLabels | object |
{}
|
Optionally specify labels for the deployment. Key-value pairs that can be used to categorize and select deployments |
system_inventory_manager.hostAliases | object |
{}
|
Optionally specify host aliases for the pod. Defines additional hostnames and IP addresses to be added to the pod’s /etc/hosts file |
system_inventory_manager.image | string |
"(registry-link)/system-inventory-manager:latest"
|
Specify the container image for the deployment. This is the image that will be used to create the container |
system_inventory_manager.imagePullPolicy | string |
"IfNotPresent"
|
Optionally specify the image pull policy for the deployment. Determines when the container image should be pulled |
system_inventory_manager.lifecycle | object |
{}
|
Optionally specify lifecycle hooks for the container. Defines actions to be taken at specific points in the container lifecycle |
system_inventory_manager.livenessProbe | object |
{}
|
Optionally specify liveness probes for the container. Defines the probe to check if the container is alive |
system_inventory_manager.nodeSelector | object |
{}
|
Optionally specify node selectors for scheduling the pods. Defines node labels that the pods must match |
system_inventory_manager.podAnnotations | object |
{}
|
Optionally specify annotations for the pod. Key-value pairs that can be used to attach metadata to the pod |
system_inventory_manager.podLabels | object |
{}
|
Optionally specify labels for the pod. Key-value pairs that can be used to categorize and select pods |
system_inventory_manager.podSecurityContext | object |
{}
|
Optionally specify the pod security context. Defines security-related settings for the pod |
system_inventory_manager.port | int |
8080
|
Optionally specify the port that the container will listen on |
system_inventory_manager.priorityClassName | object |
{}
|
Optionally specify the priority class for the pod. Defines the priority of the pod relative to other pods |
system_inventory_manager.readinessProbe | object |
{}
|
Optionally specify readiness probes for the container. Defines the probe to check if the container is ready to accept traffic |
system_inventory_manager.replicas | int |
1
|
Optionally specify the number of replicas for the deployment. Defines how many instances of the application will be run |
system_inventory_manager.resources | object |
{
"limits": {
"memory": "512Mi"
},
"requests": {
"memory": "256Mi"
}
}
|
Optionally specify the resource requests and limits for the container. Defines the amount of CPU and memory the container is guaranteed and allowed to use |
system_inventory_manager.resources.limits | object |
{
"memory": "512Mi"
}
|
Optionally specify the memory limits for the container. Defines the maximum amount of memory the container is allowed to use |
system_inventory_manager.resources.requests | object |
{
"memory": "256Mi"
}
|
Optionally specify the memory requests for the container. Defines the minimum amount of memory the container needs |
system_inventory_manager.schedulerName | object |
{}
|
Optionally specify the scheduler name for the pod. Defines which scheduler will be used to schedule the pod |
system_inventory_manager.serviceAccountName | object |
{}
|
Optionally specify the name of the service account for the deployment. Defines which service account will be used by the pods |
system_inventory_manager.startupProbe | object |
{}
|
Optionally specify startup probes for the container. Defines the probe to check if the container has started successfully |
system_inventory_manager.terminationGracePeriodSeconds | object |
{}
|
Optionally specify the termination grace period for the pod. Defines the time to wait before forcefully terminating the pod |
system_inventory_manager.tolerations | list |
[]
|
Optionally specify tolerations for scheduling the pods. Defines how the pods tolerate node taints |
system_inventory_manager.topologySpreadConstraints | object |
{}
|
Optionally specify topology spread constraints for the pod. Defines constraints for spreading pods across nodes or other topological domains |
system_inventory_manager.updateStrategy | object |
{}
|
Optionally specify the update strategy for the deployment. Defines how updates to the deployment are applied |
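For reference, the `system_inventory_manager.config` intervals and the memory sizing can be tuned together in `override-values.yaml`. The values below are illustrative only, not recommendations (the chart defaults are 14400 seconds and 256Mi/512Mi):

```yaml
# override-values.yaml (fragment): illustrative values, not recommendations
system_inventory_manager:
  config:
    rediscover_interval_seconds: 7200   # default: 14400 (4 hours)
    resync_interval_seconds: 7200       # default: 14400 (4 hours)
  resources:
    requests:
      memory: "512Mi"   # default: 256Mi
    limits:
      memory: "1Gi"     # default: 512Mi
```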
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| test_connection | object | `{"image":"busybox"}` | Specify configuration for the test_connection component. It is used only for testing purposes |
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| upgrade_group_manager | object | object | Specify configuration for the upgrade_group_manager component |
| upgrade_group_manager.affinity | object | `{}` | Optionally specify the affinity rules for scheduling the pods. Defines rules for pod placement based on node characteristics |
| upgrade_group_manager.containerSecurityContext | object | `{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"enabled":true,"runAsNonRoot":true,"seccompProfile":{"type":"RuntimeDefault"}}` | Optionally specify the security context for the container. Defines security-related settings for the container |
| upgrade_group_manager.containerSecurityContext.allowPrivilegeEscalation | bool | `false` | Optionally specify if privilege escalation is allowed. If false, privilege escalation is not allowed |
| upgrade_group_manager.containerSecurityContext.capabilities | object | `{"drop":["ALL"]}` | Optionally specify the Linux capabilities |
| upgrade_group_manager.containerSecurityContext.capabilities.drop | list | `["ALL"]` | Optionally specify the Linux capabilities to drop for the container |
| upgrade_group_manager.containerSecurityContext.enabled | bool | `true` | Optionally specify if the security context is enabled. If true, security settings will be applied |
| upgrade_group_manager.containerSecurityContext.runAsNonRoot | bool | `true` | Optionally specify if the container should run as a non-root user. If true, the container will not run as the root user |
| upgrade_group_manager.containerSecurityContext.seccompProfile | object | `{"type":"RuntimeDefault"}` | Optionally specify the seccomp profile |
| upgrade_group_manager.containerSecurityContext.seccompProfile.type | string | `"RuntimeDefault"` | Optionally specify the seccomp profile type to be used for the container |
| upgrade_group_manager.deploymentAnnotations | object | `{}` | Optionally specify annotations for the deployment. Key-value pairs that can be used to attach metadata to the deployment |
| upgrade_group_manager.deploymentLabels | object | `{}` | Optionally specify labels for the deployment. Key-value pairs that can be used to categorize and select deployments |
| upgrade_group_manager.hostAliases | object | `{}` | Optionally specify host aliases for the pod. Defines additional hostnames and IP addresses to be added to the pod's /etc/hosts file |
| upgrade_group_manager.image | string | `"(registry-link)/upgrade-group-manager:latest"` | Specify the container image for the deployment. This is the image that will be used to create the container |
| upgrade_group_manager.imagePullPolicy | string | `"IfNotPresent"` | Optionally specify the image pull policy for the deployment. Determines when the container image should be pulled |
| upgrade_group_manager.lifecycle | object | `{}` | Optionally specify lifecycle hooks for the container. Defines actions to be taken at specific points in the container lifecycle |
| upgrade_group_manager.livenessProbe | object | `{}` | Optionally specify liveness probes for the container. Defines the probe to check if the container is alive |
| upgrade_group_manager.nodeSelector | object | `{}` | Optionally specify node selectors for scheduling the pods. Defines node labels that the pods must match |
| upgrade_group_manager.podAnnotations | object | `{}` | Optionally specify annotations for the pod. Key-value pairs that can be used to attach metadata to the pod |
| upgrade_group_manager.podLabels | object | `{}` | Optionally specify labels for the pod. Key-value pairs that can be used to categorize and select pods |
| upgrade_group_manager.podSecurityContext | object | `{}` | Optionally specify the pod security context. Defines security-related settings for the pod |
| upgrade_group_manager.port | int | `8080` | Optionally specify the port that the container will listen on |
| upgrade_group_manager.priorityClassName | object | `{}` | Optionally specify the priority class for the pod. Defines the priority of the pod relative to other pods |
| upgrade_group_manager.readinessProbe | object | `{}` | Optionally specify readiness probes for the container. Defines the probe to check if the container is ready to accept traffic |
| upgrade_group_manager.replicas | int | `1` | Optionally specify the number of replicas for the deployment. Defines how many instances of the application will be run |
| upgrade_group_manager.resources | object | `{"limits":{"memory":"512Mi"},"requests":{"memory":"256Mi"}}` | Optionally specify the resource requests and limits for the container. Defines the amount of CPU and memory the container is guaranteed and allowed to use |
| upgrade_group_manager.resources.limits | object | `{"memory":"512Mi"}` | Optionally specify the memory limits for the container. Defines the maximum amount of memory the container is allowed to use |
| upgrade_group_manager.resources.requests | object | `{"memory":"256Mi"}` | Optionally specify the memory requests for the container. Defines the minimum amount of memory the container needs |
| upgrade_group_manager.schedulerName | object | `{}` | Optionally specify the scheduler name for the pod. Defines which scheduler will be used to schedule the pod |
| upgrade_group_manager.serviceAccountName | object | `{}` | Optionally specify the name of the service account for the deployment. Defines which service account will be used by the pods |
| upgrade_group_manager.startupProbe | object | `{}` | Optionally specify startup probes for the container. Defines the probe to check if the container has started successfully |
| upgrade_group_manager.terminationGracePeriodSeconds | object | `{}` | Optionally specify the termination grace period for the pod. Defines the time to wait before forcefully terminating the pod |
| upgrade_group_manager.tolerations | list | `[]` | Optionally specify tolerations for scheduling the pods. Defines how the pods tolerate node taints |
| upgrade_group_manager.topologySpreadConstraints | object | `{}` | Optionally specify topology spread constraints for the pod. Defines constraints for spreading pods across nodes or other topological domains |
| upgrade_group_manager.updateStrategy | object | `{}` | Optionally specify the update strategy for the deployment. Defines how updates to the deployment are applied |
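As a sketch of the scheduling-related fields (`nodeSelector`, `tolerations`), the fragment below pins `upgrade_group_manager` pods to Linux nodes and tolerates a control-plane taint. The taint key shown is a common Kubernetes convention but is an assumption here; adjust it to the taints actually present in your cluster:

```yaml
# override-values.yaml (fragment): scheduling sketch; taint key is an assumption
upgrade_group_manager:
  nodeSelector:
    kubernetes.io/os: linux
  tolerations:
    - key: "node-role.kubernetes.io/control-plane"  # adjust to your cluster's taints
      operator: "Exists"
      effect: "NoSchedule"
```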
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| upgrade_policy_manager | object | object | Specify configuration for the upgrade_policy_manager component |
| upgrade_policy_manager.affinity | object | `{}` | Optionally specify the affinity rules for scheduling the pods. Defines rules for pod placement based on node characteristics |
| upgrade_policy_manager.containerSecurityContext | object | `{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"enabled":true,"runAsNonRoot":true,"seccompProfile":{"type":"RuntimeDefault"}}` | Optionally specify the security context for the container. Defines security-related settings for the container |
| upgrade_policy_manager.containerSecurityContext.allowPrivilegeEscalation | bool | `false` | Optionally specify if privilege escalation is allowed. If false, privilege escalation is not allowed |
| upgrade_policy_manager.containerSecurityContext.capabilities | object | `{"drop":["ALL"]}` | Optionally specify the Linux capabilities |
| upgrade_policy_manager.containerSecurityContext.capabilities.drop | list | `["ALL"]` | Optionally specify the Linux capabilities to drop for the container |
| upgrade_policy_manager.containerSecurityContext.enabled | bool | `true` | Optionally specify if the security context is enabled. If true, security settings will be applied |
| upgrade_policy_manager.containerSecurityContext.runAsNonRoot | bool | `true` | Optionally specify if the container should run as a non-root user. If true, the container will not run as the root user |
| upgrade_policy_manager.containerSecurityContext.seccompProfile | object | `{"type":"RuntimeDefault"}` | Optionally specify the seccomp profile |
| upgrade_policy_manager.containerSecurityContext.seccompProfile.type | string | `"RuntimeDefault"` | Optionally specify the seccomp profile type to be used for the container |
| upgrade_policy_manager.deploymentAnnotations | object | `{}` | Optionally specify annotations for the deployment. Key-value pairs that can be used to attach metadata to the deployment |
| upgrade_policy_manager.deploymentLabels | object | `{}` | Optionally specify labels for the deployment. Key-value pairs that can be used to categorize and select deployments |
| upgrade_policy_manager.hostAliases | object | `{}` | Optionally specify host aliases for the pod. Defines additional hostnames and IP addresses to be added to the pod's /etc/hosts file |
| upgrade_policy_manager.image | string | `"(registry-link)/upgrade-policy-manager:latest"` | Specify the container image for the deployment. This is the image that will be used to create the container |
| upgrade_policy_manager.imagePullPolicy | string | `"IfNotPresent"` | Optionally specify the image pull policy for the deployment. Determines when the container image should be pulled |
| upgrade_policy_manager.lifecycle | object | `{}` | Optionally specify lifecycle hooks for the container. Defines actions to be taken at specific points in the container lifecycle |
| upgrade_policy_manager.livenessProbe | object | `{}` | Optionally specify liveness probes for the container. Defines the probe to check if the container is alive |
| upgrade_policy_manager.nodeSelector | object | `{}` | Optionally specify node selectors for scheduling the pods. Defines node labels that the pods must match |
| upgrade_policy_manager.podAnnotations | object | `{}` | Optionally specify annotations for the pod. Key-value pairs that can be used to attach metadata to the pod |
| upgrade_policy_manager.podLabels | object | `{}` | Optionally specify labels for the pod. Key-value pairs that can be used to categorize and select pods |
| upgrade_policy_manager.podSecurityContext | object | `{}` | Optionally specify the pod security context. Defines security-related settings for the pod |
| upgrade_policy_manager.port | int | `8080` | Optionally specify the port that the container will listen on |
| upgrade_policy_manager.priorityClassName | object | `{}` | Optionally specify the priority class for the pod. Defines the priority of the pod relative to other pods |
| upgrade_policy_manager.readinessProbe | object | `{}` | Optionally specify readiness probes for the container. Defines the probe to check if the container is ready to accept traffic |
| upgrade_policy_manager.replicas | int | `1` | Optionally specify the number of replicas for the deployment. Defines how many instances of the application will be run |
| upgrade_policy_manager.resources | object | `{"limits":{"memory":"512Mi"},"requests":{"memory":"256Mi"}}` | Optionally specify the resource requests and limits for the container. Defines the amount of CPU and memory the container is guaranteed and allowed to use |
| upgrade_policy_manager.resources.limits | object | `{"memory":"512Mi"}` | Optionally specify the memory limits for the container. Defines the maximum amount of memory the container is allowed to use |
| upgrade_policy_manager.resources.requests | object | `{"memory":"256Mi"}` | Optionally specify the memory requests for the container. Defines the minimum amount of memory the container needs |
| upgrade_policy_manager.schedulerName | object | `{}` | Optionally specify the scheduler name for the pod. Defines which scheduler will be used to schedule the pod |
| upgrade_policy_manager.serviceAccountName | object | `{}` | Optionally specify the name of the service account for the deployment. Defines which service account will be used by the pods |
| upgrade_policy_manager.startupProbe | object | `{}` | Optionally specify startup probes for the container. Defines the probe to check if the container has started successfully |
| upgrade_policy_manager.terminationGracePeriodSeconds | object | `{}` | Optionally specify the termination grace period for the pod. Defines the time to wait before forcefully terminating the pod |
| upgrade_policy_manager.tolerations | list | `[]` | Optionally specify tolerations for scheduling the pods. Defines how the pods tolerate node taints |
| upgrade_policy_manager.topologySpreadConstraints | object | `{}` | Optionally specify topology spread constraints for the pod. Defines constraints for spreading pods across nodes or other topological domains |
| upgrade_policy_manager.updateStrategy | object | `{}` | Optionally specify the update strategy for the deployment. Defines how updates to the deployment are applied |
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| wrc_endpoint_secret | object | object | Specify configuration for the wrc_endpoint_secret Secret |
| wrc_endpoint_secret.data | object | `{"password":"admin","username":"admin"}` | Optionally specify the data for the Kubernetes Secret. This must match the `rest_service.config.manager.security.admin_username` and `rest_service.config.manager.security.admin_password` values (or their aliases) |
| wrc_endpoint_secret.data.password | string | `"admin"` | Optionally specify the password for the Kubernetes Secret. Must match `rest_service.config.manager.security.admin_password` |
| wrc_endpoint_secret.data.username | string | `"admin"` | Optionally specify the username for the Kubernetes Secret. Must match `rest_service.config.manager.security.admin_username` |
| wrc_endpoint_secret.labels | object | `{"app":"conductor"}` | Optionally specify the labels to add to the Kubernetes Secret. Labels are key-value pairs that can be used to organize and select resources |
| wrc_endpoint_secret.labels.app | string | `"conductor"` | Optionally specify a label to categorize the Secret. In this case, it categorizes the Secret under the "conductor" application |
| wrc_endpoint_secret.stringData | object | `{"apiVersion":"v3.1","tenant":"default_tenant","trustAll":true}` | Optionally specify additional string data for the Kubernetes Secret. This allows specifying data that is stored as string values |
| wrc_endpoint_secret.stringData.apiVersion | string | `"v3.1"` | Optionally specify the API version for the Kubernetes Secret. Defines the version of the API that the application will use |
| wrc_endpoint_secret.stringData.tenant | string | `"default_tenant"` | Optionally specify the tenant for the Kubernetes Secret. This can be used to define a default tenant or other configuration specific to the application |
| wrc_endpoint_secret.stringData.trustAll | bool | `true` | Optionally specify whether to trust all certificates. Useful for environments where certificate validation is not required |
| wrc_endpoint_secret.type | string | `"Opaque"` | Optionally specify the type of the Kubernetes Secret. "Opaque" is used for generic secrets |
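Because `wrc_endpoint_secret.data` must mirror the REST service admin credentials, it is safest to override both in the same `override-values.yaml` so they cannot drift apart. The password below is a placeholder, not a suggested value:

```yaml
# override-values.yaml (fragment): keep both credential pairs identical
rest_service:
  config:
    manager:
      security:
        admin_username: admin
        admin_password: "change-me-example"   # placeholder; set your own value
wrc_endpoint_secret:
  data:
    username: admin                  # must match admin_username above
    password: "change-me-example"    # must match admin_password above
```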
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| wrc_secret | object | object | Specify configuration for the wrc_secret component |
| wrc_secret.affinity | object | `{}` | Optionally specify the affinity rules for scheduling the pods. Defines rules for pod placement based on node characteristics |
| wrc_secret.containerSecurityContext | object | `{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"enabled":true,"runAsNonRoot":true,"seccompProfile":{"type":"RuntimeDefault"}}` | Optionally specify the security context for the container. Defines security-related settings for the container |
| wrc_secret.containerSecurityContext.allowPrivilegeEscalation | bool | `false` | Optionally specify if privilege escalation is allowed. If false, privilege escalation is not allowed |
| wrc_secret.containerSecurityContext.capabilities | object | `{"drop":["ALL"]}` | Optionally specify the Linux capabilities |
| wrc_secret.containerSecurityContext.capabilities.drop | list | `["ALL"]` | Optionally specify the Linux capabilities to drop for the container |
| wrc_secret.containerSecurityContext.enabled | bool | `true` | Optionally specify if the security context is enabled. If true, security settings will be applied |
| wrc_secret.containerSecurityContext.runAsNonRoot | bool | `true` | Optionally specify if the container should run as a non-root user. If true, the container will not run as the root user |
| wrc_secret.containerSecurityContext.seccompProfile | object | `{"type":"RuntimeDefault"}` | Optionally specify the seccomp profile |
| wrc_secret.containerSecurityContext.seccompProfile.type | string | `"RuntimeDefault"` | Optionally specify the seccomp profile type to be used for the container |
| wrc_secret.deploymentAnnotations | object | `{}` | Optionally specify annotations for the deployment. Key-value pairs that can be used to attach metadata to the deployment |
| wrc_secret.deploymentLabels | object | `{}` | Optionally specify labels for the deployment. Key-value pairs that can be used to categorize and select deployments |
| wrc_secret.hostAliases | object | `{}` | Optionally specify host aliases for the pod. Defines additional hostnames and IP addresses to be added to the pod's /etc/hosts file |
| wrc_secret.image | string | `"(registry-link)/wrc-secret-operator:latest"` | Specify the container image for the deployment. This is the image that will be used to create the container |
| wrc_secret.imagePullPolicy | string | `"IfNotPresent"` | Optionally specify the image pull policy for the deployment. Determines when the container image should be pulled |
| wrc_secret.lifecycle | object | `{}` | Optionally specify lifecycle hooks for the container. Defines actions to be taken at specific points in the container lifecycle |
| wrc_secret.livenessProbe | object | `{}` | Optionally specify liveness probes for the container. Defines the probe to check if the container is alive |
| wrc_secret.nodeSelector | object | `{}` | Optionally specify node selectors for scheduling the pods. Defines node labels that the pods must match |
| wrc_secret.podAnnotations | object | `{}` | Optionally specify annotations for the pod. Key-value pairs that can be used to attach metadata to the pod |
| wrc_secret.podLabels | object | `{}` | Optionally specify labels for the pod. Key-value pairs that can be used to categorize and select pods |
| wrc_secret.podSecurityContext | object | `{}` | Optionally specify the pod security context. Defines security-related settings for the pod |
| wrc_secret.port | int | `8080` | Optionally specify the port that the container will listen on |
| wrc_secret.priorityClassName | object | `{}` | Optionally specify the priority class for the pod. Defines the priority of the pod relative to other pods |
| wrc_secret.readinessProbe | object | `{}` | Optionally specify readiness probes for the container. Defines the probe to check if the container is ready to accept traffic |
| wrc_secret.replicas | int | `1` | Optionally specify the number of replicas for the deployment. Defines how many instances of the application will be run |
| wrc_secret.resources | object | `{"limits":{"memory":"512Mi"},"requests":{"memory":"256Mi"}}` | Optionally specify the resource requests and limits for the container. Defines the amount of CPU and memory the container is guaranteed and allowed to use |
| wrc_secret.resources.limits | object | `{"memory":"512Mi"}` | Optionally specify the memory limits for the container. Defines the maximum amount of memory the container is allowed to use |
| wrc_secret.resources.requests | object | `{"memory":"256Mi"}` | Optionally specify the memory requests for the container. Defines the minimum amount of memory the container needs |
| wrc_secret.schedulerName | object | `{}` | Optionally specify the scheduler name for the pod. Defines which scheduler will be used to schedule the pod |
| wrc_secret.serviceAccountName | object | `{}` | Optionally specify the name of the service account for the deployment. Defines which service account will be used by the pods |
| wrc_secret.startupProbe | object | `{}` | Optionally specify startup probes for the container. Defines the probe to check if the container has started successfully |
| wrc_secret.terminationGracePeriodSeconds | object | `{}` | Optionally specify the termination grace period for the pod. Defines the time to wait before forcefully terminating the pod |
| wrc_secret.tolerations | list | `[]` | Optionally specify tolerations for scheduling the pods. Defines how the pods tolerate node taints |
| wrc_secret.topologySpreadConstraints | object | `{}` | Optionally specify topology spread constraints for the pod. Defines constraints for spreading pods across nodes or other topological domains |
| wrc_secret.updateStrategy | object | `{}` | Optionally specify the update strategy for the deployment. Defines how updates to the deployment are applied |
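Since the chart defaults every component image to a `:latest` tag, production deployments may prefer to pin a fixed tag instead. The `24.9.0` tag below is only an assumed example (it matches the chart version), and `(registry-link)` stands for your licensed registry as elsewhere in these values:

```yaml
# override-values.yaml (fragment): pin an image tag instead of latest
wrc_secret:
  image: "(registry-link)/wrc-secret-operator:24.9.0"  # assumed example tag
  imagePullPolicy: IfNotPresent
```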