Required Permissions and Configurations
Overview
This section outlines the permissions and configurations required to operate Wind River Conductor (WRC) and to perform upgrades, particularly in environments where multiple WRC instances are installed on the same host.
Required Permissions
Conductor Access
WRC operations and upgrades can be managed through:
- Conductor UI (with widget)
- Conductor CLI
- kubectl
- REST API
UI Access
- The Conductor UI enables the management of users and groups for each tenant.
- No specific configuration is currently required or available at the cluster level to segregate users within the UI.
kubectl Access
- Kubernetes RBAC (Role-Based Access Control) can be used to restrict access by namespace.
- Cluster administrators should create RBAC rules to ensure teams operate within their designated namespaces without impacting other users.
- Reference: the Kubernetes RBAC documentation.
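As a minimal sketch, assuming a team namespace named team-a and a group team-a-devs defined by the cluster's authentication provider (both names are illustrative), an administrator could confine that team to its namespace with a Role and RoleBinding such as:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-edit
  namespace: team-a
rules:
  # Full access to common namespaced resources, but only inside team-a.
  - apiGroups: ["", "apps", "batch"]
    resources: ["*"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-edit-binding
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-devs
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-edit
  apiGroup: rbac.authorization.k8s.io
Applied with kubectl apply -f, these objects limit the group's kubectl access to resources inside team-a; cluster-scoped resources remain off limits unless granted separately.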
REST API Access
- Permissions for the REST API are managed by Kubernetes and Conductor.
CLI Access
- Use the CLI locally from the Python package, as described in Installing the Conductor CLI.
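As a hedged example, once the CLI is installed locally, a profile pointing at a Conductor manager is typically created along these lines (the host, credentials, and tenant are placeholders; check cfy profiles use --help for the options supported by your version):
cfy profiles use conductor.example.com -u admin -p <password> -t default_tenant
cfy status    # confirm the CLI can reach the manager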
Required Permissions for Upgrades
The upgrade process utilizes the following remote clients:
- kubectl
- helm
- cfy (Conductor CLI)
These tools must be installed and configured on the host where the upgrade is initiated.
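For example, a quick check from the upgrade host that all three clients are present and pointed at the right targets might look like:
kubectl version --client          # kubectl binary is installed
kubectl config current-context    # kubectl targets the intended cluster
helm version                      # helm binary is installed
cfy profiles list                 # a cfy profile exists for the target Conductor instance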
Role-Based Upgrade Permissions
- Only administrators are allowed to perform upgrades.
- A user without access to kubectl, cfy, and helm cannot perform an upgrade.
- Administrators must enforce security policies to prevent unauthorized access to exported Kubernetes configurations.
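A typical precaution is to keep exported kubeconfig files readable only by the administrator who owns them; for example (paths are illustrative):
chmod 600 /path/to/exported-kubeconfig    # restrict the exported configuration to its owner
chmod 600 ~/.kube/config                  # likewise for the default kubeconfig, if one is used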
Configuration for Multi-Conductor Environments
Namespace Isolation and Multi-Instance Deployment
- To prevent hostname conflicts, disabling Ingress and using NodePort is recommended when deploying multiple instances on the same host.
- When working with multiple instances exposed via NodePort, ensure the cfy profile is set with the correct port corresponding to each NodePort configuration (see the sketch below).
- Proper firewall and networking policies should be applied to allow communication between instances.
For more information on deploying multiple WRC instances, refer to the Configuring Multi-Tenancy sections.
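For the cfy profile point above, here is a sketch assuming two instances exposed on node ports 30080 and 30081 (illustrative values) and a cfy client that supports the --rest-port and --profile-name options (verify with cfy profiles use --help):
cfy profiles use <node-ip> --rest-port 30080 --profile-name wrc-instance-a
cfy profiles use <node-ip> --rest-port 30081 --profile-name wrc-instance-b
cfy profiles list    # lists the profiles and marks the active one
Switching the active profile before running CLI commands ensures each command targets the intended instance.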
NodePort and Network Configuration
- Each instance requires a unique NodePort configuration.
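To avoid collisions when choosing a port for a new instance, the node ports already allocated on the cluster can be listed first, for example:
kubectl get svc --all-namespaces | grep NodePort    # services currently holding node ports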
Upgrading WRC with Multiple Instances on the Same Host
- Only administrators should have access to remote clusters to perform upgrades.
- During upgrades, ensure that override-values.yaml is properly configured to match the target version and environment requirements.
- Validate that the existing global network policy controller-oam-if-gnp is properly configured to allow the necessary communications during the upgrade process.
- The operator can check the current configuration of the policy with:
kubectl get globalnetworkpolicies controller-oam-if-gnp -o yaml
- Instead of creating a new policy for port 80, the operator can edit the existing controller-oam-if-gnp policy to allow ingress on port 80 for a single instance (an illustrative rule fragment follows this list). For example:
kubectl edit globalnetworkpolicies controller-oam-if-gnp
- For multi-instance deployments using NodePort, no additional configuration is required, as the default Kubernetes NodePort range (30000-32767) is already allowed by iptables.
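The port 80 adjustment mentioned above is a small change to the policy's ingress rules. As a hedged sketch (not a drop-in replacement: the metadata, selectors, and existing rules shown by the kubectl get command above must be preserved), an additional ingress rule allowing TCP port 80 in a Calico GlobalNetworkPolicy generally has this shape:
# Illustrative fragment only: append a rule like this to the existing spec.ingress list.
ingress:
  - action: Allow
    protocol: TCP
    destination:
      ports:
        - 80
Use kubectl edit as shown above and add the rule alongside the existing entries rather than replacing them.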