Preparing your environment
Overview
Role focus: Site reliability engineer (SRE) / Infrastructure Engineer
Prerequisites
- Phase 1: Planning your architecture (Completed)
- Phase 2: Gathering your system requirements (Completed)
- Phase 3: Deploying your Kubernetes cluster (Completed & Running)
Outcomes
A Kubernetes cluster configured with the necessary secrets (TLS, authentication, storage, and others)
Any chosen advanced features configured (IDP, multi-location, KMS for TDE)
A finalized and validated `values.yaml` configuration file ready for Hybrid Manager (HM) installation
Note
EDB Support Context: Final preparation for Hybrid Manager is primarily the customer's responsibility (except for Sovereign Systems), as is the cluster's lifecycle operation. Professional Services can be engaged via a Statement of Work (SoW), and Support can offer assistance through knowledge base articles.
Next phase: Phase 5: Install Hybrid Manager
Phase 4 guide
Now that your Kubernetes cluster is running, you must prepare it for the HM platform.
This involves establishing administrative access through a remote management workstation, staging required artifacts and secrets into the cluster, and translating your architectural decisions into the final configuration file.
Preparing your environment, Phase 4, covers:

Creating necessary infrastructure secrets and implementing TLS:

- Creating the image pull and object storage secrets
- Creating other necessary secrets:
  - Creating GenAI Builder secrets - requires Fernet secret and NGC key secrets.
  - Creating the Catalog secret - requires a confounding key secret.
  - Customizing secrets for the Migration Portal - requires secrets for service accounts.

Configuring advanced features:

- Defining multi-location logic (SPIRE federation and object storage sync)
- Constructing complex YAML configurations for your identity provider

Completing environment preparation:

- Configuring the internal backup folder and securing the mandatory User-0 admin account

Finalizing the cluster configuration (`values.yaml`):

- Assembling all the relevant inputs above into the final Helm chart `values.yaml` file

Validating the configuration file and runtime:

- Validating the `values.yaml` file and runtime environment, so that they are ready for HM installation
Creating necessary secrets
Warning
The following are not optional configurations. These preparatory steps must be completed in order for your installation to succeed. This configuration does not require you to use the associated features; it only prepares a secure implementation in case it's needed.
Create Kubernetes secrets for any required credentials, such as object storage credentials, database access tokens, or any other sensitive information.
Create image pull secret
Warning
It is necessary to create an image pull secret for HM to install successfully.
Dependencies
- Kubernetes secret `edb-cred` (defaults to `edb-cred` in `values.yaml`)
Use `edbctl` to create the required namespaces and image pull secrets.
Create the necessary namespaces and pull secrets:
```shell
edbctl image-pull-secret create \
  --username <container registry username> \
  --password <container registry password> \
  --registry <local registry URI>
```
When prompted with `Proceed? [y/N]` and the current Kubernetes context, select `y`. You should see output similar to:
```
2025/02/10 10:10:10 Creating Kubernetes Namespaces and ImagePullSecrets with the provided credentials...
2025/02/07 15:29:08 Namespaces and ImagePullSecrets creation completed
```
Verify the secret creation:
```shell
edbctl image-pull-secret list
```
Output example:
```
Current Kubernetes context is: <your-KubeContext>
Namespace edbpgai-bootstrap: exists, all set!
Secret edb-cred: exists, all set!
Namespace upm-replicator: exists, all set!
Secret edb-cred: exists, all set!
```
Spot validation
Check that the `edb-cred` secret exists in the target namespace:

```shell
kubectl get secret edb-cred -n edbpgai-bootstrap
```

Check that the default service account references the `edb-cred` secret:

```shell
kubectl get serviceaccount default -n edbpgai-bootstrap -o yaml | grep edb-cred
```
Deploy a test pod (lasso) to validate image pull:
```shell
kubectl run lasso \
  --rm -it \
  --image=docker.enterprisedb.com/pgai-platform/lasso:latest \
  --restart=Never \
  -n edbpgai-bootstrap \
  --image-pull-policy=Always \
  -- bash
```
(Optional) Describe the pod to confirm imagePullSecrets are set:
```shell
kubectl describe pod lasso -n edbpgai-bootstrap | grep -i imagepull
```
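For reference, the `edb-cred` secret that `edbctl` creates follows Kubernetes' standard `dockerconfigjson` format. The sketch below (registry, username, and password are hypothetical placeholders; Python is used only to show the structure) builds such a manifest by hand:

```python
import base64
import json

def image_pull_secret(name, namespace, registry, username, password):
    """Build a kubernetes.io/dockerconfigjson Secret manifest as a dict."""
    # The "auth" field is the base64 of "username:password".
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    dockerconfig = {"auths": {registry: {"username": username,
                                         "password": password,
                                         "auth": auth}}}
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name, "namespace": namespace},
        "type": "kubernetes.io/dockerconfigjson",
        "data": {
            # The whole docker config is itself base64-encoded.
            ".dockerconfigjson": base64.b64encode(
                json.dumps(dockerconfig).encode()).decode()
        },
    }

# Placeholder credentials -- substitute your own registry values.
manifest = image_pull_secret("edb-cred", "edbpgai-bootstrap",
                             "docker.enterprisedb.com", "user", "token")
print(json.dumps(manifest))
```

Piping the printed JSON to `kubectl apply -f -` would create an equivalent secret, though using `edbctl` as shown above remains the supported path.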
Create object storage secret
Warning
It is necessary to create an object storage secret for HM to install successfully. Your object storage must be dedicated entirely to HM. Any pre-existing data in your object storage interferes with the successful operation of HM. Similarly, if HM is removed and reinstalled, the object storage must be emptied before beginning the new installation.
To satisfy the object storage requirement, you must create a secret named `edb-object-storage` in the `default` namespace.
Select the configuration matching your provider:
AWS IAM (EKS/ROSA)
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: edb-object-storage # the name cannot be changed
  namespace: default # the namespace cannot be changed
stringData:
  auth_type: workloadIdentity
  aws_region: <AWS_BUCKET_REGION>
  aws_role_arn: <PRIMARY_IDENTITY_ROLE_ARN>
  bucket_name: <AWS_BUCKET_NAME>
  secondary_role_arn: <SECONDARY_IDENTITY_ROLE_ARN>
  secondary_role_external_id: <SECONDARY_IDENTITY_EXTERNAL_ID>
```
AWS / Other K8s (Static Credentials)
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: edb-object-storage
  namespace: default
stringData:
  auth_type: credentials
  aws_region: <AWS_BUCKET_REGION>
  bucket_name: <AWS_BUCKET_NAME>
  aws_access_key_id: <AWS_ACCESS_KEY_ID>
  aws_secret_access_key: <AWS_SECRET_ACCESS_KEY>
  secondary_role_arn: <SECONDARY_IDENTITY_ROLE_ARN>
  secondary_role_external_id: <SECONDARY_IDENTITY_EXTERNAL_ID>
```
Azure Blob Storage
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: edb-object-storage
  namespace: default
stringData:
  provider: azure
  subscription_id: <AZURE_SUBSCRIPTION_ID>
  resource_group_name: <AZURE_RESOURCE_GROUP>
  storage_account_name: <AZURE_STORAGE_ACCOUNT>
  storage_account_container_name: <AZURE_STORAGE_CONTAINER>
  storage_account_key: <AZURE_STORAGE_KEY>
  region: <AZURE_REGION>
  client_id: <AZURE_CLIENT_ID>
  client_secret: <AZURE_CLIENT_SECRET>
  tenant_id: <AZURE_TENANT_ID>
```
GCP Object Storage
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: edb-object-storage
  namespace: default
stringData:
  provider: gcp
  location_id: <GCP_BUCKET_REGION>
  project_id: <GCP_PROJECT_ID>
  bucket_name: <GCP_BUCKET_NAME>
  credentials_json_base64: <GCP_CREDENTIAL_BASE64>
```
Other S3 Compatible Storage
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: edb-object-storage
  namespace: default
stringData:
  auth_type: credentials
  # Optional: Base64 CA bundle if not using a well-known CA
  aws_ca_bundle_base64: <CA_BUNDLE_BASE64>
  aws_endpoint_url_s3: <S3_ENDPOINT_URL>
  aws_access_key_id: <AWS_ACCESS_KEY_ID>
  aws_secret_access_key: <AWS_SECRET_ACCESS_KEY>
  bucket_name: <S3_BUCKET_NAME>
  aws_region: <S3_REGION>
  # Set to true if server-side encryption is disabled on the bucket
  server_side_encryption_disabled: <true|false>
```
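All of the provider variants above share the same shape: a Secret fixed to the name `edb-object-storage` in the `default` namespace, with provider-specific keys under `stringData`. A hedged sketch generating the S3-compatible variant (all values are placeholders; not the tooling EDB ships):

```python
import json

def object_storage_secret(**fields):
    """Build the edb-object-storage Secret manifest as a dict.

    The name and namespace are fixed by Hybrid Manager and cannot be changed.
    """
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": "edb-object-storage",  # fixed name
                     "namespace": "default"},       # fixed namespace
        "stringData": fields,
    }

# S3-compatible example with placeholder values.
manifest = object_storage_secret(
    auth_type="credentials",
    aws_endpoint_url_s3="https://s3.example.internal",
    aws_access_key_id="<AWS_ACCESS_KEY_ID>",
    aws_secret_access_key="<AWS_SECRET_ACCESS_KEY>",
    bucket_name="hm-bucket",
    aws_region="us-east-1",
)
print(json.dumps(manifest))  # pipe to `kubectl apply -f -`
```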
Creating other necessary secrets
The following configurations are scenario-dependent.
If using the `scenarios` parameter: These preparatory steps are mandatory only if your selected scenarios require them. If a required secret is missing for a chosen scenario, the installation will fail.

If NOT using the `scenarios` parameter: By default, all scenarios are enabled. In this case, all secrets are required to successfully complete the installation.
Creating GenAI Builder secrets
GenAI Builder allows you to build AI agents.
Note
Required for deployments with the `ai` installation scenario enabled. This scenario is included by default unless it is manually excluded via the `spec.scenarios` parameter in `values.yaml`.
One of the secrets you will create requires an NGC API key. Create one by following the NVIDIA NGC documentation to enable model image pulls.
You can create a Fernet key secret and NGC key secrets using the `edbctl` CLI.

For manual installations, run this command and follow the interactive prompts:

```shell
edbctl setup create-install-secrets --version <version> --scenario ai
```
If you are running the installation via a CI/CD pipeline, you must suppress interactive prompts. The method for achieving this depends on your `edbctl` version.

For versions earlier than 1.3.0:

1. Ensure the NGC API key you created above is available as a variable (`export EDB_AI_NVIDIA_NGC=<your-ngc-api-key>`).
2. Use the `--non-interactive` flag to suppress confirmation prompts:

```shell
edbctl setup create-install-secrets --version <version> --scenario ai --non-interactive
```
For version 1.3.0 and later, the `--non-interactive` flag was replaced by global configuration settings. You must disable both `interactive_mode` and `confirm_mode` for fully unattended execution.

1. Ensure the NGC API key you created above is available as a variable.
2. Configure `edbctl` for non-interactive behavior:

```shell
edbctl config set interactive_mode off
edbctl config set confirm_mode off
```

3. Run the setup command:

```shell
edbctl setup create-install-secrets --version <version> --scenario ai
```
This creates the Fernet key secret, as well as the `nvidia-nim-secrets` and `ngc-cred` secrets, in the `default` namespace with the appropriate replication annotations.

Note

Fernet is a symmetric encryption scheme implemented by Python's cryptography library. It provides symmetric encryption/decryption and is required to store secret data.
The HM administrator must keep the Fernet key safe and back it up. The secret name and namespace depend on the version of Hybrid Manager you are running.
For versions 2026.2 and earlier, the solution is based on Griptape. Use the following command to retrieve the secret:
```shell
kubectl get secret -n upm-griptape fernet-secret -o yaml
```
For versions 2026.3 and later, the solution shifted to LangFlow. The secret is now stored in the default namespace:
```shell
kubectl get secret langflow-secret -n default -o yaml
```
Store the key safely.
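A standard Fernet key is the URL-safe base64 encoding of exactly 32 random bytes. The sketch below (an illustration of that format, not the `edbctl` implementation) shows how you might sanity-check a backed-up copy:

```python
import base64
import os

def looks_like_fernet_key(key: str) -> bool:
    """A Fernet key is the URL-safe base64 encoding of exactly 32 bytes."""
    try:
        return len(base64.urlsafe_b64decode(key)) == 32
    except Exception:
        return False

# Generate a key of the same shape, for illustration only --
# back up and use the key edbctl created, not a locally generated one.
key = base64.urlsafe_b64encode(os.urandom(32)).decode()
print(looks_like_fernet_key(key))  # True
```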
Configure DataLake object storage for GenAI builder by creating a DataLake bucket in the object storage you're using for your Hybrid Manager deployment. GenAI Builder uses it to store structures, tools, and indexed data.
```shell
aws s3 mb s3://<your-datalake-bucket-name> --region <your-region>
```
```shell
gsutil mb -l <your-region> gs://<your-datalake-bucket-name>
```
Use your provider’s management console or CLI to create a bucket with a unique name for your DataLake bucket.
Capture the following information for your bucket. You will need it later when you first use the GenAI launchpad application. The console will prompt you for your DataLake bucket configuration, which requires:
- `DATA_LAKE_ROOT_BUCKET`: The name of the bucket for use with DataLake.
- `DATA_LAKE_S3_ACCESS_KEY`: The access key used to connect to the DataLake bucket.
- `DATA_LAKE_S3_SECRET_ACCESS_KEY`: The secret access key used to connect to the DataLake bucket.
- `DATA_LAKE_S3_ENDPOINT_URL`: The endpoint URL used to connect to the DataLake bucket.
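The four settings above can be captured in a small env-file snippet so they are at hand when the launchpad prompts for them (all values below are placeholders for your own bucket configuration):

```python
# Collect the DataLake settings for later use with GenAI launchpad.
# Every value here is a placeholder -- substitute your own.
settings = {
    "DATA_LAKE_ROOT_BUCKET": "my-datalake-bucket",
    "DATA_LAKE_S3_ACCESS_KEY": "<access-key>",
    "DATA_LAKE_S3_SECRET_ACCESS_KEY": "<secret-access-key>",
    "DATA_LAKE_S3_ENDPOINT_URL": "https://s3.<your-region>.amazonaws.com",
}

# Render as an env-file style snippet to keep with your install notes.
env_file = "\n".join(f"{key}={value}" for key, value in settings.items())
print(env_file)
```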
Update the bucket's settings to have the following CORS configuration:

```json
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["PUT", "POST", "DELETE", "GET", "HEAD"],
    "AllowedOrigins": ["https://${PORTAL_DOMAIN_NAME}"],
    "ExposeHeaders": []
  }
]
```

Where `https://${PORTAL_DOMAIN_NAME}` is the domain configured for your Hybrid Manager.

The S3 interoperability layer in GCS allows GenAI Builder to use GCS as an S3-compatible object store.
In the GCS console, under Settings, turn on S3 interoperability.
Update or create a service account with the Storage Admin and Service Account Token Creator roles.
Create an HMAC key pair for the service account.
Create a config file with a CORS configuration that points at the Hybrid Manager endpoint:
```shell
cat cors-config.json
[
  {
    "origin": ["https://${PORTAL_DOMAIN_NAME}"],
    "method": ["GET", "PUT", "POST", "DELETE", "HEAD"],
    "responseHeader": ["*"],
    "maxAgeSeconds": 3600
  }
]
```

Where `https://${PORTAL_DOMAIN_NAME}` is the domain configured for your Hybrid Manager.

Apply the CORS configuration to the previously created bucket:
```shell
gsutil cors set cors-config.json gs://<bucket name>
```
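Rather than hand-editing `cors-config.json`, the file contents can be rendered from your portal domain. A small sketch (the domain is a placeholder):

```python
import json

def gcs_cors_config(portal_domain: str) -> str:
    """Render the CORS JSON for the DataLake bucket, scoped to the HM portal."""
    cors = [{
        "origin": [f"https://{portal_domain}"],
        "method": ["GET", "PUT", "POST", "DELETE", "HEAD"],
        "responseHeader": ["*"],
        "maxAgeSeconds": 3600,
    }]
    return json.dumps(cors, indent=2)

# Placeholder domain; write the result to cors-config.json
# and apply it with `gsutil cors set`.
config = gcs_cors_config("hm.example.com")
print(config)
```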
Use your provider’s management console or CLI to configure cross-origin resource sharing (CORS) with Hybrid Manager.
Creating Catalog secret
The Catalog secret is required to create encrypted storage credentials and manage Lakehouse data.
Note
Required for deployments with the `analytics` installation scenario enabled. This scenario is included by default unless it is manually excluded via the `spec.scenarios` parameter in `values.yaml`.
You can create a confounding key secret with the `edbctl` CLI for environments with the `analytics` scenario enabled.

For manual installations, run this command and follow the interactive prompts:

```shell
edbctl setup create-install-secrets --version <version> --scenario analytics
```
If you are running the installation via a CI/CD pipeline, you must suppress interactive prompts. The method for achieving this depends on your `edbctl` version.

For versions earlier than 1.3.0, use the `--non-interactive` flag to suppress confirmation prompts:

```shell
edbctl setup create-install-secrets --version <version> --scenario analytics --non-interactive
```
For version 1.3.0 and later, the `--non-interactive` flag was replaced by global configuration settings. You must disable both `interactive_mode` and `confirm_mode` for fully unattended execution.

Configure `edbctl` for non-interactive behavior:

```shell
edbctl config set interactive_mode off
edbctl config set confirm_mode off
```

Run the setup command:

```shell
edbctl setup create-install-secrets --version <version> --scenario analytics
```
Note
- A confounding key is a randomized string that's at least 32 bytes long.
- Create a confounding key for each Hybrid Manager deployment.
The Hybrid Manager administrator must keep the confounding key safe and back it up.
Warning
The loss of the confounding key in a disaster scenario leads to a situation in which there's no mechanism for accessing the Lakehouse data managed by the Hybrid Manager data catalog. Instead, the administrator would have to create and store a new key, restart the `upm-lakekeeper/lakekeeper` workload, and carefully rebuild all of the existing data catalogs without deleting them. That procedure is very risky and would require support from the EDB PG AI Professional Services team.

Fetch the key:

```shell
kubectl get secrets -n upm-lakekeeper pg-confounding-key -o yaml
```
Store the key safely.
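`edbctl` generates the confounding key for you. Purely as an illustration of the stated requirement (a randomized string backed by at least 32 bytes of entropy), a key of the same shape can be produced like this:

```python
import secrets

def make_confounding_key(n_bytes: int = 32) -> str:
    """Random URL-safe string backed by at least 32 bytes of entropy."""
    if n_bytes < 32:
        raise ValueError("confounding key must be at least 32 bytes")
    return secrets.token_urlsafe(n_bytes)

key = make_confounding_key()
print(len(key))  # 43: token_urlsafe renders 32 bytes as 43 unpadded chars
```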
Customizing Migration Portal secrets
Note
Required for deployments with the `migration` installation scenario enabled. This scenario is included by default unless it is manually excluded via the `spec.scenarios` parameter in `values.yaml`.
Migration Portal uses several secrets for internal component communication. While you can deploy Hybrid Manager with the default secrets for a quick, functioning installation, for production environments we strongly recommend creating custom secrets.
Create custom secrets required for the Migration Portal:
For manual installations, run this command and follow the interactive prompts:
```shell
edbctl setup create-install-secrets --version <version> --scenario migration
```
If you are running the installation via a CI/CD pipeline, you must suppress interactive prompts. The method for achieving this depends on your `edbctl` version.

For versions earlier than 1.3.0, use the `--non-interactive` flag to suppress confirmation prompts:

```shell
edbctl setup create-install-secrets --version <version> --scenario migration --non-interactive
```
For version 1.3.0 and later, the `--non-interactive` flag was replaced by global configuration settings. You must disable both `interactive_mode` and `confirm_mode` for fully unattended execution.

Configure `edbctl` for non-interactive behavior:

```shell
edbctl config set interactive_mode off
edbctl config set confirm_mode off
```

Run the setup command:

```shell
edbctl setup create-install-secrets --version <version> --scenario migration
```
This creates the namespaces and custom secrets, and generates secure random passwords for all five service accounts:

- `edb-migration-portal.db_secrets`
- `edb-migration-portal.db_superuser_secrets`
- `edb-migration-portal.copilot_secrets`
- `edb-migration-copilot.db_secrets`
- `edb-migration-copilot.metrics_auth_secrets`
For more information on Migration Portal secrets, see Customizing Migration Portal secrets for secure internal communication.
Implement TLS
You must implement the TLS strategy chosen during the Gathering your system requirements phase; some options require secrets to be created.
Option A: Custom cert-manager issuer
- Use case: You have an existing `ClusterIssuer` or `Issuer` (e.g., Let's Encrypt, HashiCorp Vault, or a corporate CA) running in the cluster.
Note
For this option, you do not need to pass a secret to HM. The existing Issuer already manages its own credentials.
Dependencies
Ensure your existing `ClusterIssuer` or `Issuer` is configured and available in the target cluster.

Specify the issuer name and kind in `values.yaml`:

- `parameters.global.portal_certificate_issuer_kind`: `<ClusterIssuer or Issuer>`
- `parameters.global.portal_certificate_issuer_name`: `<my_issuer>`
Option B: Custom certificate authority (CA)
- Use case: You want HM to create a cert-manager issuer for you and use your own corporate CA to sign certificates.
Dependencies
Set up your own custom CA.
Create a Kubernetes secret for `<ca_secret_name>` containing your CA signing keypair (cert and key).

Specify the `<ca_secret_name>` in the `values.yaml`:

- `parameters.global.ca_secret_name`: `<ca_secret_name>`
Option C: Custom certificate
Use case: You have your own x.509 certificate and want to replace the default self-signed HM certificate with it.
Dependencies
Create a Kubernetes secret for `<my_portal_certificate>` including an export of the entire certificate chain in the public certificate file and the unencrypted private key.

Specify the `<my_portal_certificate>` in the `values.yaml`:

- `parameters.global.portal_certificate_secret`: `<my_portal_certificate>`

SOP: Set up a custom x.509 certificate for the Hybrid Manager Portal
Option D: Self-signed certificates (Default)
- Use case: Non-production testing.
HM automatically generates self-signed certificates by default.
Dependencies
N/A
Configuring advanced features
Configuring multi-location architecture
Deploying HM across multiple locations (multi-DC) requires significant preparation to ensure the primary and secondary clusters can communicate securely.
You must establish trust domains, sync storage secrets, and configure the HM-internal beacon agent to register the secondary location.
Preparation checklist:
- Network: Ensure connectivity on ports `8444` (SPIRE) and `9445` (Beacon) between clusters.
- Storage: Synchronize the `edb-object-storage` secret across all clusters.
- Identity: Configure SPIRE federation to allow cross-cluster trust.
- Configuration: Define unique `beacon_location_id` values and trust domains in `values.yaml`.
- SOP: Configuring multiple data centers for Hybrid Manager
- Guidance: Detailed steps for setting up SPIRE federation and cross-DC wiring.
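The network item in the checklist above can be preflighted with a simple TCP reachability check toward the peer cluster's SPIRE and Beacon ports. A sketch (the hostname is a placeholder; run it from a pod or node that has a route to the peer cluster):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder hostname): check the SPIRE (8444) and Beacon (9445)
# ports on the peer cluster.
# for port in (8444, 9445):
#     print(port, tcp_reachable("peer-cluster.example.internal", port))
```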
Configuring identity providers (IdP)
Action: Construct the `idpConnectors` configuration block.
Configuring an IdP requires constructing a complex array in your `values.yaml`.
You must gather specific values from your IdP (Okta, Active Directory, etc.) to populate the `portal.authentication.idpConnectors` section.
- SOP: Configuring your own identity provider
- Guidance: Detailed YAML examples for LDAP and SAML integration.
Implementing KMS for TDE
If you require encryption at rest for your databases (TDE), you should enable the Key Management Service (KMS) provider and create your keys now.
Depending on the KMS you are using, refer to the guides below to identify the necessary values.yaml flags and create the keys.
Implement the configuration strategies decided on during the Phase 1: Planning your architecture and Phase 2: Gathering your system requirements phases.
Completing environment preparation
Set up an internal backup folder
This is a globally unique folder name used for HM backups for disaster recovery.
Dependencies
`values.yaml` parameter `global.internal_backup_folder`.
Change mandatory static User-0 password
Dependencies
You must configure the following four values under `pgai.portal.authentication.staticPasswords` in your `values.yaml`:

- `userID`: Must be set to `c5998173-a605-449a-a9a5-4a9c33e26df7`.
- `username`: The login username for the admin account.
- `email`: The email address associated with the admin account.
- `hash`: The bcrypt hash of your desired password.
Use a hash for your new password

To generate the bcrypt hash for your new password:

Option 1: Capture the hash of the string "password" in a shell substitution:

```shell
$(echo password | htpasswd -BinC 10 admin | cut -d: -f2)
```

Option 2: Print the hash directly:

```shell
echo 'password' | htpasswd -BinC 10 admin | cut -d: -f2
```
Finalizing the cluster configuration (values.yaml)
You now construct (or update, if you previously created one) the final `values.yaml` file.
This file references your system requirements (storage, network, etc.), environment settings, and the feature configurations you just prepared.
Below is a template configuration with all of the keys you should have determined for a successful HM install:
```yaml
system: <Kubernetes>
bootstrapImageName: <Container Registry Domain>/pgai-platform/edbpgai-bootstrap/bootstrap-<Kubernetes>
bootstrapImageTag: <Version>
containerRegistryURL: "<Container Registry Domain>/pgai-platform"
parameters:
  global:
    internal_backup_folder: <twelveCharacterString>
    portal_domain_name: <Portal Domain>
    storage_class: <Block Storage>
    portal_certificate_issuer_kind: <ClusterIssuer>
    portal_certificate_issuer_name: <my-issuer>
    trust_domain: <Portal Domain>
  upm-beacon:
    beacon_location_id: <Location>
    server_host: <Agent Domain>
  transporter-rw-service:
    domain_name: <Migration Domain>
  transporter-dp-agent:
    rw_service_url: https://<Migration Domain>/transporter
beaconAgent:
  provisioning:
    imagesetDiscoveryAuthenticationType: <Authentication Type for the Container Registry>
    imagesetDiscoveryContainerRegistryURL: "<Container Registry Domain>/pgai-platform"
transparentDataEncryptionMethods:
  - <available_encryption_method>
pgai:
  portal:
    authentication:
      idpConnectors:
        - config:
            caData: <base64 encoding of Certificate Authority from SSO provider>
            emailAttr: email
            groupsAttr: groups
            entityIssuer: https://<Portal Domain>/auth/callback
            redirectURI: https://<Portal Domain>/auth/callback
            ssoURL: https://login.microsoft.com/<azure service identifier>/saml2
            usernameAttr: name
          id: azure
          name: Azure
          type: saml
      staticPasswords:
        - email: <email>
          hash: <hashed_password>
          userID: c5998173-a605-449a-a9a5-4a9c33e26df7
          username: <email-or-username>
resourceAnnotations:
  - name: istio-ingressgateway
    kind: Service
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-scheme: internal
      service.beta.kubernetes.io/load-balancer-source-ranges: 10.0.0.0/8
spec:
  scenarios:
    - core
    - <if required, migration>
    - <if required, ai>
    - <if required, analytics>
```
Validating the configuration file and runtime
Before proceeding to deployment, validate both your configuration file and the runtime environment.
Validate configuration syntax
Ensure your `values.yaml` is valid and can be processed by Helm.
Replace `<your-registry-url>` with the registry you defined in `values.yaml` (e.g., `docker.enterprisedb.com` or your internal Harbor/Artifactory).
```shell
helm template edb-pgai oci://<your-registry-url>/pgai-platform/edb-pgai \
  --values values.yaml \
  --version <target-version> > /dev/null
```
If this command completes without error, your YAML syntax is valid.
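As a supplementary check, a naive scan can confirm that the keys this guide asked you to determine actually appear in the file. This is a substring scan, not a YAML parse, so treat it as a quick sanity check only (the key list is drawn from the template above):

```python
REQUIRED_KEYS = [
    "bootstrapImageName",
    "containerRegistryURL",
    "internal_backup_folder",
    "portal_domain_name",
    "storage_class",
    "staticPasswords",
    "scenarios",
]

def missing_keys(values_yaml_text: str) -> list[str]:
    """Return required key names not mentioned anywhere in the file text."""
    return [k for k in REQUIRED_KEYS if k not in values_yaml_text]

# Usage: missing = missing_keys(open("values.yaml").read())
sample = "portal_domain_name: hm.example.com\nscenarios:\n  - core\n"
print(missing_keys(sample))
```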
Validate secrets
Ensure the secrets referenced in your configuration exist in the cluster.
```shell
kubectl get secrets -n default
```

Check for: `edb-object-storage`, `edb-cred`, and your TLS secrets.
Validate LoadBalancer
Verify that your cluster can provision an external load balancer service (required for ingress).
```shell
kubectl create service loadbalancer test-lb --tcp=80:80
kubectl get svc test-lb
```
- Success: If the load balancer controller is functioning, the `test-lb` service will be assigned an external IP or hostname.
- Cleanup:

```shell
kubectl delete svc test-lb
```
Run diagnostic tooling
For a comprehensive check of the cluster's readiness, use the diagnostic plugin.
Next phase
Now that your management workstation is ready, images are synced, secrets are created, advanced features are configured, and your `values.yaml` is finalized and validated, you are ready to install HM.