Configure Ingester TLS¶
This guide covers exposing the ingester gRPC server over TLS via an OpenShift Route, allowing collector agents on managed clusters to connect securely over the network.
Prerequisites¶
- OpenShift 4.x cluster with the `service-ca` operator
- ClusterPulse deployed on the hub cluster
- `oc` CLI authenticated with cluster-admin
Architecture¶
The ingester uses passthrough TLS termination on the OpenShift Route. The HAProxy router forwards raw TCP based on SNI, and the ingester terminates TLS itself using a certificate generated by OpenShift's service-ca operator.
Since the service-ca cert has SANs for in-cluster names only (e.g., clusterpulse-ingester.clusterpulse.svc), but collectors connect via the route hostname (e.g., clusterpulse-ingester-clusterpulse.apps.example.com), the collector uses a VerifyConnection callback to verify the certificate against the in-cluster service name while keeping the route hostname as SNI for passthrough routing. The controller computes the in-cluster service FQDN automatically and passes it to collectors via INGESTER_TLS_SERVER_NAME.
Why not re-encrypt? OpenShift's HAProxy uses HTTP/1.1 for backend connections on re-encrypt routes. gRPC requires HTTP/2 end-to-end, so only passthrough is supported. Re-encrypt may become workable in the future, but attempts so far have been unreliable.
Operator (Recommended)¶
Before starting, create a ServiceAccount on each managed cluster with read access to all collected resources, plus permission to create Deployments and related resources in the `clusterpulse-system` namespace.
Step 1: Enable on ClusterPulse CR¶
If you are using the ClusterPulse operator, enable the ingester and TLS directly in the CR:
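A minimal sketch, using the same field layout as the custom CA example later in this guide:

```yaml
spec:
  clusterEngine:
    ingester:
      enabled: true
      tls:
        enabled: true
      route:
        enabled: true
```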
The operator automatically provisions the Service (with service-ca annotation), Route (passthrough TLS), CA ConfigMap, TLS volume mount, and all required environment variables including INGESTER_TLS_SERVER_NAME. No manual manifests needed.
Step 2: Get the Route Hostname¶
```shell
ROUTE_HOST=$(oc get route <release_name>-ingester -n clusterpulse -o jsonpath='{.spec.host}')
echo $ROUTE_HOST
```
Step 3: Configure ClusterConnections¶
Set the ingesterAddress on each push-mode ClusterConnection to the Route hostname on port 443:
```yaml
apiVersion: clusterpulse.io/v1alpha1
kind: ClusterConnection
metadata:
  name: managed-cluster-1
  namespace: clusterpulse
spec:
  collectionMode: push
  ingesterAddress: "<route-host>:443"
  # ... other fields
```
The controller will automatically:
1. Copy the ingester-ca ConfigMap to the managed cluster
2. Mount it in the collector Deployment
3. Set INGESTER_TLS_ENABLED=true, INGESTER_TLS_CA, and INGESTER_TLS_SERVER_NAME on the collector
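You can spot-check the first step against the managed cluster. The kubeconfig context is a placeholder, and `ingester-ca` is the default ConfigMap name:

```shell
oc --context <managed-cluster> get configmap ingester-ca -n clusterpulse-system
```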
Verification¶
Test the Route with OpenSSL:
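A quick check, assuming the route hostname from Step 2. The served certificate's SANs should list the in-cluster service names rather than the route host:

```shell
ROUTE_HOST=$(oc get route <release_name>-ingester -n clusterpulse -o jsonpath='{.spec.host}')
# -servername sets the SNI that HAProxy uses for passthrough routing
openssl s_client -connect "$ROUTE_HOST:443" -servername "$ROUTE_HOST" </dev/null \
  | openssl x509 -noout -subject -ext subjectAltName
```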
Check collector logs on the managed cluster:
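For example, assuming the collector runs as a Deployment named `clusterpulse-collector` in `clusterpulse-system` (adjust both names to your install):

```shell
oc logs deployment/clusterpulse-collector -n clusterpulse-system | grep -i tls
```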
Look for `Using custom TLS server name for certificate verification` to confirm the server name override is active.
Environment Variables¶
| Variable | Component | Default | Description |
|---|---|---|---|
| `INGESTER_TLS_ENABLED` | Manager + Collector | `false` | Enable TLS on the ingester |
| `INGESTER_TLS_CERT` | Manager | `/etc/ingester-tls/tls.crt` | Path to the serving certificate |
| `INGESTER_TLS_KEY` | Manager | `/etc/ingester-tls/tls.key` | Path to the serving private key |
| `INGESTER_SERVICE_NAME` | Manager | `clusterpulse-ingester` | Ingester Service name, used to derive `INGESTER_TLS_SERVER_NAME` |
| `INGESTER_TLS_USE_SYSTEM_CA` | Manager | `false` | Use the system CA trust store on collectors (skip CA distribution) |
| `COLLECTOR_CA_CONFIGMAP` | Manager | `ingester-ca` | Name of the CA ConfigMap to distribute (only when `useSystemCA=false`) |
| `COLLECTOR_CA_NAMESPACE` | Manager | (release namespace) | Namespace of the CA ConfigMap |
| `COLLECTOR_CA_KEY` | Manager | `service-ca.crt` | Key within the ConfigMap containing the CA certificate |
| `INGESTER_TLS_CA` | Collector | (empty) | Path to the CA certificate; when empty, system CAs are used |
| `INGESTER_TLS_SERVER_NAME` | Collector | (empty) | Override hostname for certificate verification; set automatically by the controller |
Custom CA Mode¶
If you use a custom CA (e.g., cert-manager) instead of the OpenShift service-ca, point the controller at your CA ConfigMap:
Via ClusterPulse CR (Recommended)¶
```yaml
spec:
  clusterEngine:
    ingester:
      enabled: true
      tls:
        enabled: true
        customCAConfigMap:
          name: my-ca-bundle        # ConfigMap name on the hub
          namespace: my-namespace   # Defaults to release namespace if omitted
          key: ca.crt               # Key containing the CA certificate
      route:
        enabled: true
```
System CA Mode¶
If your ingester uses a certificate signed by a publicly-trusted or enterprise-distributed CA (not service-ca), set useSystemCA: true in the CR or INGESTER_TLS_USE_SYSTEM_CA=true on the controller. Collectors will use the system trust store and no CA ConfigMap is distributed.
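In CR form, this is a sketch; the exact nesting of `useSystemCA` under `tls` is an assumption based on the custom CA example above:

```yaml
spec:
  clusterEngine:
    ingester:
      enabled: true
      tls:
        enabled: true
        useSystemCA: true  # collectors trust the system CA store; no ConfigMap is distributed
```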
Non-OpenShift Clusters¶
For non-OpenShift environments, you can provide your own TLS certificate (e.g., via cert-manager):
- Create a Secret named `ingester-serving-cert` with `tls.crt` and `tls.key`
- Create a ConfigMap with the CA certificate and configure `customCAConfigMap` in the CR
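With cert-manager, the serving Secret can be produced by a `Certificate` resource. This is a sketch; the issuer name and DNS names are placeholders you must adapt:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ingester-serving-cert
  namespace: clusterpulse
spec:
  secretName: ingester-serving-cert   # Secret the ingester mounts
  dnsNames:
    - clusterpulse-ingester.clusterpulse.svc  # must match INGESTER_TLS_SERVER_NAME
  issuerRef:
    name: my-ca-issuer   # placeholder issuer
    kind: Issuer
```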
Notes¶
- Cert rotation: For v1, a pod restart is required on certificate change. Future versions may use `tls.Config.GetCertificate` with a file watcher.
- Port mapping: Route traffic arrives on port 443 externally and is proxied to port 9443 on the Service.
- TLS server name: The `VerifyConnection` callback uses `InsecureSkipVerify` with manual chain verification against the in-cluster service name. This is a well-established Go TLS pattern explicitly supported by the standard library ("If InsecureSkipVerify is set then normal verification is skipped but VerifyConnection is still called") and used in production by Google S2A (SPIFFE verification), SPIFFE go-spiffe (workload identity), Let's Encrypt Boulder (cert probing), and Kubernetes apimachinery (proxied connections). Full certificate chain validation still occurs, just against the service FQDN rather than the route hostname.