Securing GitOps Workflows with Sealed Secrets and External Secrets
Master the art of managing sensitive credentials within public or private Git repositories without compromising security or auditability.
The Fundamental Conflict of GitOps Secrets
GitOps transforms the way we manage infrastructure by treating our version control system as the definitive source of truth. Every change to your environment starts with a pull request, providing a clear audit trail and making rollbacks as simple as reverting a commit. This paradigm works perfectly for declarative configurations like deployment replicas or service labels, which are intended to be transparent.
However, this transparency creates a massive security paradox when we handle sensitive data like database passwords, API tokens, and TLS certificates. Storing these items in plain text within a repository exposes them to every developer with access to the code and leaves a permanent record in the commit history. Even if you delete a secret in a later commit, it remains accessible to anyone who traverses the previous versions of the repository.
Many teams attempt to use standard Kubernetes Secrets thinking they provide protection, but these objects are only base64 encoded rather than encrypted. Base64 is a data representation format, not a security mechanism, and anyone can decode these values in a single command. To maintain a secure GitOps workflow, we must find a way to keep our configurations public while keeping our credentials private.
The greatest risk in GitOps is not the automation itself, but the accidental exposure of sensitive credentials that remain embedded in the immutable history of your version control system.
Understanding the Mirage of Base64 Security
In a standard Kubernetes environment, a Secret resource holds sensitive data in a key-value format. When you look at the manifest, the values appear as scrambled strings, which often leads to a false sense of security. These strings are simply the result of a standard encoding algorithm that is designed for transport, not for secrecy.
If a developer commits a standard Secret manifest to a Git repository, they have essentially published their credentials to the world. Any person or automated system that clones the repository can pipe that string through a basic decoding tool to retrieve the original password. This violates the principle of least privilege by making secrets available to anyone with read access to the configuration code.
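To see how thin this layer of obscurity really is, consider a minimal demonstration (the value here is a made-up example, not a real credential):

```shell
# Encode a password the way a Kubernetes Secret manifest stores it
echo -n 'super-secret' | base64
# -> c3VwZXItc2VjcmV0

# Anyone with read access to the manifest can reverse it instantly
echo 'c3VwZXItc2VjcmV0' | base64 -d
# -> super-secret
```

One pipe through a standard utility recovers the original value, which is why a committed Secret manifest must be treated as a plain-text leak.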
The Permanent Nature of Git History
Git is designed to never lose information, which is a feature for source code but a liability for sensitive data. Once a secret is committed and pushed to a remote server, it is distributed across every workstation that clones the repository. Removing the secret in a subsequent commit does not purge it from the underlying object database of the Git system.
Cleaning up an exposed secret requires rewriting the entire history of the repository with tools such as git filter-repo or the BFG Repo-Cleaner, which disrupts the workflow of the entire team. This high cost of remediation makes it vital to prevent plain-text secrets from ever entering the repository in the first place. We need a strategy that allows us to commit encrypted placeholders that only the cluster can understand.
Strategy One: Client-Side Encryption with Sealed Secrets
The first mature solution to the GitOps secret problem is the use of client-side encryption. In this model, developers encrypt their secrets locally using a public key provided by a controller running inside the Kubernetes cluster. The resulting encrypted object is a custom resource that is safe to store in a public or private Git repository.
When the GitOps controller, such as Argo CD or Flux, synchronizes the repository to the cluster, it applies this encrypted object. A specialized operator within the cluster then uses its private key to decrypt the data and generate a standard Kubernetes Secret. This ensures that the sensitive data is only ever readable by the cluster itself and is never exposed in the source control history.
```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: database-credentials
  namespace: production
spec:
  # The encryptedData field contains the encrypted version of the secret
  # This was generated using the 'kubeseal' command-line tool
  encryptedData:
    password: AgBy3i4uA9VfIAV8/X/X7...very-long-encrypted-string...
  template:
    metadata:
      name: database-credentials
      namespace: production
    type: Opaque
```

This approach maintains the core GitOps principle of having a single source of truth in the repository. The developer workflow is slightly modified but remains largely the same, as developers still interact primarily with YAML files. The main advantage is that the security boundary is moved to the cluster, where the private key is strictly guarded and never leaves the environment.
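A typical sealing workflow looks roughly like the following sketch. The secret name, namespace, value, and file names are all illustrative, and the exact kubeseal invocation can vary with how the controller is installed:

```shell
# Build a standard Secret manifest locally without touching the cluster
kubectl create secret generic database-credentials \
  --namespace production \
  --from-literal=password='s3cr3t-value' \
  --dry-run=client -o yaml > secret.yaml

# Encrypt it with the cluster's public key; only the in-cluster
# controller holds the private key needed to decrypt it
kubeseal --format yaml < secret.yaml > sealed-secret.yaml

# The plain-text manifest must never reach Git; commit only the sealed version
rm secret.yaml
git add sealed-secret.yaml
```

The `--dry-run=client -o yaml` step is the key trick: it produces a valid Secret manifest purely on the developer's machine, so the plain-text value exists only transiently before sealing.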
The Role of the Public Key Infrastructure
The security of the Sealed Secrets model relies on asymmetric encryption, involving a public key for encryption and a private key for decryption. The cluster operator generates this key pair upon installation and publishes the public key for all developers to use. This means developers do not need access to the production environment to prepare secrets for deployment.
Since the encryption is one-way from the developer's perspective, they can safely push the resulting manifest to a repository. Even if an attacker gains access to the repository, they cannot decrypt the data because they lack the private key stored inside the cluster. This creates a secure bridge between the development environment and the production runtime.
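In practice, the public certificate can be fetched once by someone with cluster access and then distributed to the whole team; the file name below is an arbitrary choice:

```shell
# Retrieve the controller's public certificate (requires cluster access, done once)
kubeseal --fetch-cert > pub-cert.pem

# Developers can then seal secrets completely offline using that certificate,
# with no connection to the production cluster at all
kubeseal --cert pub-cert.pem --format yaml < secret.yaml > sealed-secret.yaml
```

Because the certificate contains only the public half of the key pair, it is safe to check into the repository or publish internally.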
Strategy Two: External Secrets and Cloud Integrations
A second popular approach involves decoupling the secret storage from the Git repository entirely by using an External Secrets Operator. Instead of storing encrypted data in Git, you store your secrets in a dedicated vault like AWS Secrets Manager, HashiCorp Vault, or Google Secret Manager. Your Git repository then contains a reference to the location of the secret rather than the secret itself.
The External Secrets Operator runs inside your cluster and acts as a bridge between your cloud provider and Kubernetes. It monitors the external vault for changes and automatically synchronizes the values into your cluster as standard Kubernetes Secrets. This allows security teams to manage credentials in a centralized, highly audited environment while developers manage the application lifecycle via GitOps.
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: api-service-secrets
spec:
  refreshInterval: 1h # How often to sync from the cloud vault
  secretStoreRef:
    name: aws-secrets-manager # Reference to the provider configuration
    kind: SecretStore
  target:
    name: application-runtime-secret # The name of the resulting K8s Secret
  data:
    - secretKey: API_KEY
      remoteRef:
        key: /production/api-service/key # The path in the external vault
```

By using this pattern, you gain several operational advantages, including centralized auditing and easier secret rotation. Because the Git repository only contains metadata about where the secret lives, you can change the actual password in the vault without needing to perform a new Git commit or trigger a new deployment cycle.
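The secretStoreRef in an ExternalSecret points at a provider configuration that must exist separately in the cluster. A minimal sketch of such a SecretStore, assuming AWS Secrets Manager in us-east-1 and a service account named external-secrets-sa for authentication (both hypothetical choices), might look like this:

```shell
# Hypothetical SecretStore backing the ExternalSecret's secretStoreRef;
# the region and auth details depend entirely on your environment
cat <<'EOF' | kubectl apply -f -
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secrets-manager
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa
EOF
```

The store is where the trust relationship lives: the ExternalSecret itself carries no credentials, only a pointer to this configuration.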
Choosing Between Local and External Strategies
Selecting the right strategy depends on your existing infrastructure and organizational requirements. If your team is already using a managed cloud service for secret storage, the External Secrets Operator is often the most logical choice. It reduces the overhead of key management and provides a unified interface for all your application credentials.
On the other hand, if you are running on-premises or want to keep your dependencies to a minimum, the Sealed Secrets approach might be more appropriate. It requires no external services and keeps all configuration data, even if encrypted, within your Git repository. This can be beneficial for disaster recovery scenarios where you need to rebuild the entire environment from the repository alone.
Comparison of Secrets Management Approaches
When evaluating these two patterns, it is helpful to look at how they handle specific operational tasks. The following list outlines the major differences in their architecture and maintenance profiles.
- Key Management: Sealed Secrets requires you to manage a private key inside the cluster, while External Secrets delegates this to a cloud provider.
- Portability: Sealed Secrets are more portable across different cloud providers since they do not rely on a specific vendor API.
- Auditing: External Secret Managers provide detailed logs of who accessed a secret and when, which is harder to achieve with client-side encryption.
- Complexity: The External Secrets Operator requires configuring IAM roles and permissions between the cluster and the cloud provider.
Best Practices for Secret Rotation and Lifecycle
Managing the initial deployment of a secret is only the beginning of the journey. In a robust production environment, secrets must be rotated regularly to minimize the impact of a potential leak. Automated rotation is a key requirement for many compliance frameworks and high-security organizations.
When using the External Secrets Operator, rotation can be managed directly by the cloud provider. The operator will detect the new version of the secret in the vault and update the Kubernetes Secret automatically. This ensures that your applications are always using the most current credentials without manual intervention from the development team.
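With this setup, rotating a credential is purely a vault-side operation; the secret path and value below are illustrative:

```shell
# Write a new version of the secret into AWS Secrets Manager
aws secretsmanager put-secret-value \
  --secret-id /production/api-service/key \
  --secret-string 'new-rotated-value'

# Within the configured refreshInterval, the operator notices the new
# version and updates the Kubernetes Secret; no Git commit is required
```

The Git repository is untouched by the rotation, which keeps the audit trail for credential changes in the vault where security teams expect it.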
If you are using Sealed Secrets, rotation requires more manual effort because you must re-encrypt the new secret value and commit it to the repository. This creates a stronger link between the secret lifecycle and the application deployment lifecycle. It is important to have a clear process for these updates to prevent downtime during the transition between old and new credentials.
Implementing Graceful Secret Rollover
A common pitfall during secret rotation is updating the secret value before the application is ready to receive it, leading to connection failures. To avoid this, many teams implement a multi-stage rollover where both the old and new secrets are valid for a short window of time. This allows the application to restart and pick up the new configuration while still being able to authenticate with the old credentials if necessary.
In a GitOps context, this might involve deploying two different secret objects and updating the application configuration to check both locations. Once the application has successfully migrated to the new secret, the old one can be safely removed from the vault or the Git repository. This gradual approach significantly reduces the risk of service interruptions.
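One lightweight way to implement the dual-secret window at the application edge is a fallback in the container entrypoint, where the process prefers the new credential and falls back to the old one. The variable names here are hypothetical:

```shell
# Prefer the newly injected credential if present; otherwise fall back
# to the old one so the rollover never breaks authentication
DB_PASSWORD="${DB_PASSWORD_NEW:-$DB_PASSWORD_OLD}"
export DB_PASSWORD
```

Once every replica is confirmed to be using the new value, the old secret object and the fallback can both be deleted in a follow-up change.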
Establishing a Security-First Culture
No technical tool can replace a strong security culture within a development team. It is essential to educate all team members on the dangers of plain-text secrets and the proper use of the chosen encryption tools. Automated pre-commit hooks can be used to scan for potential secrets before they are ever pushed to the remote repository.
Tools like Gitleaks or TruffleHog can be integrated into your CI pipeline to serve as a safety net. These tools scan every commit for patterns that look like API keys or private keys and block the merge if any are found. This layered approach of prevention, encryption, and auditing ensures that your GitOps pipeline remains a secure and reliable part of your delivery process.
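A local pre-commit hook wiring Gitleaks in might look like the following sketch, assuming a Gitleaks release that supports the `protect --staged` mode for scanning only the changes about to be committed:

```shell
#!/bin/sh
# .git/hooks/pre-commit: block commits that appear to contain secrets
if ! gitleaks protect --staged; then
  echo "Potential secret detected; commit aborted." >&2
  exit 1
fi
```

Running the same scan again in CI catches anyone who skipped or bypassed the local hook, which is why the two layers complement rather than replace each other.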
