Streamline multi-environment deployments with Amazon EKS Blueprints and CDK pipelines

May 23, 2025

This post was co-authored by Elamaran Shanmugam, Sr. Specialist Partner Solutions Architect – Containers, Mikhail Shapirov, Principal Partner Solutions Architect – Industry Solutions, Jayaprakash Alawala, Principal Specialist Solutions Architect – Containers, Bhavye Sharma, Partner Solutions Architect

Containers have revolutionized software delivery by providing a portable and consistent environment that addresses challenges related to complex frameworks and dependencies. Users are looking for ways to automate the deployment and maintenance of their Amazon Elastic Kubernetes Service (Amazon EKS) clusters across different versions, environments, accounts, and AWS Regions. The deployment of these clusters involves tasks such as creating your clusters with desired networking and logging configuration, selecting Amazon EKS add-ons, and, when it is ready, deploying other infrastructure components and Day 2 operational tooling.

This post shows how to set up automated pipelines for deploying and updating Amazon EKS infrastructure using the CDK pipelines module, which is part of Amazon EKS Blueprints for CDK. Amazon EKS fleet management is a broad and complex topic that includes many aspects, such as managing the lifecycle of clusters, add-ons, and applications. For this post, we selected one of the common use cases, blue/green cluster upgrades, which can apply to multiple environments. The solution deploys two EKS clusters, each on a different Kubernetes version, for the blue and green deployments. It also deploys a sample application, EchoServer, in both EKS clusters and shows how to steer routing between the blue and green deployments using Amazon Route 53.

Solution architecture

We create a sample AWS CodePipeline using the EKS Blueprints Pipelines module, which makes it easy to set up a continuous deployment pipeline for your AWS Cloud Development Kit (AWS CDK) applications. This pipeline creates two different EKS clusters, one on version 1.30 and the other on 1.31, each with a managed node group. It also deploys controllers and operators such as the AWS Load Balancer Controller, ExternalDNS, Metrics Server, and Cert Manager.

It also includes a stage to swap the users of a sample application using a blue/green strategy across different clusters. The solution needs an existing Route 53 public domain (for example, example.org) as a prerequisite. You create two subdomains (blue.example.org and green.example.org; instructions are provided later in the post) in the Route 53 public domain. The ExternalDNS controller, which is deployed in each EKS cluster, creates a CNAME record within the corresponding subdomain that points to the AWS load balancer endpoint associated with the specific EchoServer application.

The following architecture diagram shows the overall design for the automated pipeline that facilitates the deployment and management of traffic between the blue and green deployments.

Figure 1: Architecture diagram for CodePipeline using EKS Blueprints Pipeline

CodePipeline stages:

  • Source: This stage fetches the source of your CDK app from your forked GitHub repo and triggers the pipeline every time you push new commits to it.
  • Create EKS Blueprints: This stage uses AWS CodeBuild to compile your code (if necessary) and performs a CDK synth. The output of that step is a cloud assembly, which is used to perform all actions in the rest of the pipeline. You can define the EKS blueprint framework that is deployed across the pipeline stages. The blueprint defines a specification of all components deployed into the cluster such as cluster configuration, managed node groups, add-ons, and applications. Although blue/green deployment typically uses identical blueprints for both environments, other scenarios may employ common base configurations with environment-specific customizations. For example, production environments often need enhanced computing power and scalability compared to development environments.
  • Pipeline Deploy IaC: This stage uses AWS CloudFormation to deploy your CDK applications as two different stacks that describe your EKS clusters, configuration, and components. You can create the pipeline using the EKS Blueprints CodePipelineStack builder; the code example after this list shows how to create different pipeline stages using waves, for example eks-stage and dns-stage.
  • Manual approval: This stage requires manual approval to switch traffic from the blue to the green deployment, or vice versa.
  • DNS switch: This stage updates your Route 53 record to point to the cluster specified as the production environment in your code. You can use the AWS CDK Route 53 module to add a CNAME record to the Route 53 parent public hosted zone.
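
The following sketch shows how this wiring can look with the EKS Blueprints CodePipelineStack builder. The builder, wave, and add-on classes come from the @aws-quickstart/eks-blueprints library; the repository placeholders, the Region, and the DNS switch stack builder are illustrative assumptions, not the sample repository's exact code.

    import * as cdk from 'aws-cdk-lib';
    import * as eks from 'aws-cdk-lib/aws-eks';
    import * as blueprints from '@aws-quickstart/eks-blueprints';
    import { Construct } from 'constructs';

    const app = new cdk.App();

    // Add-ons shared by both clusters; blue and green differ only in version.
    const addOns: Array<blueprints.ClusterAddOn> = [
        new blueprints.AwsLoadBalancerControllerAddOn(),
        new blueprints.MetricsServerAddOn(),
        new blueprints.CertManagerAddOn(),
    ];

    const blueBuilder = blueprints.EksBlueprint.builder()
        .version(eks.KubernetesVersion.V1_30)
        .addOns(...addOns);

    const greenBuilder = blueprints.EksBlueprint.builder()
        .version(eks.KubernetesVersion.V1_31)
        .addOns(...addOns);

    // Hypothetical stand-in for the DNS switch stack; the sample repository
    // implements this with the AWS CDK Route 53 module.
    class DnsSwitchStackBuilder implements blueprints.StackBuilder {
        build(scope: Construct, id: string, props?: cdk.StackProps): cdk.Stack {
            const stack = new cdk.Stack(scope, id, props);
            // Add the record that points the production URL at the selected
            // cluster's subdomain here (for example, a route53.CnameRecord).
            return stack;
        }
    }

    blueprints.CodePipelineStack.builder()
        .name('eks-blueprints-pipeline')
        .owner('<your-github-handle>')      // the owner of your fork
        .repository({
            repoUrl: '<your-fork-name>',    // the repository name of your fork
            credentialsSecretName: 'cdk_blueprints_github_secret',
            targetRevision: 'main',
        })
        // eks-stage: deploy the blue and green clusters in parallel.
        .wave({
            id: 'eks-stage',
            stages: [
                { id: 'blue', stackBuilder: blueBuilder.clone('us-east-1') },
                { id: 'green', stackBuilder: greenBuilder.clone('us-east-1') },
            ],
        })
        // dns-stage: switch Route 53 to the cluster selected as production.
        .wave({
            id: 'dns-stage',
            stages: [{ id: 'dns-switch', stackBuilder: new DnsSwitchStackBuilder() }],
        })
        .build(app, 'EksPipelineStack', { env: { region: 'us-east-1' } });

In the sample repository, the blueprint builders live in multi-cluster-builder.ts and the pipeline definition in pipeline.ts, which is why the walkthrough later edits those two files.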

The following architecture diagram shows the completed infrastructure established by this pipeline for the solution depicted in the preceding diagram:

Figure 2: Amazon EKS and Route 53 infrastructure provisioned through CodePipeline

Prerequisites

The following prerequisites are necessary to complete this solution:

  • An AWS account with permissions to create the resources described in this post
  • The AWS CLI, AWS CDK, Node.js, and kubectl installed and configured
  • A GitHub account, to fork the sample repository
  • An existing Route 53 public hosted zone (for example, example.org)

Walkthrough

At a high level, we use the following steps for deploying the infrastructure:

  1. Fork the sample repository.
  2. Create environment-specific parameters and secrets.
  3. Create subdomains for each cluster (for example, blue.example.org and green.example.org) in Route 53.
  4. Change your fork and push changes.
  5. Deploy your AWS CDK stack(s).

Create AWS Secrets Manager secret

Create a GitHub personal access token (PAT) with the repo and admin:repo_hook scopes, following GitHub's step-by-step guide. Create a plain-text secret to hold the PAT in the desired Region, and set its name as the value of the GITHUB_SECRET environment variable. The default value is cdk_blueprints_github_secret.
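
You can create the secret with the AWS CLI. A minimal sketch, assuming the default secret name and us-east-1 as the pipeline Region:

    # Store the GitHub PAT as a plain-text secret (name and Region are examples).
    aws secretsmanager create-secret \
        --name cdk_blueprints_github_secret \
        --description "GitHub PAT for the EKS Blueprints pipeline" \
        --secret-string "<your-github-pat>" \
        --region us-east-1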

WARNING: When switching the pipeline to a different AWS Region, remember to replicate this secret to that Region.

Set up AWS Systems Manager Parameter Store

The solution expects the Route 53 public hosted zone ID and name to be available in AWS Systems Manager Parameter Store. Copy the hosted zone ID and name from the Route 53 console, as shown in the following figure.

Figure 3: Route 53 public hosted zone

Store the hosted zone ID and name in AWS Systems Manager Parameter Store by using the following commands.
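
A sketch of the commands, with hypothetical parameter names (check the sample repository for the exact names it reads):

    # Parameter names below are illustrative; use the names the solution expects.
    aws ssm put-parameter --name /eks-blueprints/hosted-zone-id \
        --type String --value <your-hosted-zone-id>
    aws ssm put-parameter --name /eks-blueprints/hosted-zone-name \
        --type String --value example.org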

Create subdomains in Route 53 parent hosted zone

Create a subdomain for each cluster (for example, blue.example.org and green.example.org) in the Route 53 parent hosted zone example.org, so that each cluster's application has its own domain name and you can swap traffic using the pipeline. For this example, you create blue and green subdomains for the domain name that you used in the previous step.

  1. Create a hosted zone for the blue subdomain (for example, blue.example.org).
  2. Route 53 automatically assigns name servers when you create a new hosted zone, as shown in the following figure.

Figure 4: Route 53 public hosted zone for subdomain

  3. Create a new NS record in the hosted zone for your parent domain (example.org), and specify the four name servers that were assigned in Step 2.
  4. Repeat Steps 1–3 for the green subdomain (for example, green.example.org) by setting the following environment variable.
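
The exact variable depends on the sample repository; a hypothetical example for the green subdomain:

    # Hypothetical variable name; check the sample repository for the exact one.
    export SUBDOMAIN=green.example.org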

Solution deployment

  1. To start, fork our sample repository and clone it. This repository contains AWS CDK v2 code written in TypeScript.
  2. Run the commands shown after this list to bootstrap the AWS environment.
  3. Edit the following files:

Change the GitHub repository owner from aws-samples to your own GitHub handle in the file pipeline.ts.

Change the parent hosted zone ID from example.org to your own public hosted zone in the file multi-cluster-builder.ts.

  4. After the changes are done, commit and push the changes to your fork.
  5. From the root of the repository, deploy the pipeline stack. The commands for Steps 1 through 5 are consolidated after this list:
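
A consolidated sketch of the commands for Steps 1 through 5; the repository name, account ID, and Region are placeholders for your own values.

    # Step 1: clone your fork (replace the owner and repository name).
    git clone https://github.com/<your-github-handle>/<your-fork-name>.git
    cd <your-fork-name>
    npm install

    # Step 2: bootstrap the AWS environment for CDK.
    npx cdk bootstrap aws://<account-id>/<region>

    # Step 4: after editing pipeline.ts and multi-cluster-builder.ts,
    # commit and push the changes to your fork.
    git add .
    git commit -m "Update repo owner and hosted zone"
    git push origin main

    # Step 5: deploy the pipeline stack.
    npx cdk deploy EksPipelineStack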

Initially, you must deploy the pipeline manually by running cdk deploy. After that, each change you push to your repository triggers the pipeline, which updates itself and executes. The first execution takes a while, because some resources, such as the EKS clusters and managed node groups, take several minutes to be ready. You can track the progress by accessing the pipeline through AWS CodePipeline.

This solution deploys the following components:

Two EKS clusters: The pipeline creates two EKS clusters, one for the blue environment and one for the green environment, using Amazon EKS Blueprints for CDK. Each cluster is deployed with the following components:

  • AWS Load Balancer Controller: Manages AWS Elastic Load Balancers for a Kubernetes cluster. You can use the controller to expose your cluster apps to the internet; it provisions AWS load balancers that point to cluster Service or Ingress resources. This component is needed to create the AWS Application Load Balancer that serves ingress traffic for the EchoServer application deployed in the cluster.
  • ExternalDNS: Synchronizes exposed Kubernetes Services and Ingresses with DNS providers. The ExternalDNS controller in each cluster creates an alias record for the ingress ALB in the corresponding Route 53 hosted zone/subdomain.
  • EchoServer: A sample application installed to validate that other components, such as the AWS Load Balancer Controller and ExternalDNS, are working properly.

Go to the CodePipeline console and make sure that the pipeline deployed successfully, as shown in the following figures.

Figure 5: CodePipeline for EKS Infra and DNS Switch – Part 1

Figure 6: CodePipeline for EKS Infra and DNS Switch – Part 2

If you check your AWS CloudFormation stacks, you should find a stack for the pipeline (EksPipelineStack) and one stack (with nested stacks) for each EKS cluster, as shown in the following figure.

Figure 7: CloudFormation stacks deployed through EKS Blueprints Pipeline

Access to EKS clusters

In the output of your EKSCluster stack(s), there are commands to set up your kubeconfig for accessing each cluster. Copy each command into a terminal and run it to access the clusters using kubectl, as shown in the following figures.
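
The command from the stack output has the following general form; the cluster names here come from the cleanup section, and the Region is an example:

    # Update kubeconfig for each cluster, then verify access.
    aws eks update-kubeconfig --name blue-cluster-blueprint --region us-east-1
    kubectl get nodes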

Figure 8: CloudFormation stack output for EKS Blue cluster

Figure 9: CloudFormation stack output for EKS Green cluster

You can access the application directly from your browser, or with curl, using its URL (app.example.org in this example):

curl app.example.org

The pipeline configuration uses the variable prodEnv to switch routing to the target cluster for the EchoServer application. When you change it and push to your fork, you must manually approve the change in CodePipeline, as shown in the following figure.
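
A sketch of what this can look like in pipeline.ts; the exact declaration in the sample repository may differ:

    // Selects which cluster receives production traffic ('blue' or 'green').
    const prodEnv: 'blue' | 'green' = 'green';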

Figure 10: Manual approval stage in CodePipeline for DNS switching

After the pipeline finishes updating the DNS record and the change has propagated, you can check where the app.example.org record is pointing. To switch traffic to the green cluster running 1.31, change the prodEnv variable in pipeline.ts and redeploy the stack.
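
For example, you can resolve the record with dig:

    # Shows the target that app.example.org currently resolves to.
    dig +short app.example.org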

Cleaning up

You must delete the provisioned resources to avoid unintended costs. To clean up the blueprint resources, run the following command:
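
Assuming the pipeline stack name EksPipelineStack used earlier, the cleanup command looks like the following:

    # Destroys the pipeline stack; the EKS clusters are removed separately.
    npx cdk destroy EksPipelineStack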

You must also go to the Amazon EKS console and manually delete the two EKS clusters, blue-cluster-blueprint and green-cluster-blueprint.

Conclusion

In this post, we showed how to set up a continuous deployment pipeline for your AWS CDK applications using the Amazon EKS Blueprints Pipelines module. This IaC approach delivers changes through an automated and standardized pipeline that is itself defined in AWS CDK, allowing you to deploy and upgrade clusters consistently across versions, environments, accounts, and AWS Regions while tracking your cluster and pipeline changes through Git. The sample code provides several examples of components commonly installed in EKS clusters, such as the AWS Load Balancer Controller, ExternalDNS, and Metrics Server.

This demonstration can serve as a starting point for building your own solution to automate the deployment of your EKS cluster(s). Visit the Amazon EKS Blueprints Quick Start documentation for more information on using these libraries. We encourage you to use the EKS Blueprints Pipelines module for your workloads.
