When a Java developer asked me how to deploy their Spring Boot API on AWS ECS, I saw it as the perfect opportunity to dive into the latest updates of the CDKTF (Cloud Development Kit for Terraform) project. In a previous article, I introduced CDKTF, a framework that lets you write Infrastructure as Code (IaC) using general-purpose programming languages such as Python. Since then, CDKTF has reached its first GA release, making it the perfect time to revisit it. In this article, we will deploy a Spring Boot API on AWS ECS using CDKTF.
You can find the code for this article on my GitHub repo.
Architecture overview
In this diagram, we can break the architecture down into three layers:

- **Network**: the VPC, the public and private subnets, the NAT gateways, and the routing.
- **Infra**: the Application Load Balancer (ALB), its listener, and the ECS cluster.
- **Service**: the ECS Fargate service running the Spring Boot API.

Step 1: The Spring Boot API

The application is a simple Spring Boot API exposing two endpoints:

- /ping: updates a Prometheus counter for observability.
- /healthcheck: returns "ok" and serves as the health check endpoint, ensuring the application is running correctly. Like /ping, it updates a Prometheus counter for observability.

The API is containerized with the following Dockerfile:
```dockerfile
FROM maven:3.9-amazoncorretto-21 AS builder
WORKDIR /app
COPY pom.xml .
COPY src src
RUN mvn clean package

# amazon java distribution
FROM amazoncorretto:21-alpine
COPY --from=builder /app/target/*.jar /app/java-api.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app/java-api.jar"]
```

Step 2: Setting up AWS CDKTF

AWS CDKTF allows you to define and manage AWS resources using Python.

Prerequisites
- [**python (3.13)**](https://www.python.org/)
- [**pipenv**](https://pipenv.pypa.io/en/latest/)
- [**npm**](https://nodejs.org/en/)

Install CDKTF and its dependencies:
```shell
$ npm install -g cdktf-cli@latest
```
This installs the CDKTF CLI, which can spin up new projects for various programming languages.

We can initialize our project by running:
```shell
# init the project using aws provider
$ mkdir samples-fargate
$ cd samples-fargate && cdktf init --template=python --providers=aws
```
Many files are created by default, and all the dependencies are installed.

Then, edit the main.py file:
```python
#!/usr/bin/env python
from constructs import Construct
from cdktf import App, TerraformStack


class MyStack(TerraformStack):
    def __init__(self, scope: Construct, id: str):
        super().__init__(scope, id)

        # define resources here


app = App()
MyStack(app, "aws-cdktf-samples-fargate")
app.synth()
```
A stack represents a group of infrastructure resources that CDK for Terraform (CDKTF) compiles into a distinct Terraform configuration. Stacks enable separate state management for the different environments of an application. To share resources across layers, we will leverage cross-stack references.

Step 3: Building the layers

1. The network layer

Add the network_stack.py file to your project:

```shell
$ mkdir infra
$ cd infra && touch network_stack.py
```

Add the following code to create all the network resources:
```python
from constructs import Construct
from cdktf import S3Backend, TerraformStack
from cdktf_cdktf_provider_aws.provider import AwsProvider
from cdktf_cdktf_provider_aws.vpc import Vpc
from cdktf_cdktf_provider_aws.subnet import Subnet
from cdktf_cdktf_provider_aws.eip import Eip
from cdktf_cdktf_provider_aws.nat_gateway import NatGateway
from cdktf_cdktf_provider_aws.route import Route
from cdktf_cdktf_provider_aws.route_table import RouteTable
from cdktf_cdktf_provider_aws.route_table_association import RouteTableAssociation
from cdktf_cdktf_provider_aws.internet_gateway import InternetGateway


class NetworkStack(TerraformStack):
    def __init__(self, scope: Construct, ns: str, params: dict):
        super().__init__(scope, ns)

        self.region = params["region"]

        # configure the AWS provider with the target region
        AwsProvider(self, "AWS", region=self.region)

        # use S3 as backend
        S3Backend(
            self,
            bucket=params["backend_bucket"],
            key=params["backend_key_prefix"] + "/network.tfstate",
            region=self.region,
        )

        # create the vpc
        vpc_demo = Vpc(self, "vpc-demo", cidr_block="192.168.0.0/16")

        # create two public subnets
        public_subnet1 = Subnet(
            self,
            "public-subnet-1",
            vpc_id=vpc_demo.id,
            availability_zone=f"{self.region}a",
            cidr_block="192.168.1.0/24",
        )
        public_subnet2 = Subnet(
            self,
            "public-subnet-2",
            vpc_id=vpc_demo.id,
            availability_zone=f"{self.region}b",
            cidr_block="192.168.2.0/24",
        )

        # create the internet gateway
        igw = InternetGateway(self, "igw", vpc_id=vpc_demo.id)

        # add a default route through the internet gateway to the VPC's
        # main route table; the public subnets use this table implicitly
        Route(
            self,
            "public-rt",
            route_table_id=vpc_demo.main_route_table_id,
            destination_cidr_block="0.0.0.0/0",
            gateway_id=igw.id,
        )

        # create the private subnets
        private_subnet1 = Subnet(
            self,
            "private-subnet-1",
            vpc_id=vpc_demo.id,
            availability_zone=f"{self.region}a",
            cidr_block="192.168.10.0/24",
        )
        private_subnet2 = Subnet(
            self,
            "private-subnet-2",
            vpc_id=vpc_demo.id,
            availability_zone=f"{self.region}b",
            cidr_block="192.168.20.0/24",
        )

        # create the Elastic IPs
        eip1 = Eip(self, "nat-eip-1", depends_on=[igw])
        eip2 = Eip(self, "nat-eip-2", depends_on=[igw])

        # create the NAT Gateways
        private_nat_gw1 = NatGateway(
            self,
            "private-nat-1",
            subnet_id=public_subnet1.id,
            allocation_id=eip1.id,
        )
        private_nat_gw2 = NatGateway(
            self,
            "private-nat-2",
            subnet_id=public_subnet2.id,
            allocation_id=eip2.id,
        )

        # create the private route tables
        private_rt1 = RouteTable(self, "private-rt1", vpc_id=vpc_demo.id)
        private_rt2 = RouteTable(self, "private-rt2", vpc_id=vpc_demo.id)

        # add default routes through the NAT gateways
        Route(
            self,
            "private-rt1-default-route",
            route_table_id=private_rt1.id,
            destination_cidr_block="0.0.0.0/0",
            nat_gateway_id=private_nat_gw1.id,
        )
        Route(
            self,
            "private-rt2-default-route",
            route_table_id=private_rt2.id,
            destination_cidr_block="0.0.0.0/0",
            nat_gateway_id=private_nat_gw2.id,
        )

        # associate the route tables with the private subnets
        RouteTableAssociation(
            self,
            "private-rt1-association",
            subnet_id=private_subnet1.id,
            route_table_id=private_rt1.id,
        )
        RouteTableAssociation(
            self,
            "private-rt2-association",
            subnet_id=private_subnet2.id,
            route_table_id=private_rt2.id,
        )

        # terraform outputs, shared with the other stacks
        self.vpc_id = vpc_demo.id
        self.public_subnets = [public_subnet1.id, public_subnet2.id]
        self.private_subnets = [private_subnet1.id, private_subnet2.id]
```
Edit the main.py file to register the network stack:

```python
#!/usr/bin/env python
from constructs import Construct
from cdktf import App, TerraformStack
from infra.network_stack import NetworkStack

ENV = "dev"
AWS_REGION = "us-east-1"
BACKEND_S3_BUCKET = "blog.abdelfare.me"
BACKEND_S3_KEY = f"{ENV}/cdktf-samples"


class MyStack(TerraformStack):
    def __init__(self, scope: Construct, id: str):
        super().__init__(scope, id)

        # define resources here


app = App()
MyStack(app, "aws-cdktf-samples-fargate")

network = NetworkStack(
    app,
    "network",
    {
        "region": AWS_REGION,
        "backend_bucket": BACKEND_S3_BUCKET,
        "backend_key_prefix": BACKEND_S3_KEY,
    },
)

app.synth()
```
Generate the Terraform configuration files:

```shell
$ cdktf synth
```

Deploy the network stack with the following:

```shell
$ cdktf deploy network
```
2. The infra layer

Add the infra_stack.py file to your project:

```shell
$ cd infra && touch infra_stack.py
```

Add the following code to create all the infrastructure resources:
```python
from constructs import Construct
from cdktf import S3Backend, TerraformStack
from cdktf_cdktf_provider_aws.provider import AwsProvider
from cdktf_cdktf_provider_aws.ecs_cluster import EcsCluster
from cdktf_cdktf_provider_aws.lb import Lb
from cdktf_cdktf_provider_aws.lb_listener import (
    LbListener,
    LbListenerDefaultAction,
    LbListenerDefaultActionFixedResponse,
)
from cdktf_cdktf_provider_aws.security_group import (
    SecurityGroup,
    SecurityGroupIngress,
    SecurityGroupEgress,
)


class InfraStack(TerraformStack):
    def __init__(self, scope: Construct, ns: str, network: dict, params: dict):
        super().__init__(scope, ns)

        self.region = params["region"]

        # configure the AWS provider with the target region
        AwsProvider(self, "AWS", region=self.region)

        # use S3 as backend
        S3Backend(
            self,
            bucket=params["backend_bucket"],
            key=params["backend_key_prefix"] + "/load_balancer.tfstate",
            region=self.region,
        )

        # create the ALB security group (HTTP from anywhere, all egress)
        alb_sg = SecurityGroup(
            self,
            "alb-sg",
            vpc_id=network["vpc_id"],
            ingress=[
                SecurityGroupIngress(
                    protocol="tcp", from_port=80, to_port=80, cidr_blocks=["0.0.0.0/0"]
                )
            ],
            egress=[
                SecurityGroupEgress(
                    protocol="-1", from_port=0, to_port=0, cidr_blocks=["0.0.0.0/0"]
                )
            ],
        )

        # create the ALB in the public subnets
        alb = Lb(
            self,
            "alb",
            internal=False,
            load_balancer_type="application",
            security_groups=[alb_sg.id],
            subnets=network["public_subnets"],
        )

        # create the LB listener with a fixed 404 as the default action
        alb_listener = LbListener(
            self,
            "alb-listener",
            load_balancer_arn=alb.arn,
            port=80,
            protocol="HTTP",
            default_action=[
                LbListenerDefaultAction(
                    type="fixed-response",
                    fixed_response=LbListenerDefaultActionFixedResponse(
                        content_type="text/plain",
                        status_code="404",
                        message_body="Could not find the resource you are looking for",
                    ),
                )
            ],
        )

        # create the ECS cluster
        cluster = EcsCluster(self, "cluster", name=params["cluster_name"])

        # terraform outputs, shared with the service stack
        self.alb_arn = alb.arn
        self.alb_listener = alb_listener.arn
        self.alb_sg = alb_sg.id
        self.cluster_id = cluster.id
```
Update the main.py file to register the infra stack:
```python
...
CLUSTER_NAME = "cdktf-samples"
...
infra = InfraStack(
    app,
    "infra",
    {
        "vpc_id": network.vpc_id,
        "public_subnets": network.public_subnets,
    },
    {
        "region": AWS_REGION,
        "backend_bucket": BACKEND_S3_BUCKET,
        "backend_key_prefix": BACKEND_S3_KEY,
        "cluster_name": CLUSTER_NAME,
    },
)
...
```
Deploy the stacks with the following:

```shell
$ cdktf deploy network infra
```
To get the list of stacks defined in the application, run `cdktf list`.

3. The service layer

Add the service_stack.py file to your project:

```shell
$ mkdir apps
$ cd apps && touch service_stack.py
```

Add the code that creates all the ECS service resources; a minimal sketch follows below.
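The complete service stack is available in the GitHub repo. As a rough sketch of its shape (the resource names and the params["execution_role_arn"] and params["image"] inputs are illustrative assumptions here, not the repo's exact code), it registers a target group on the ALB listener, defines a Fargate task, and runs it as an ECS service in the private subnets:

```python
# apps/service_stack.py, a minimal sketch, not the repo's exact code.
# Assumed inputs: params["execution_role_arn"] (an existing ECS task
# execution role) and params["image"] (the container image URI).
import json

from constructs import Construct
from cdktf import S3Backend, TerraformStack
from cdktf_cdktf_provider_aws.provider import AwsProvider
from cdktf_cdktf_provider_aws.security_group import (
    SecurityGroup, SecurityGroupIngress, SecurityGroupEgress,
)
from cdktf_cdktf_provider_aws.lb_target_group import (
    LbTargetGroup, LbTargetGroupHealthCheck,
)
from cdktf_cdktf_provider_aws.lb_listener_rule import (
    LbListenerRule, LbListenerRuleAction,
    LbListenerRuleCondition, LbListenerRuleConditionPathPattern,
)
from cdktf_cdktf_provider_aws.ecs_task_definition import EcsTaskDefinition
from cdktf_cdktf_provider_aws.ecs_service import (
    EcsService, EcsServiceLoadBalancer, EcsServiceNetworkConfiguration,
)


class ServiceStack(TerraformStack):
    def __init__(self, scope: Construct, ns: str, network: dict, infra: dict, params: dict):
        super().__init__(scope, ns)

        AwsProvider(self, "AWS", region=params["region"])
        S3Backend(
            self,
            bucket=params["backend_bucket"],
            key=params["backend_key_prefix"] + "/service.tfstate",
            region=params["region"],
        )

        # target group the ALB forwards to; /healthcheck is the probe endpoint
        tg = LbTargetGroup(
            self, "service-tg",
            port=8080, protocol="HTTP", target_type="ip",
            vpc_id=network["vpc_id"],
            health_check=LbTargetGroupHealthCheck(path="/healthcheck", matcher="200"),
        )

        # forward every path on the ALB listener to the target group
        LbListenerRule(
            self, "service-rule",
            listener_arn=infra["alb_listener"],
            action=[LbListenerRuleAction(type="forward", target_group_arn=tg.arn)],
            condition=[LbListenerRuleCondition(
                path_pattern=LbListenerRuleConditionPathPattern(values=["/*"])
            )],
        )

        # allow only the ALB to reach the tasks on port 8080
        service_sg = SecurityGroup(
            self, "service-sg",
            vpc_id=network["vpc_id"],
            ingress=[SecurityGroupIngress(
                protocol="tcp", from_port=8080, to_port=8080,
                security_groups=[infra["alb_sg"]],
            )],
            egress=[SecurityGroupEgress(
                protocol="-1", from_port=0, to_port=0, cidr_blocks=["0.0.0.0/0"]
            )],
        )

        # Fargate task definition running the containerized API
        task = EcsTaskDefinition(
            self, "task",
            family="java-api",
            cpu="256", memory="512",
            network_mode="awsvpc",
            requires_compatibilities=["FARGATE"],
            execution_role_arn=params["execution_role_arn"],  # assumed to exist
            container_definitions=json.dumps([{
                "name": "java-api",
                "image": params["image"],  # assumed: pushed by the CI pipeline
                "portMappings": [{"containerPort": 8080}],
            }]),
        )

        # the ECS Fargate service, placed in the private subnets
        EcsService(
            self, "service",
            name="java-api",
            cluster=infra["cluster_id"],
            task_definition=task.arn,
            desired_count=1,
            launch_type="FARGATE",
            network_configuration=EcsServiceNetworkConfiguration(
                subnets=network["private_subnets"],
                security_groups=[service_sg.id],
            ),
            load_balancer=[EcsServiceLoadBalancer(
                target_group_arn=tg.arn,
                container_name="java-api",
                container_port=8080,
            )],
        )
```

Update the main.py file (one last time!) to instantiate ServiceStack with the outputs of the network and infra stacks, then deploy everything, for example with `cdktf deploy network infra service`. Here we go!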
Step 4: The GitHub Actions workflow
To automate the deployment, let's integrate a GitHub Actions workflow into our repository. After enabling GitHub Actions, set the secrets and variables for your repository, create the .github/workflows/deploy.yml file, and add the deployment workflow to it (the workflow content is not reproduced here; you can find the complete file in the GitHub repo).

Our workflow works fine: the service was deployed successfully, as shown in the image below.
Test your deployment with a small script that calls the API repeatedly (replace the ALB URL with yours).
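A minimal smoke test in Python could look like this (ALB_URL is a placeholder; replace it with your ALB's DNS name):

```python
# smoke_test.py: hammer the /ping endpoint to verify the deployment.
# ALB_URL is a placeholder; replace it with your ALB's DNS name.
import time
import urllib.request

ALB_URL = "http://<your-alb-dns-name>"

while True:
    with urllib.request.urlopen(f"{ALB_URL}/ping") as response:
        print(response.status, response.read().decode().strip())
    time.sleep(1)
```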
The ALB is now ready to serve traffic!
Final thoughts

By leveraging AWS CDKTF, we can write clean, maintainable IaC using Python. This approach simplifies the deployment of containerized applications, such as Spring Boot APIs on AWS ECS Fargate.

CDKTF's flexibility, combined with the power of Terraform, makes it an excellent choice for modern cloud deployments.
The CDKTF project offers many interesting features for infrastructure management, but I must admit that I sometimes find it a bit verbose.
Do you have any experience with CDKTF? Have you used it in production?

Feel free to share your experience with us.