When a Java developer asked me how to deploy their Spring Boot API on AWS ECS, I saw it as the perfect opportunity to dive into the latest updates of the CDKTF (Cloud Development Kit for Terraform) project. In a previous article, I introduced CDKTF, a framework that lets you write infrastructure as code (IaC) using general-purpose programming languages such as Python. Since then, CDKTF has reached its first GA release, making it the perfect time to revisit it. In this article, we will deploy a Spring Boot API on AWS ECS using CDKTF.
You can find the code for this article on my GitHub repo.
Architecture Overview
Before diving into the implementation, let's review the architecture we aim to deploy:
In this diagram, we can break the architecture down into three layers:

Network:
- VPC
- Public and private subnets
- Internet Gateway
- NAT Gateways

Infrastructure:
- ALB (Application Load Balancer)
- Listeners
- ECS cluster

Service:
- ECS service
- Task definition
Step 1: Containerize Your Spring Boot Application
The Java API we are deploying is available on GitHub.
It defines a simple REST API with three endpoints:

- /ping: returns the string "pong". This endpoint is useful for testing the API's responsiveness. It also increments a Prometheus counter metric for monitoring.
- /healthcheck: returns "OK" and serves as a health-check endpoint to confirm that the application is running correctly. Like /ping, it updates a Prometheus counter for observability.
- /hello: accepts a name query parameter (defaulting to "World") and returns a personalized greeting, e.g., "Hello, [name]!". This endpoint is also integrated with a Prometheus counter.
Let's add the Dockerfile:

FROM maven:3.9-amazoncorretto-21 AS builder
WORKDIR /app
COPY pom.xml .
COPY src src
RUN mvn clean package

# amazon java distribution
FROM amazoncorretto:21-alpine
COPY --from=builder /app/target/*.jar /app/java-api.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app/java-api.jar"]

Our application is ready for deployment!

Step 2: Setting Up AWS CDKTF

AWS CDKTF allows you to define and manage AWS resources using Python.

1. Prerequisites
2. Install CDKTF and Dependencies
Make sure you have the necessary tools by installing CDKTF and its dependencies:
- [**python (3.13)**](https://www.python.org/)
- [**pipenv**](https://pipenv.pypa.io/en/latest/)
- [**npm**](https://nodejs.org/en/)

Installing the CDKTF CLI (npm install -g cdktf-cli@latest) lets you spin up new projects for various languages.
3. Initialize Your CDKTF Application
We can create our CDKTF application by running the init commands shown below. A number of files are created by default, and all dependencies are installed. Here is the initial main.py file:

A stack represents a set of infrastructure resources that CDK for Terraform (CDKTF) compiles into a distinct Terraform configuration. Stacks enable separate state management for the different environments of an application. To share resources across layers, we will leverage cross-stack references.

Add the network_stack.py file to your project, then add the code below to create all of the network resources. Next, edit the main.py file, generate the Terraform configuration files by running cdktf synth, and deploy the network stack with cdktf deploy network. Our VPC is ready, as shown in the image below.

Add the infra_stack.py file to your project, add the code below to create all of the infrastructure resources, edit the main.py file, and deploy the infrastructure stack. Note the DNS name of the ALB; we will use it later.

Add the service_stack.py file to your project, add the code below to create all of the ECS service resources, update main.py (one last time?), and deploy the service stack.

Here we are! We have successfully created all of the resources needed to deploy a new service on AWS ECS Fargate. Run cdktf list to see the list of your stacks.

After enabling GitHub Actions, set up the secrets and variables for your repository, create the .github/workflows/deploy.yml file, and add the deployment workflow to it. The service is deployed successfully, as shown in the image below:
The ALB is now ready to serve traffic!
By leveraging AWS CDKTF, we can write clean, maintainable infrastructure-as-code (IaC) using Python. This approach simplifies the deployment of containerized applications, such as a Spring Boot API, on AWS ECS Fargate.
CDKTF's flexibility, combined with the power of Terraform, makes it an excellent choice for modern cloud deployments.
Step 3: Build the Layers
1. Network Layer
$ npm install -g cdktf-cli@latest
# init the project using aws provider
$ mkdir samples-fargate
$ cd samples-fargate && cdktf init --template=python --providers=aws
#!/usr/bin/env python
from constructs import Construct
from cdktf import App, TerraformStack
class MyStack(TerraformStack):
def __init__(self, scope: Construct, id: str):
super().__init__(scope, id)
# define resources here
app = App()
MyStack(app, "aws-cdktf-samples-fargate")
app.synth()
$ mkdir infra
$ cd infra && touch network_stack.py
from constructs import Construct
from cdktf import S3Backend, TerraformStack
from cdktf_cdktf_provider_aws.provider import AwsProvider
from cdktf_cdktf_provider_aws.vpc import Vpc
from cdktf_cdktf_provider_aws.subnet import Subnet
from cdktf_cdktf_provider_aws.eip import Eip
from cdktf_cdktf_provider_aws.nat_gateway import NatGateway
from cdktf_cdktf_provider_aws.route import Route
from cdktf_cdktf_provider_aws.route_table import RouteTable
from cdktf_cdktf_provider_aws.route_table_association import RouteTableAssociation
from cdktf_cdktf_provider_aws.internet_gateway import InternetGateway
class NetworkStack(TerraformStack):
def __init__(self, scope: Construct, ns: str, params: dict):
super().__init__(scope, ns)
self.region = params["region"]
# configure the AWS provider to use the us-east-1 region
AwsProvider(self, "AWS", region=self.region)
# use S3 as backend
S3Backend(
self,
bucket=params["backend_bucket"],
key=params["backend_key_prefix"] + "/network.tfstate",
region=self.region,
)
# create the vpc
vpc_demo = Vpc(self, "vpc-demo", cidr_block="192.168.0.0/16")
# create two public subnets
public_subnet1 = Subnet(
self,
"public-subnet-1",
vpc_id=vpc_demo.id,
availability_zone=f"{self.region}a",
cidr_block="192.168.1.0/24",
)
public_subnet2 = Subnet(
self,
"public-subnet-2",
vpc_id=vpc_demo.id,
availability_zone=f"{self.region}b",
cidr_block="192.168.2.0/24",
)
# create the internet gateway
igw = InternetGateway(self, "igw", vpc_id=vpc_demo.id)
# create the public route table
public_rt = Route(
self,
"public-rt",
route_table_id=vpc_demo.main_route_table_id,
destination_cidr_block="0.0.0.0/0",
gateway_id=igw.id,
)
# create the private subnets
private_subnet1 = Subnet(
self,
"private-subnet-1",
vpc_id=vpc_demo.id,
availability_zone=f"{self.region}a",
cidr_block="192.168.10.0/24",
)
private_subnet2 = Subnet(
self,
"private-subnet-2",
vpc_id=vpc_demo.id,
availability_zone=f"{self.region}b",
cidr_block="192.168.20.0/24",
)
# create the Elastic IPs
eip1 = Eip(self, "nat-eip-1", depends_on=[igw])
eip2 = Eip(self, "nat-eip-2", depends_on=[igw])
# create the NAT Gateways
private_nat_gw1 = NatGateway(
self,
"private-nat-1",
subnet_id=public_subnet1.id,
allocation_id=eip1.id,
)
private_nat_gw2 = NatGateway(
self,
"private-nat-2",
subnet_id=public_subnet2.id,
allocation_id=eip2.id,
)
# create Route Tables
private_rt1 = RouteTable(self, "private-rt1", vpc_id=vpc_demo.id)
private_rt2 = RouteTable(self, "private-rt2", vpc_id=vpc_demo.id)
# add default routes to tables
Route(
self,
"private-rt1-default-route",
route_table_id=private_rt1.id,
destination_cidr_block="0.0.0.0/0",
nat_gateway_id=private_nat_gw1.id,
)
Route(
self,
"private-rt2-default-route",
route_table_id=private_rt2.id,
destination_cidr_block="0.0.0.0/0",
nat_gateway_id=private_nat_gw2.id,
)
# associate the private subnets with their route tables
# (the public subnets use the VPC's main route table, which already
# routes 0.0.0.0/0 through the internet gateway)
RouteTableAssociation(
self,
"private-rt1-association",
subnet_id=private_subnet1.id,
route_table_id=private_rt1.id,
)
RouteTableAssociation(
self,
"private-rt2-association",
subnet_id=private_subnet2.id,
route_table_id=private_rt2.id,
)
# terraform outputs
self.vpc_id = vpc_demo.id
self.public_subnets = [public_subnet1.id, public_subnet2.id]
self.private_subnets = [private_subnet1.id, private_subnet2.id]
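As a quick aside, the addressing plan used above can be sanity-checked offline with Python's standard-library ipaddress module. This check is purely illustrative and is not part of the stacks; the CIDRs are the ones defined in NetworkStack:

```python
import ipaddress

# CIDR blocks used by NetworkStack above
vpc = ipaddress.ip_network("192.168.0.0/16")
subnets = {
    "public-subnet-1": ipaddress.ip_network("192.168.1.0/24"),
    "public-subnet-2": ipaddress.ip_network("192.168.2.0/24"),
    "private-subnet-1": ipaddress.ip_network("192.168.10.0/24"),
    "private-subnet-2": ipaddress.ip_network("192.168.20.0/24"),
}

# every subnet must sit inside the VPC block
assert all(net.subnet_of(vpc) for net in subnets.values())

# no two subnets may overlap
names = list(subnets)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        assert not subnets[a].overlaps(subnets[b]), f"{a} overlaps {b}"

print("addressing plan OK")
```

Catching an overlapping or out-of-range CIDR here is cheaper than discovering it during a `cdktf deploy`.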
2. Infrastructure Layer
#!/usr/bin/env python
from constructs import Construct
from cdktf import App, TerraformStack
from infra.network_stack import NetworkStack
ENV = "dev"
AWS_REGION = "us-east-1"
BACKEND_S3_BUCKET = "blog.abdelfare.me"
BACKEND_S3_KEY = f"{ENV}/cdktf-samples"
class MyStack(TerraformStack):
def __init__(self, scope: Construct, id: str):
super().__init__(scope, id)
# define resources here
app = App()
MyStack(app, "aws-cdktf-samples-fargate")
network = NetworkStack(
app,
"network",
{
"region": AWS_REGION,
"backend_bucket": BACKEND_S3_BUCKET,
"backend_key_prefix": BACKEND_S3_KEY,
},
)
app.synth()
$ cdktf synth
$ cdktf deploy network
$ cd infra && touch infra_stack.py
3. Service Layer
from constructs import Construct
from cdktf import S3Backend, TerraformStack
from cdktf_cdktf_provider_aws.provider import AwsProvider
from cdktf_cdktf_provider_aws.ecs_cluster import EcsCluster
from cdktf_cdktf_provider_aws.lb import Lb
from cdktf_cdktf_provider_aws.lb_listener import (
LbListener,
LbListenerDefaultAction,
LbListenerDefaultActionFixedResponse,
)
from cdktf_cdktf_provider_aws.security_group import (
SecurityGroup,
SecurityGroupIngress,
SecurityGroupEgress,
)
class InfraStack(TerraformStack):
def __init__(self, scope: Construct, ns: str, network: dict, params: dict):
super().__init__(scope, ns)
self.region = params["region"]
# Configure the AWS provider to use the us-east-1 region
AwsProvider(self, "AWS", region=self.region)
# use S3 as backend
S3Backend(
self,
bucket=params["backend_bucket"],
key=params["backend_key_prefix"] + "/load_balancer.tfstate",
region=self.region,
)
# create the ALB security group
alb_sg = SecurityGroup(
self,
"alb-sg",
vpc_id=network["vpc_id"],
ingress=[
SecurityGroupIngress(
protocol="tcp", from_port=80, to_port=80, cidr_blocks=["0.0.0.0/0"]
)
],
egress=[
SecurityGroupEgress(
protocol="-1", from_port=0, to_port=0, cidr_blocks=["0.0.0.0/0"]
)
],
)
# create the ALB
alb = Lb(
self,
"alb",
internal=False,
load_balancer_type="application",
security_groups=[alb_sg.id],
subnets=network["public_subnets"],
)
# create the LB Listener
alb_listener = LbListener(
self,
"alb-listener",
load_balancer_arn=alb.arn,
port=80,
protocol="HTTP",
default_action=[
LbListenerDefaultAction(
type="fixed-response",
fixed_response=LbListenerDefaultActionFixedResponse(
content_type="text/plain",
status_code="404",
message_body="Could not find the resource you are looking for",
),
)
],
)
# create the ECS cluster
cluster = EcsCluster(self, "cluster", name=params["cluster_name"])
self.alb_arn = alb.arn
self.alb_listener = alb_listener.arn
self.alb_sg = alb_sg.id
self.cluster_id = cluster.id
...
CLUSTER_NAME = "cdktf-samples"
...
infra = InfraStack(
app,
"infra",
{
"vpc_id": network.vpc_id,
"public_subnets": network.public_subnets,
},
{
"region": AWS_REGION,
"backend_bucket": BACKEND_S3_BUCKET,
"backend_key_prefix": BACKEND_S3_KEY,
"cluster_name": CLUSTER_NAME,
},
)
...
$ cdktf deploy network infra
$ mkdir apps
$ cd apps && touch service_stack.py
from constructs import Construct
import json
from cdktf import S3Backend, TerraformStack, Token, TerraformOutput
from cdktf_cdktf_provider_aws.provider import AwsProvider
from cdktf_cdktf_provider_aws.ecs_service import (
EcsService,
EcsServiceLoadBalancer,
EcsServiceNetworkConfiguration,
)
from cdktf_cdktf_provider_aws.ecr_repository import (
EcrRepository,
EcrRepositoryImageScanningConfiguration,
)
from cdktf_cdktf_provider_aws.ecr_lifecycle_policy import EcrLifecyclePolicy
from cdktf_cdktf_provider_aws.ecs_task_definition import (
EcsTaskDefinition,
)
from cdktf_cdktf_provider_aws.lb_listener_rule import (
LbListenerRule,
LbListenerRuleAction,
LbListenerRuleCondition,
LbListenerRuleConditionPathPattern,
)
from cdktf_cdktf_provider_aws.lb_target_group import (
LbTargetGroup,
LbTargetGroupHealthCheck,
)
from cdktf_cdktf_provider_aws.security_group import (
SecurityGroup,
SecurityGroupIngress,
SecurityGroupEgress,
)
from cdktf_cdktf_provider_aws.cloudwatch_log_group import CloudwatchLogGroup
from cdktf_cdktf_provider_aws.data_aws_iam_policy_document import (
DataAwsIamPolicyDocument,
)
from cdktf_cdktf_provider_aws.iam_role import IamRole
from cdktf_cdktf_provider_aws.iam_role_policy_attachment import IamRolePolicyAttachment
class ServiceStack(TerraformStack):
def __init__(
self, scope: Construct, ns: str, network: dict, infra: dict, params: dict
):
super().__init__(scope, ns)
self.region = params["region"]
# Configure the AWS provider to use the us-east-1 region
AwsProvider(self, "AWS", region=self.region)
# use S3 as backend
S3Backend(
self,
bucket=params["backend_bucket"],
key=params["backend_key_prefix"] + "/" + params["app_name"] + ".tfstate",
region=self.region,
)
# create the service security group
svc_sg = SecurityGroup(
self,
"svc-sg",
vpc_id=network["vpc_id"],
ingress=[
SecurityGroupIngress(
protocol="tcp",
from_port=params["app_port"],
to_port=params["app_port"],
security_groups=[infra["alb_sg"]],
)
],
egress=[
SecurityGroupEgress(
protocol="-1", from_port=0, to_port=0, cidr_blocks=["0.0.0.0/0"]
)
],
)
# create the service target group
svc_tg = LbTargetGroup(
self,
"svc-target-group",
name="svc-tg",
port=params["app_port"],
protocol="HTTP",
vpc_id=network["vpc_id"],
target_type="ip",
health_check=LbTargetGroupHealthCheck(path="/ping", matcher="200"),
)
# create the service listener rule
LbListenerRule(
self,
"alb-rule",
listener_arn=infra["alb_listener"],
action=[LbListenerRuleAction(type="forward", target_group_arn=svc_tg.arn)],
condition=[
LbListenerRuleCondition(
path_pattern=LbListenerRuleConditionPathPattern(values=["/*"])
)
],
)
# create the ECR repository
repo = EcrRepository(
self,
params["app_name"],
image_scanning_configuration=EcrRepositoryImageScanningConfiguration(
scan_on_push=True
),
image_tag_mutability="MUTABLE",
name=params["app_name"],
)
EcrLifecyclePolicy(
self,
"this",
repository=repo.name,
policy=json.dumps(
{
"rules": [
{
"rulePriority": 1,
"description": "Keep last 10 images",
"selection": {
"tagStatus": "tagged",
"tagPrefixList": ["v"],
"countType": "imageCountMoreThan",
"countNumber": 10,
},
"action": {"type": "expire"},
},
{
"rulePriority": 2,
"description": "Expire images older than 3 days",
"selection": {
"tagStatus": "untagged",
"countType": "sinceImagePushed",
"countUnit": "days",
"countNumber": 3,
},
"action": {"type": "expire"},
},
]
}
),
)
# create the service log group
service_log_group = CloudwatchLogGroup(
self,
"svc_log_group",
name=params["app_name"],
retention_in_days=1,
)
ecs_assume_role = DataAwsIamPolicyDocument(
self,
"assume_role",
statement=[
{
"actions": ["sts:AssumeRole"],
"principals": [
{
"identifiers": ["ecs-tasks.amazonaws.com"],
"type": "Service",
},
],
},
],
)
# create the service execution role
service_execution_role = IamRole(
self,
"service_execution_role",
assume_role_policy=ecs_assume_role.json,
name=params["app_name"] + "-exec-role",
)
IamRolePolicyAttachment(
self,
"ecs_role_policy",
policy_arn="arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy",
role=service_execution_role.name,
)
# create the service task role
service_task_role = IamRole(
self,
"service_task_role",
assume_role_policy=ecs_assume_role.json,
name=params["app_name"] + "-task-role",
)
# create the service task definition
task = EcsTaskDefinition(
self,
"svc-task",
family="service",
network_mode="awsvpc",
requires_compatibilities=["FARGATE"],
cpu="256",
memory="512",
task_role_arn=service_task_role.arn,
execution_role_arn=service_execution_role.arn,
container_definitions=json.dumps(
[
{
"name": "svc",
"image": f"{repo.repository_url}:latest",
"networkMode": "awsvpc",
"healthCheck": {
"Command": ["CMD-SHELL", "echo hello"],
"Interval": 5,
"Timeout": 2,
"Retries": 3,
},
"portMappings": [
{
"containerPort": params["app_port"],
"hostPort": params["app_port"],
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": service_log_group.name,
"awslogs-region": params["region"],
"awslogs-stream-prefix": params["app_name"],
},
},
}
]
),
)
# create the ECS service
EcsService(
self,
"ecs_service",
name=params["app_name"] + "-service",
cluster=infra["cluster_id"],
task_definition=task.arn,
desired_count=params["desired_count"],
launch_type="FARGATE",
force_new_deployment=True,
network_configuration=EcsServiceNetworkConfiguration(
subnets=network["private_subnets"],
security_groups=[svc_sg.id],
),
load_balancer=[
EcsServiceLoadBalancer(
target_group_arn=svc_tg.arn,
container_name="svc",
container_port=params["app_port"],
)
],
)
TerraformOutput(
self,
"ecr_repository_url",
description="url of the ecr repo",
value=repo.repository_url,
)
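Since the ECR lifecycle policy is embedded in the stack as a JSON string, it can be linted separately before a deploy. The snippet below is illustrative only; it uses the same rules as the service stack and checks that rule priorities are unique and that the document round-trips through json:

```python
import json

# the lifecycle policy attached to the ECR repository above
policy = {
    "rules": [
        {
            "rulePriority": 1,
            "description": "Keep last 10 images",
            "selection": {
                "tagStatus": "tagged",
                "tagPrefixList": ["v"],
                "countType": "imageCountMoreThan",
                "countNumber": 10,
            },
            "action": {"type": "expire"},
        },
        {
            "rulePriority": 2,
            "description": "Expire images older than 3 days",
            "selection": {
                "tagStatus": "untagged",
                "countType": "sinceImagePushed",
                "countUnit": "days",
                "countNumber": 3,
            },
            "action": {"type": "expire"},
        },
    ]
}

# ECR requires unique rule priorities; rules are evaluated in ascending order
priorities = [rule["rulePriority"] for rule in policy["rules"]]
assert len(priorities) == len(set(priorities)), "duplicate rulePriority"

# json.dumps produces the exact string the stack passes to the provider
assert json.loads(json.dumps(policy)) == policy
print("lifecycle policy OK")
```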
Step 4: GitHub Actions Workflow
To automate the deployment, let's integrate a GitHub Actions workflow into our project.
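The workflow file itself is not reproduced above, so here is a minimal sketch of what .github/workflows/deploy.yml could look like. The action versions, secret names, repository name, and stack names are assumptions to adapt to your setup:

```yaml
name: deploy
on:
  push:
    branches: [main]

env:
  AWS_REGION: us-east-1
  ECR_REPOSITORY: java-api

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build and push the image
        run: |
          docker build -t ${{ steps.ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }}:latest .
          docker push ${{ steps.ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }}:latest

      - name: Deploy the stacks
        run: |
          npm install -g cdktf-cli@latest
          pipenv install
          # add your service stack name alongside network and infra
          cdktf deploy --auto-approve network infra
```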
Our workflow is working fine:
Test your deployment using the following script (replace the ALB URL):
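The original test script isn't included above, so here is a minimal Python stand-in. The exact response bodies are assumptions based on the endpoint descriptions earlier; replace the ALB DNS name with your own:

```python
import urllib.request

def fetch(base_url: str, path: str) -> str:
    """GET base_url + path and return the response body as stripped text."""
    with urllib.request.urlopen(base_url + path, timeout=5) as resp:
        return resp.read().decode().strip()

def smoke_test(base_url: str) -> None:
    """Exercise the three endpoints exposed by the Spring Boot API."""
    assert fetch(base_url, "/ping") == "pong"
    assert fetch(base_url, "/healthcheck").lower() == "ok"
    assert "CDKTF" in fetch(base_url, "/hello?name=CDKTF")
    print("all endpoints OK")

# Replace with the DNS name of your ALB, for example:
# smoke_test("http://my-alb-123456.us-east-1.elb.amazonaws.com")
```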
Final Thoughts
The CDKTF project offers many interesting features for infrastructure management, but I have to admit that I sometimes find it a bit verbose.
Do you have any experience with CDKTF? Have you used it in production?
Feel free to share your experience with us.