Saturday, May 29, 2021

Creating and operating a Kubernetes cluster on AWS with kOps and Terraform

Combining kOps and Terraform, deploy a Kubernetes cluster using the "Example route tables in the single zone architecture with the firewall" configuration from "Simple single zone architecture with an internet gateway - AWS Network Firewall".

The basic flow is the same as "Deploying Kubernetes clusters with kops and Terraform | by Alberto Alvarez | Bench Engineering | Medium".

Prerequisites

  • OS: Windows 10 Pro
  • Docker: Docker version 20.10.6, build 370c289

Preliminary AWS setup

Create an administrative IAM user

Add a user group

  • User group name: webide-admin
  • Permission policies
    • AmazonEC2FullAccess
    • AmazonS3FullAccess
    • AmazonVPCFullAccess
    • IAMFullAccess
    • CloudWatchFullAccess
    • An inline policy granting full access to Network Firewall (see the example policy after this list)
  • S3 bucket for the Terraform backend already created
    • Bucket name: mikoto2000-terraform-state
    • Region: ap-northeast-1
  • S3 bucket for the kOps state store already created
    • Bucket name: eclipse-che.k8s.local-kops-state-store
    • Region: ap-northeast-1
  • Elastic IP already allocated
    • EIP ID: eipalloc-0562e26282f012ea8
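
The inline policy granting full access to Network Firewall can look roughly like the following (a minimal sketch of what was attached to the group via the console; the Sid is arbitrary):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "NetworkFirewallFullAccess",
      "Effect": "Allow",
      "Action": "network-firewall:*",
      "Resource": "*"
    }
  ]
}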

Add a user

  • User name: webide-admin
  • Access type: Programmatic access
  • Permissions: add to the webide-admin group

Note down the access key ID and secret access key.

Start the working environment

docker run -it --rm -v "$(pwd):/work" --workdir "/work" mikoto2000/eks-tools

Organize the necessary information

export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

PREFIX=eclipse-che
export KOPS_CLUSTER_NAME=${PREFIX}.k8s.local
export KOPS_STATE_STORE=s3://${KOPS_CLUSTER_NAME}-kops-state-store

export DOMAIN="35-74-43-49.nip.io"
export TF_VAR_cluster_name=$KOPS_CLUSTER_NAME
export TF_VAR_domain="35-74-43-49.nip.io"
WORK_DIR=/work

ssh-keygen

Build the VPC and firewall with Terraform

Create the following tf file and deploy it with Terraform.

${WORK_DIR}/terraform/main.tf

# Deploys the "Example route tables in the single zone architecture
# with the firewall" configuration from the following URL:
#
# https://docs.aws.amazon.com/network-firewall/latest/developerguide/arch-single-zone-igw.html

# variables
variable "domain" {}
variable "cluster_name" {}

# local values
locals {
  cluster_tfname               = replace("${var.cluster_name}", ".", "-")
  node_subnet_ids              = [aws_subnet.ap-northeast-1a-k8s-cluster.id]
  region                       = "ap-northeast-1"
  route_table_public_id        = aws_route_table.k8s-cluster.id
  subnet_ap-northeast-1a_id    = aws_subnet.ap-northeast-1a-k8s-cluster.id
  subnet_ap-northeast-1a-fw_id = aws_subnet.ap-northeast-1a-k8s-cluster-fw.id
  vpc_cidr_block               = aws_vpc.k8s-cluster.cidr_block
  vpc_id                       = aws_vpc.k8s-cluster.id
  fwrg_k8s_aws_arn             = aws_networkfirewall_rule_group.k8s-aws.arn
  fwrg_k8s_aws_id              = aws_networkfirewall_rule_group.k8s-aws.id
}

output "cluster_name" {
  value = "${var.cluster_name}"
}

output "cluster_tfname" {
  value = local.cluster_tfname
}

output "domain" {
  value = "${var.domain}"
}

output "node_subnet_ids" {
  value = [aws_subnet.ap-northeast-1a-k8s-cluster.id]
}

output "region" {
  value = "ap-northeast-1"
}

output "route_table_public_id" {
  value = aws_route_table.k8s-cluster.id
}

output "subnet_ap-northeast-1a_id" {
  value = aws_subnet.ap-northeast-1a-k8s-cluster.id
}

output "subnet_ap-northeast-1a_cidr_block" {
  value = aws_subnet.ap-northeast-1a-k8s-cluster.cidr_block
}

output "subnet_ap-northeast-1a-fw_id" {
  value = aws_subnet.ap-northeast-1a-k8s-cluster-fw.id
}

output "subnet_ap-northeast-1a-fw_cidr_block" {
  value = aws_subnet.ap-northeast-1a-k8s-cluster-fw.cidr_block
}

output "vpc_cidr_block" {
  value = aws_vpc.k8s-cluster.cidr_block
}

output "vpc_id" {
  value = aws_vpc.k8s-cluster.id
}

output "subnets" {
  value = aws_subnet.ap-northeast-1a-k8s-cluster-fw.*.id
}

output "subnet_count" {
  value = length(aws_subnet.ap-northeast-1a-k8s-cluster-fw.*.id)
}

output "fwrg_k8s_aws_arn" {
  value = aws_networkfirewall_rule_group.k8s-aws.arn
}

output "fwrg_k8s_aws_id" {
  value = aws_networkfirewall_rule_group.k8s-aws.id
}

output "firewall_status" {
  value = tolist(aws_networkfirewall_firewall.k8s-cluster-fw.firewall_status[0].sync_states)
}

output "firewall_endpoint_id" {
  value = element([for ss in tolist(aws_networkfirewall_firewall.k8s-cluster-fw.firewall_status[0].sync_states) : ss.attachment[0].endpoint_id if ss.attachment[0].subnet_id == aws_subnet.ap-northeast-1a-k8s-cluster-fw.id], 0)
}

output "AmazonEKSClusterAutoscalerPolicy_arn" {
  value = aws_iam_policy.AmazonEKSClusterAutoscalerPolicy.arn
}

output "AWSLoadBalancerControllerIAMPolicy_arn" {
  value = aws_iam_policy.AWSLoadBalancerControllerIAMPolicy.arn
}


# VPC

resource "aws_vpc" "k8s-cluster" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
  tags = {
    "KubernetesCluster"                           = "${var.cluster_name}"
    "Name"                                        = "${var.cluster_name}"
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }
}

resource "aws_subnet" "ap-northeast-1a-k8s-cluster-fw" {
  availability_zone = "ap-northeast-1a"
  cidr_block        = "10.0.4.0/28"
  tags = {
    "Name"                                        = "ap-northeast-1a.${var.cluster_name}-fw"
    "SubnetType"                                  = "Public"
    "kubernetes.io/role/elb"                      = "1"
  }
  vpc_id = aws_vpc.k8s-cluster.id
}

resource "aws_subnet" "ap-northeast-1a-k8s-cluster" {
  availability_zone = "ap-northeast-1a"
  cidr_block        = "10.0.2.0/28"
  tags = {
    "KubernetesCluster"                           = "${var.cluster_name}"
    "Name"                                        = "ap-northeast-1a.${var.cluster_name}"
    "SubnetType"                                  = "Public"
    "kubernetes.io/role/elb"                      = "1"
  }
  vpc_id = aws_vpc.k8s-cluster.id
}

resource "aws_vpc_dhcp_options_association" "k8s-cluster" {
  dhcp_options_id = aws_vpc_dhcp_options.k8s-cluster.id
  vpc_id          = aws_vpc.k8s-cluster.id
}

resource "aws_vpc_dhcp_options" "k8s-cluster" {
  domain_name         = "ap-northeast-1.compute.internal"
  domain_name_servers = ["AmazonProvidedDNS"]
  tags = {
    "Name"                                        = "${var.cluster_name}"
  }
}

resource "aws_internet_gateway" "k8s-cluster" {
  tags = {
    "Name"                                        = "${var.cluster_name}"
  }
  vpc_id = aws_vpc.k8s-cluster.id
}


# Route Table

resource "aws_route_table" "igw2fw" {
  tags = {
    "Name"                                        = "${var.cluster_name}-igw2fw"
    "kubernetes.io/kops/role"                     = "public"
  }
  vpc_id = aws_vpc.k8s-cluster.id
}

resource "aws_route" "igw2fw-route-0-0-0-0--0" {
  vpc_endpoint_id        = element([for ss in tolist(aws_networkfirewall_firewall.k8s-cluster-fw.firewall_status[0].sync_states) : ss.attachment[0].endpoint_id if ss.attachment[0].subnet_id == aws_subnet.ap-northeast-1a-k8s-cluster-fw.id], 0)
  destination_cidr_block = "10.0.2.0/28"
  route_table_id         = aws_route_table.igw2fw.id
}

resource "aws_route_table_association" "igw2fw-association" {
  gateway_id = aws_internet_gateway.k8s-cluster.id
  route_table_id = aws_route_table.igw2fw.id
}

resource "aws_route_table" "fw2igw" {
  tags = {
    "Name"                                        = "${var.cluster_name}-fw2igw"
    "kubernetes.io/kops/role"                     = "public"
  }
  vpc_id = aws_vpc.k8s-cluster.id
}

resource "aws_route" "fw2igw-route-0-0-0-0--0" {
  destination_cidr_block = "0.0.0.0/0"
  gateway_id = aws_internet_gateway.k8s-cluster.id
  route_table_id         = aws_route_table.fw2igw.id
}

resource "aws_route_table_association" "fw2igw-association" {
  subnet_id = aws_subnet.ap-northeast-1a-k8s-cluster-fw.id
  route_table_id = aws_route_table.fw2igw.id
}

resource "aws_route_table_association" "ap-northeast-1a-k8s-cluster" {
  route_table_id = aws_route_table.k8s-cluster.id
  subnet_id      = aws_subnet.ap-northeast-1a-k8s-cluster.id
}

resource "aws_route" "route-0-0-0-0--0" {
  destination_cidr_block = "0.0.0.0/0"
  vpc_endpoint_id        = element([for ss in tolist(aws_networkfirewall_firewall.k8s-cluster-fw.firewall_status[0].sync_states) : ss.attachment[0].endpoint_id if ss.attachment[0].subnet_id == aws_subnet.ap-northeast-1a-k8s-cluster-fw.id], 0)
  route_table_id         = aws_route_table.k8s-cluster.id
}

resource "aws_route_table" "k8s-cluster" {
  tags = {
    "KubernetesCluster"                           = "${var.cluster_name}"
    "Name"                                        = "${var.cluster_name}"
    "kubernetes.io/kops/role"                     = "public"
  }
  vpc_id = aws_vpc.k8s-cluster.id
}


# Firewall
resource "aws_networkfirewall_rule_group" "k8s-aws" {
  capacity = 30
  name     = "k8s-aws"
  type     = "STATEFUL"
  rule_group {
    rules_source {
      rules_source_list {
        generated_rules_type = "ALLOWLIST"
        target_types         = ["HTTP_HOST", "TLS_SNI"]
        targets              = [
                                 "motd.ubuntu.com",
                                 "entropy.ubuntu.com",
                                 "sts.amazonaws.com",
                                 ".ap-northeast-1.amazonaws.com",
                                 "security.ubuntu.com",
                                 "ap-northeast-1.ec2.archive.ubuntu.com",
                                 "artifacts.k8s.io",
                                 "api.snapcraft.io",
                                 "api.ecr.us-west-2.amazonaws.com",
                                 "602401143452.dkr.ecr.us-west-2.amazonaws.com",
                                 "prod-us-west-2-starport-layer-bucket.s3.us-west-2.amazonaws.com",
                                 "k8s.gcr.io",
                                 "pricing.us-east-1.amazonaws.com"
                               ]
      }
    }
  }

  tags = {}
}

resource "aws_networkfirewall_rule_group" "che" {
  capacity = 30
  name     = "che"
  type     = "STATEFUL"
  rule_group {
    rules_source {
      rules_source_list {
        generated_rules_type = "ALLOWLIST"
        target_types         = ["HTTP_HOST", "TLS_SNI"]
        targets              = [
                                 ".${var.domain}",
                                 "registry.access.redhat.com",
                                 "production.cloudflare.docker.com",
                                 "storage.googleapis.com",
                                 "grafana.github.io",
                                 "prometheus-community.github.com",
                                 "github.com",
                                 "github-releases.githubusercontent.com",
                                 ".docker.io",
                                 ".quay.io",
                                 "quayio-production-s3.s3.amazonaws.com"
                               ]
      }
    }
  }

  tags = {}
}

resource "aws_networkfirewall_firewall_policy" "k8s-cluster-fwp" {
  name = "${local.cluster_tfname}-fwp"

  firewall_policy {
    stateless_default_actions          = ["aws:forward_to_sfe"]
    stateless_fragment_default_actions = ["aws:forward_to_sfe"]
    stateful_rule_group_reference {
      resource_arn = aws_networkfirewall_rule_group.k8s-aws.arn
    }
    stateful_rule_group_reference {
      resource_arn = aws_networkfirewall_rule_group.che.arn
    }
  }

  tags = {}
}

resource "aws_networkfirewall_firewall" "k8s-cluster-fw" {
  name                = "${local.cluster_tfname}-fw"
  firewall_policy_arn = aws_networkfirewall_firewall_policy.k8s-cluster-fwp.arn
  vpc_id              = aws_vpc.k8s-cluster.id
  subnet_mapping {
    subnet_id = aws_subnet.ap-northeast-1a-k8s-cluster-fw.id
  }

  tags = {}
}

resource "aws_networkfirewall_logging_configuration" "k8s-cluster-fw-logging-configuration" {
  firewall_arn = aws_networkfirewall_firewall.k8s-cluster-fw.arn
  logging_configuration {
    log_destination_config {
      log_destination = {
        logGroup = aws_cloudwatch_log_group.k8s-cluster-fw-log.name
      }
      log_destination_type = "CloudWatchLogs"
      log_type             = "ALERT"
    }
    log_destination_config {
      log_destination = {
        logGroup = aws_cloudwatch_log_group.k8s-cluster-fw-log.name
      }
      log_destination_type = "CloudWatchLogs"
      log_type             = "FLOW"
    }
  }
}

resource "aws_cloudwatch_log_group" "k8s-cluster-fw-log" {
  name = "${var.cluster_name}-fw-log"
}


# IAM


# See: [Create an IAM policy and role - Cluster Autoscaler - Amazon EKS](https://docs.aws.amazon.com/ja_jp/eks/latest/userguide/cluster-autoscaler.html#ca-create-policy)
resource "aws_iam_policy" "AmazonEKSClusterAutoscalerPolicy" {
  name        = "AmazonEKSClusterAutoscalerPolicy"

  policy = jsonencode({
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": [
          "autoscaling:DescribeAutoScalingGroups",
          "autoscaling:DescribeAutoScalingInstances",
          "autoscaling:DescribeLaunchConfigurations",
          "autoscaling:DescribeTags",
          "autoscaling:SetDesiredCapacity",
          "autoscaling:TerminateInstanceInAutoScalingGroup",
          "ec2:DescribeLaunchTemplateVersions"
        ],
        "Resource": "*",
        "Effect": "Allow"
      }
    ]
  })
}

# See: [IAM Permissions - aws-load-balancer-controller/installation.md at main · kubernetes-sigs/aws-load-balancer-controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/main/docs/deploy/installation.md#iam-permissions)
resource "aws_iam_policy" "AWSLoadBalancerControllerIAMPolicy" {
  name        = "AWSLoadBalancerControllerIAMPolicy"

  policy = jsonencode({
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "iam:CreateServiceLinkedRole",
          "ec2:DescribeAccountAttributes",
          "ec2:DescribeAddresses",
          "ec2:DescribeAvailabilityZones",
          "ec2:DescribeInternetGateways",
          "ec2:DescribeVpcs",
          "ec2:DescribeSubnets",
          "ec2:DescribeSecurityGroups",
          "ec2:DescribeInstances",
          "ec2:DescribeNetworkInterfaces",
          "ec2:DescribeTags",
          "ec2:GetCoipPoolUsage",
          "ec2:DescribeCoipPools",
          "elasticloadbalancing:DescribeLoadBalancers",
          "elasticloadbalancing:DescribeLoadBalancerAttributes",
          "elasticloadbalancing:DescribeListeners",
          "elasticloadbalancing:DescribeListenerCertificates",
          "elasticloadbalancing:DescribeSSLPolicies",
          "elasticloadbalancing:DescribeRules",
          "elasticloadbalancing:DescribeTargetGroups",
          "elasticloadbalancing:DescribeTargetGroupAttributes",
          "elasticloadbalancing:DescribeTargetHealth",
          "elasticloadbalancing:DescribeTags"
        ],
        "Resource": "*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "cognito-idp:DescribeUserPoolClient",
          "acm:ListCertificates",
          "acm:DescribeCertificate",
          "iam:ListServerCertificates",
          "iam:GetServerCertificate",
          "waf-regional:GetWebACL",
          "waf-regional:GetWebACLForResource",
          "waf-regional:AssociateWebACL",
          "waf-regional:DisassociateWebACL",
          "wafv2:GetWebACL",
          "wafv2:GetWebACLForResource",
          "wafv2:AssociateWebACL",
          "wafv2:DisassociateWebACL",
          "shield:GetSubscriptionState",
          "shield:DescribeProtection",
          "shield:CreateProtection",
          "shield:DeleteProtection"
        ],
        "Resource": "*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "ec2:AuthorizeSecurityGroupIngress",
          "ec2:RevokeSecurityGroupIngress"
        ],
        "Resource": "*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "ec2:CreateSecurityGroup"
        ],
        "Resource": "*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "ec2:CreateTags"
        ],
        "Resource": "arn:aws:ec2:*:*:security-group/*",
        "Condition": {
          "StringEquals": {
            "ec2:CreateAction": "CreateSecurityGroup"
          },
          "Null": {
            "aws:RequestTag/elbv2.k8s.aws/cluster": "false"
          }
        }
      },
      {
        "Effect": "Allow",
        "Action": [
          "ec2:CreateTags",
          "ec2:DeleteTags"
        ],
        "Resource": "arn:aws:ec2:*:*:security-group/*",
        "Condition": {
          "Null": {
            "aws:RequestTag/elbv2.k8s.aws/cluster": "true",
            "aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
          }
        }
      },
      {
        "Effect": "Allow",
        "Action": [
          "ec2:AuthorizeSecurityGroupIngress",
          "ec2:RevokeSecurityGroupIngress",
          "ec2:DeleteSecurityGroup"
        ],
        "Resource": "*",
        "Condition": {
          "Null": {
            "aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
          }
        }
      },
      {
        "Effect": "Allow",
        "Action": [
          "elasticloadbalancing:CreateLoadBalancer",
          "elasticloadbalancing:CreateTargetGroup"
        ],
        "Resource": "*",
        "Condition": {
          "Null": {
            "aws:RequestTag/elbv2.k8s.aws/cluster": "false"
          }
        }
      },
      {
        "Effect": "Allow",
        "Action": [
          "elasticloadbalancing:CreateListener",
          "elasticloadbalancing:DeleteListener",
          "elasticloadbalancing:CreateRule",
          "elasticloadbalancing:DeleteRule"
        ],
        "Resource": "*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "elasticloadbalancing:AddTags",
          "elasticloadbalancing:RemoveTags"
        ],
        "Resource": [
          "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*",
          "arn:aws:elasticloadbalancing:*:*:loadbalancer/net/*/*",
          "arn:aws:elasticloadbalancing:*:*:loadbalancer/app/*/*"
        ],
        "Condition": {
          "Null": {
            "aws:RequestTag/elbv2.k8s.aws/cluster": "true",
            "aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
          }
        }
      },
      {
        "Effect": "Allow",
        "Action": [
          "elasticloadbalancing:AddTags",
          "elasticloadbalancing:RemoveTags"
        ],
        "Resource": [
          "arn:aws:elasticloadbalancing:*:*:listener/net/*/*/*",
          "arn:aws:elasticloadbalancing:*:*:listener/app/*/*/*",
          "arn:aws:elasticloadbalancing:*:*:listener-rule/net/*/*/*",
          "arn:aws:elasticloadbalancing:*:*:listener-rule/app/*/*/*"
        ]
      },
      {
        "Effect": "Allow",
        "Action": [
          "elasticloadbalancing:ModifyLoadBalancerAttributes",
          "elasticloadbalancing:SetIpAddressType",
          "elasticloadbalancing:SetSecurityGroups",
          "elasticloadbalancing:SetSubnets",
          "elasticloadbalancing:DeleteLoadBalancer",
          "elasticloadbalancing:ModifyTargetGroup",
          "elasticloadbalancing:ModifyTargetGroupAttributes",
          "elasticloadbalancing:DeleteTargetGroup"
        ],
        "Resource": "*",
        "Condition": {
          "Null": {
            "aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
          }
        }
      },
      {
        "Effect": "Allow",
        "Action": [
          "elasticloadbalancing:RegisterTargets",
          "elasticloadbalancing:DeregisterTargets"
        ],
        "Resource": "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "elasticloadbalancing:SetWebAcl",
          "elasticloadbalancing:ModifyListener",
          "elasticloadbalancing:AddListenerCertificates",
          "elasticloadbalancing:RemoveListenerCertificates",
          "elasticloadbalancing:ModifyRule"
        ],
        "Resource": "*"
      }
    ]
  })
}


# common
terraform {
  required_version = ">= 0.12.26"
  required_providers {
    aws = {
      "source"  = "hashicorp/aws"
      "version" = ">= 2.46.0"
    }
  }
  backend "s3" {
    bucket                  = "mikoto2000-terraform-state"
    key                     = "network.tf"
    workspace_key_prefix    = "eclipse-che.k8s.local"
    region                  = "ap-northeast-1"
  }
}

provider "aws" {
  region = "ap-northeast-1"
}
cd ${WORK_DIR}/terraform
terraform init
terraform plan
terraform apply

Output the Terraform deployment results to a JSON file so that the information needed for the k8s cluster deployment can be passed to kOps.

mkdir ${WORK_DIR}/kops
terraform output -json > ${WORK_DIR}/kops/infra.json
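
For reference, terraform output -json wraps each output in an object with sensitive/type/value keys, which is why the kOps template below refers to values such as .cluster_name.value. The generated infra.json looks roughly like this (excerpt; IDs abbreviated):

{
  "cluster_name": {
    "sensitive": false,
    "type": "string",
    "value": "eclipse-che.k8s.local"
  },
  "vpc_id": {
    "sensitive": false,
    "type": "string",
    "value": "vpc-xxxxxxxxxxxxxxxxx"
  }
}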

Deploy the Kubernetes cluster with kOps

Generate a template

Create a manifest with the following command.

cd ${WORK_DIR}/kops

kops create cluster \
  --disable-subnet-tags \
  --vpc $(cat infra.json | jq -r '.vpc_id.value') \
  --subnets $(cat infra.json | jq -r '.subnets.value[0]') \
  --zones="ap-northeast-1a" \
  --dry-run \
  -o yaml > manifest-template.yaml

Apply the following modifications to this manifest to create the template.

  • Add the policy for static IP assignment
  • Add the policy for the Cluster Autoscaler
  • Add the labels (cloudLabels) for the Cluster Autoscaler
  • Replace the cluster name with a placeholder
  • Replace IP-address-related values with placeholders
  • Replace ID-related values with placeholders

${WORK_DIR}/kops/manifest-template.yaml

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: null
  name: {{.cluster_name.value}}
spec:
  DisableSubnetTags: true
  api:
    loadBalancer:
      class: Classic
      type: Public
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://{{.cluster_name.value}}-kops-state-store/{{.cluster_name.value}}
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-ap-northeast-1a
      name: a
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-ap-northeast-1a
      name: a
    memoryRequest: 100Mi
    name: events
  externalPolicies:
    node:
    - {{.AWSLoadBalancerControllerIAMPolicy_arn.value}}
    - {{.AmazonEKSClusterAutoscalerPolicy_arn.value}}
  iam:
    allowContainerRegistry: true
    legacy: false
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.20.6
  masterPublicName: api.{{.cluster_name.value}}
  networkCIDR: {{.vpc_cidr_block.value}}
  networkID: {{.vpc_id.value}}
  networking:
    kubenet: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: {{ index . "subnet_ap-northeast-1a_cidr_block" "value" }}
    id: {{ index . "subnet_ap-northeast-1a_id" "value" }}
    name: ap-northeast-1a
    type: Public
    zone: ap-northeast-1a
  topology:
    dns:
      type: Public
    masters: public
    nodes: public

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: {{.cluster_name.value}}
  name: master-ap-northeast-1a
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210415
  machineType: t3.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-ap-northeast-1a
  role: Master
  subnets:
  - ap-northeast-1a

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: {{.cluster_name.value}}
  name: nodes-ap-northeast-1a
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210415
  machineType: t3.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: nodes-ap-northeast-1a
  # See: [Prerequisites - Cluster Autoscaler - Amazon EKS](https://docs.aws.amazon.com/ja_jp/eks/latest/userguide/cluster-autoscaler.html#ca-prerequisites)
  cloudLabels:
    k8s.io/cluster-autoscaler/{{.cluster_name.value}}: owned
    k8s.io/cluster-autoscaler/enabled: "TRUE"
  role: Node
  subnets:
  - ap-northeast-1a

Feed the Terraform outputs into the template and deploy

cd ${WORK_DIR}/kops

kops toolbox template --name ${KOPS_CLUSTER_NAME} --values ./infra.json --template ./manifest-template.yaml --format-yaml > manifest.yaml

kops create -f manifest.yaml
kops create secret --name ${KOPS_CLUSTER_NAME} sshpublickey admin -i ~/.ssh/id_rsa.pub
kops update cluster --yes

kops export kubecfg --admin
kops validate cluster --wait 10m

Install the AWS Load Balancer Controller

mkdir ${WORK_DIR}/lb
cd ${WORK_DIR}/lb

git clone --depth 1 https://github.com/aws/eks-charts
kubectl apply -k ./eks-charts/stable/aws-load-balancer-controller/crds/

helm repo add eks https://aws.github.io/eks-charts
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
    -n kube-system \
    --set clusterName=${KOPS_CLUSTER_NAME}

Install the Ingress controller

mkdir ${WORK_DIR}/ingress
cd ${WORK_DIR}/ingress
curl -o ingress.yaml -L https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.0/deploy/static/provider/aws/deploy.yaml

Add the static IP settings to the downloaded manifest.

...(snip)
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-0562e26282f012ea8
...(snip)

Deploy with the modified manifest.

kubectl apply -f ./ingress.yaml

Wait until the static IP is assigned to the Ingress service.

CHE_HOST_NAME="$(kubectl get services --namespace ingress-nginx -o jsonpath='{.items[].status.loadBalancer.ingress[0].hostname}')"
DOMAIN_IP=$(dig +short ${CHE_HOST_NAME})
while [ "${DOMAIN_IP}" == "" ] ; do
  echo "waiting..."
  sleep 10
  DOMAIN_IP=$(dig +short ${CHE_HOST_NAME})
done

DOMAIN="$(echo ${DOMAIN_IP} | sed -e "s/\./-/g").nip.io"

Install the Cluster Autoscaler

mkdir ${WORK_DIR}/cluster-autoscale
cd ${WORK_DIR}/cluster-autoscale
curl -L -O https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

Remove the following block from the downloaded manifest.

          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt #/etc/ssl/certs/ca-bundle.crt for Amazon Linux Worker Nodes
              readOnly: true

Apply.

kubectl apply -f ./cluster-autoscaler-autodiscover.yaml

Install Eclipse Che

cd ${WORK_DIR}

kubectl create namespace che

kubectl create secret generic self-signed-certificate --from-file=che_demo_ca/client/ca.crt -n che
kubectl create secret generic che-tls --from-file=ca.crt=che_demo_ca/client/ca.crt --from-file=tls.key=che_demo_ca/client/server-35-74-43-49.nip.io.key --from-file=tls.crt=che_demo_ca/client/server-35-74-43-49.nip.io.crt -n che

git clone --depth 1 https://github.com/eclipse-che/che-server
cd che-server/deploy/kubernetes/helm/che/

sed -i -e 's/serverStrategy: multi-host/serverStrategy: single-host/g' values.yaml
sed -i -e 's/  singleHostExposure: native/  singleHostExposure: gateway/g' values.yaml
sed -i -e 's/  deploy: false/  deploy: true/g' values.yaml

helm dependency update
helm upgrade --install che --namespace che --set global.ingressDomain=webide.${DOMAIN} --set global.multiuser="false" --set global.tls.enabled="false" ./

Access http://webide.35-74-43-49.nip.io. OK.

After this, it should just be a matter of adding the domains required for plugin downloads to the firewall, as sketched below.
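
For example, allowing one more domain should only require appending it to the targets of the che rule group in main.tf and running terraform apply again. A sketch (open-vsx.org is just an illustrative domain, not one confirmed to be required):

resource "aws_networkfirewall_rule_group" "che" {
  capacity = 30
  name     = "che"
  type     = "STATEFUL"
  rule_group {
    rules_source {
      rules_source_list {
        generated_rules_type = "ALLOWLIST"
        target_types         = ["HTTP_HOST", "TLS_SNI"]
        targets              = [
                                 ".${var.domain}",
                                 # ...(existing targets as above)...
                                 "open-vsx.org" # example: domain needed for plugin downloads
                               ]
      }
    }
  }

  tags = {}
}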

For ongoing operation, I suppose it is enough to keep the generated yaml under version control?

Cleanup

cd ${WORK_DIR}/terraform
kops delete cluster --yes && terraform destroy -auto-approve

Other notes

Restart

kubectl rollout restart -n che             deployments/che
kubectl rollout restart -n che             deployments/che-dashboard
kubectl rollout restart -n che             deployments/devfile-registry
kubectl rollout restart -n che             deployments/keycloak
kubectl rollout restart -n che             deployments/plugin-registry
kubectl rollout restart -n che             deployments/postgres
kubectl rollout restart -n ingress-nginx   deployments/ingress-nginx-controller

References

Wednesday, May 19, 2021

Managing Kubernetes infrastructure on AWS with kOps and Terraform

If you edit the VPC or subnets that kOps deployed, as I did when adding the firewall earlier, those after-the-fact changes get reset by kOps the next time the cluster is updated.

To prevent that, I want Terraform to manage the infrastructure such as the VPC and subnets, and have kOps manage only the instances related to the cluster.

So this entry verifies the combination of kOps and Terraform.

Prerequisites

Management policy

Manage resources with the following division of roles.

  • Terraform for the network, such as the VPC
  • kOps for the k8s cluster, such as the EC2 instances

Create an administrative IAM user

Add a user group

  • User group name: webide-admin
  • Permission policies
    • AmazonEC2FullAccess
    • AmazonS3FullAccess
    • AmazonVPCFullAccess
    • An inline policy granting full access to Network Firewall

Add a user

  • User name: webide-admin
  • Access type: Programmatic access
  • Permissions: add to the webide-admin group

Note down the access key ID and secret access key.

Organize the necessary information

PREFIX=eclipse-che
export KOPS_CLUSTER_NAME=${PREFIX}.k8s.local

# for Kops
export KOPS_STATE_STORE=s3://${KOPS_CLUSTER_NAME}-kops-state-store

# for Terraform
export TF_BACKEND=s3://${KOPS_CLUSTER_NAME}-kops-state-store

Start the Docker container for kOps

cd ${WORK_DIR}
docker run -it --rm -v "$(pwd):/work" --workdir "/work" mikoto2000/eks-tools

Create the tf file with kOps

Run the following on the kOps container.

Configure the aws-cli

Set the access key information, region, and output format with the aws configure command.

bash-4.2# aws configure
AWS Access Key ID [None]: xxxxxxxxxxxxxxxxxxxx
AWS Secret Access Key [None]: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
Default region name [None]: ap-northeast-1
Default output format [None]: json

Create an SSH key

ssh-keygen

Create the S3 bucket for storing the kOps state

aws s3 mb ${KOPS_STATE_STORE}

Create a Terraform skeleton with kOps

kops create cluster \
  --name=${KOPS_CLUSTER_NAME} \
  --state=${KOPS_STATE_STORE} \
  --zones="ap-northeast-1a" \
  --out=. \
  --target=terraform

The following tf file is generated.

kubernetes.tf

locals {
  cluster_name                 = "eclipse-che.k8s.local"
  master_autoscaling_group_ids = [aws_autoscaling_group.master-ap-northeast-1a-masters-eclipse-che-k8s-local.id]
  master_security_group_ids    = [aws_security_group.masters-eclipse-che-k8s-local.id]
  masters_role_arn             = aws_iam_role.masters-eclipse-che-k8s-local.arn
  masters_role_name            = aws_iam_role.masters-eclipse-che-k8s-local.name
  node_autoscaling_group_ids   = [aws_autoscaling_group.nodes-ap-northeast-1a-eclipse-che-k8s-local.id]
  node_security_group_ids      = [aws_security_group.nodes-eclipse-che-k8s-local.id]
  node_subnet_ids              = [aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id]
  nodes_role_arn               = aws_iam_role.nodes-eclipse-che-k8s-local.arn
  nodes_role_name              = aws_iam_role.nodes-eclipse-che-k8s-local.name
  region                       = "ap-northeast-1"
  route_table_public_id        = aws_route_table.eclipse-che-k8s-local.id
  subnet_ap-northeast-1a_id    = aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id
  vpc_cidr_block               = aws_vpc.eclipse-che-k8s-local.cidr_block
  vpc_id                       = aws_vpc.eclipse-che-k8s-local.id
}

output "cluster_name" {
  value = "eclipse-che.k8s.local"
}

output "master_autoscaling_group_ids" {
  value = [aws_autoscaling_group.master-ap-northeast-1a-masters-eclipse-che-k8s-local.id]
}

output "master_security_group_ids" {
  value = [aws_security_group.masters-eclipse-che-k8s-local.id]
}

output "masters_role_arn" {
  value = aws_iam_role.masters-eclipse-che-k8s-local.arn
}

output "masters_role_name" {
  value = aws_iam_role.masters-eclipse-che-k8s-local.name
}

output "node_autoscaling_group_ids" {
  value = [aws_autoscaling_group.nodes-ap-northeast-1a-eclipse-che-k8s-local.id]
}

output "node_security_group_ids" {
  value = [aws_security_group.nodes-eclipse-che-k8s-local.id]
}

output "node_subnet_ids" {
  value = [aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id]
}

output "nodes_role_arn" {
  value = aws_iam_role.nodes-eclipse-che-k8s-local.arn
}

output "nodes_role_name" {
  value = aws_iam_role.nodes-eclipse-che-k8s-local.name
}

output "region" {
  value = "ap-northeast-1"
}

output "route_table_public_id" {
  value = aws_route_table.eclipse-che-k8s-local.id
}

output "subnet_ap-northeast-1a_id" {
  value = aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id
}

output "vpc_cidr_block" {
  value = aws_vpc.eclipse-che-k8s-local.cidr_block
}

output "vpc_id" {
  value = aws_vpc.eclipse-che-k8s-local.id
}

provider "aws" {
  region = "ap-northeast-1"
}

resource "aws_autoscaling_group" "master-ap-northeast-1a-masters-eclipse-che-k8s-local" {
  enabled_metrics = ["GroupDesiredCapacity", "GroupInServiceInstances", "GroupMaxSize", "GroupMinSize", "GroupPendingInstances", "GroupStandbyInstances", "GroupTerminatingInstances", "GroupTotalInstances"]
  launch_template {
    id      = aws_launch_template.master-ap-northeast-1a-masters-eclipse-che-k8s-local.id
    version = aws_launch_template.master-ap-northeast-1a-masters-eclipse-che-k8s-local.latest_version
  }
  load_balancers      = [aws_elb.api-eclipse-che-k8s-local.id]
  max_size            = 1
  metrics_granularity = "1Minute"
  min_size            = 1
  name                = "master-ap-northeast-1a.masters.eclipse-che.k8s.local"
  tag {
    key                 = "KubernetesCluster"
    propagate_at_launch = true
    value               = "eclipse-che.k8s.local"
  }
  tag {
    key                 = "Name"
    propagate_at_launch = true
    value               = "master-ap-northeast-1a.masters.eclipse-che.k8s.local"
  }
  tag {
    key                 = "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup"
    propagate_at_launch = true
    value               = "master-ap-northeast-1a"
  }
  tag {
    key                 = "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"
    propagate_at_launch = true
    value               = "master"
  }
  tag {
    key                 = "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane"
    propagate_at_launch = true
    value               = ""
  }
  tag {
    key                 = "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master"
    propagate_at_launch = true
    value               = ""
  }
  tag {
    key                 = "k8s.io/role/master"
    propagate_at_launch = true
    value               = "1"
  }
  tag {
    key                 = "kops.k8s.io/instancegroup"
    propagate_at_launch = true
    value               = "master-ap-northeast-1a"
  }
  tag {
    key                 = "kubernetes.io/cluster/eclipse-che.k8s.local"
    propagate_at_launch = true
    value               = "owned"
  }
  vpc_zone_identifier = [aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id]
}

resource "aws_autoscaling_group" "nodes-ap-northeast-1a-eclipse-che-k8s-local" {
  enabled_metrics = ["GroupDesiredCapacity", "GroupInServiceInstances", "GroupMaxSize", "GroupMinSize", "GroupPendingInstances", "GroupStandbyInstances", "GroupTerminatingInstances", "GroupTotalInstances"]
  launch_template {
    id      = aws_launch_template.nodes-ap-northeast-1a-eclipse-che-k8s-local.id
    version = aws_launch_template.nodes-ap-northeast-1a-eclipse-che-k8s-local.latest_version
  }
  max_size            = 1
  metrics_granularity = "1Minute"
  min_size            = 1
  name                = "nodes-ap-northeast-1a.eclipse-che.k8s.local"
  tag {
    key                 = "KubernetesCluster"
    propagate_at_launch = true
    value               = "eclipse-che.k8s.local"
  }
  tag {
    key                 = "Name"
    propagate_at_launch = true
    value               = "nodes-ap-northeast-1a.eclipse-che.k8s.local"
  }
  tag {
    key                 = "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup"
    propagate_at_launch = true
    value               = "nodes-ap-northeast-1a"
  }
  tag {
    key                 = "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"
    propagate_at_launch = true
    value               = "node"
  }
  tag {
    key                 = "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/node"
    propagate_at_launch = true
    value               = ""
  }
  tag {
    key                 = "k8s.io/role/node"
    propagate_at_launch = true
    value               = "1"
  }
  tag {
    key                 = "kops.k8s.io/instancegroup"
    propagate_at_launch = true
    value               = "nodes-ap-northeast-1a"
  }
  tag {
    key                 = "kubernetes.io/cluster/eclipse-che.k8s.local"
    propagate_at_launch = true
    value               = "owned"
  }
  vpc_zone_identifier = [aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id]
}

resource "aws_ebs_volume" "a-etcd-events-eclipse-che-k8s-local" {
  availability_zone = "ap-northeast-1a"
  encrypted         = true
  iops              = 3000
  size              = 20
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "a.etcd-events.eclipse-che.k8s.local"
    "k8s.io/etcd/events"                          = "a/a"
    "k8s.io/role/master"                          = "1"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
  throughput = 125
  type       = "gp3"
}

resource "aws_ebs_volume" "a-etcd-main-eclipse-che-k8s-local" {
  availability_zone = "ap-northeast-1a"
  encrypted         = true
  iops              = 3000
  size              = 20
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "a.etcd-main.eclipse-che.k8s.local"
    "k8s.io/etcd/main"                            = "a/a"
    "k8s.io/role/master"                          = "1"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
  throughput = 125
  type       = "gp3"
}

resource "aws_elb" "api-eclipse-che-k8s-local" {
  cross_zone_load_balancing = false
  health_check {
    healthy_threshold   = 2
    interval            = 10
    target              = "SSL:443"
    timeout             = 5
    unhealthy_threshold = 2
  }
  idle_timeout = 300
  listener {
    instance_port     = 443
    instance_protocol = "TCP"
    lb_port           = 443
    lb_protocol       = "TCP"
  }
  name            = "api-eclipse-che-k8s-local-hqrpnr"
  security_groups = [aws_security_group.api-elb-eclipse-che-k8s-local.id]
  subnets         = [aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id]
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "api.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

resource "aws_iam_instance_profile" "masters-eclipse-che-k8s-local" {
  name = "masters.eclipse-che.k8s.local"
  role = aws_iam_role.masters-eclipse-che-k8s-local.name
}

resource "aws_iam_instance_profile" "nodes-eclipse-che-k8s-local" {
  name = "nodes.eclipse-che.k8s.local"
  role = aws_iam_role.nodes-eclipse-che-k8s-local.name
}

resource "aws_iam_role_policy" "masters-eclipse-che-k8s-local" {
  name   = "masters.eclipse-che.k8s.local"
  policy = file("${path.module}/data/aws_iam_role_policy_masters.eclipse-che.k8s.local_policy")
  role   = aws_iam_role.masters-eclipse-che-k8s-local.name
}

resource "aws_iam_role_policy" "nodes-eclipse-che-k8s-local" {
  name   = "nodes.eclipse-che.k8s.local"
  policy = file("${path.module}/data/aws_iam_role_policy_nodes.eclipse-che.k8s.local_policy")
  role   = aws_iam_role.nodes-eclipse-che-k8s-local.name
}

resource "aws_iam_role" "masters-eclipse-che-k8s-local" {
  assume_role_policy = file("${path.module}/data/aws_iam_role_masters.eclipse-che.k8s.local_policy")
  name               = "masters.eclipse-che.k8s.local"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "masters.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

resource "aws_iam_role" "nodes-eclipse-che-k8s-local" {
  assume_role_policy = file("${path.module}/data/aws_iam_role_nodes.eclipse-che.k8s.local_policy")
  name               = "nodes.eclipse-che.k8s.local"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "nodes.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

resource "aws_internet_gateway" "eclipse-che-k8s-local" {
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_key_pair" "kubernetes-eclipse-che-k8s-local-45ccad7fdfb05b1a495b8317891b89f9" {
  key_name   = "kubernetes.eclipse-che.k8s.local-45:cc:ad:7f:df:b0:5b:1a:49:5b:83:17:89:1b:89:f9"
  public_key = file("${path.module}/data/aws_key_pair_kubernetes.eclipse-che.k8s.local-45ccad7fdfb05b1a495b8317891b89f9_public_key")
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

resource "aws_launch_template" "master-ap-northeast-1a-masters-eclipse-che-k8s-local" {
  block_device_mappings {
    device_name = "/dev/sda1"
    ebs {
      delete_on_termination = true
      encrypted             = true
      iops                  = 3000
      throughput            = 125
      volume_size           = 64
      volume_type           = "gp3"
    }
  }
  iam_instance_profile {
    name = aws_iam_instance_profile.masters-eclipse-che-k8s-local.id
  }
  image_id      = "ami-08b73b6a3bb95a35a"
  instance_type = "t3.medium"
  key_name      = aws_key_pair.kubernetes-eclipse-che-k8s-local-45ccad7fdfb05b1a495b8317891b89f9.id
  lifecycle {
    create_before_destroy = true
  }
  metadata_options {
    http_endpoint               = "enabled"
    http_put_response_hop_limit = 1
    http_tokens                 = "optional"
  }
  name = "master-ap-northeast-1a.masters.eclipse-che.k8s.local"
  network_interfaces {
    associate_public_ip_address = true
    delete_on_termination       = true
    security_groups             = [aws_security_group.masters-eclipse-che-k8s-local.id]
  }
  tag_specifications {
    resource_type = "instance"
    tags = {
      "KubernetesCluster"                                                                   = "eclipse-che.k8s.local"
      "Name"                                                                                = "master-ap-northeast-1a.masters.eclipse-che.k8s.local"
      "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup"             = "master-ap-northeast-1a"
      "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"                    = "master"
      "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane" = ""
      "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master"        = ""
      "k8s.io/role/master"                                                                  = "1"
      "kops.k8s.io/instancegroup"                                                           = "master-ap-northeast-1a"
      "kubernetes.io/cluster/eclipse-che.k8s.local"                                         = "owned"
    }
  }
  tag_specifications {
    resource_type = "volume"
    tags = {
      "KubernetesCluster"                                                                   = "eclipse-che.k8s.local"
      "Name"                                                                                = "master-ap-northeast-1a.masters.eclipse-che.k8s.local"
      "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup"             = "master-ap-northeast-1a"
      "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"                    = "master"
      "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane" = ""
      "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master"        = ""
      "k8s.io/role/master"                                                                  = "1"
      "kops.k8s.io/instancegroup"                                                           = "master-ap-northeast-1a"
      "kubernetes.io/cluster/eclipse-che.k8s.local"                                         = "owned"
    }
  }
  tags = {
    "KubernetesCluster"                                                                   = "eclipse-che.k8s.local"
    "Name"                                                                                = "master-ap-northeast-1a.masters.eclipse-che.k8s.local"
    "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup"             = "master-ap-northeast-1a"
    "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"                    = "master"
    "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane" = ""
    "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master"        = ""
    "k8s.io/role/master"                                                                  = "1"
    "kops.k8s.io/instancegroup"                                                           = "master-ap-northeast-1a"
    "kubernetes.io/cluster/eclipse-che.k8s.local"                                         = "owned"
  }
  user_data = filebase64("${path.module}/data/aws_launch_template_master-ap-northeast-1a.masters.eclipse-che.k8s.local_user_data")
}

resource "aws_launch_template" "nodes-ap-northeast-1a-eclipse-che-k8s-local" {
  block_device_mappings {
    device_name = "/dev/sda1"
    ebs {
      delete_on_termination = true
      encrypted             = true
      iops                  = 3000
      throughput            = 125
      volume_size           = 128
      volume_type           = "gp3"
    }
  }
  iam_instance_profile {
    name = aws_iam_instance_profile.nodes-eclipse-che-k8s-local.id
  }
  image_id      = "ami-08b73b6a3bb95a35a"
  instance_type = "t3.medium"
  key_name      = aws_key_pair.kubernetes-eclipse-che-k8s-local-45ccad7fdfb05b1a495b8317891b89f9.id
  lifecycle {
    create_before_destroy = true
  }
  metadata_options {
    http_endpoint               = "enabled"
    http_put_response_hop_limit = 1
    http_tokens                 = "optional"
  }
  name = "nodes-ap-northeast-1a.eclipse-che.k8s.local"
  network_interfaces {
    associate_public_ip_address = true
    delete_on_termination       = true
    security_groups             = [aws_security_group.nodes-eclipse-che-k8s-local.id]
  }
  tag_specifications {
    resource_type = "instance"
    tags = {
      "KubernetesCluster"                                                          = "eclipse-che.k8s.local"
      "Name"                                                                       = "nodes-ap-northeast-1a.eclipse-che.k8s.local"
      "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup"    = "nodes-ap-northeast-1a"
      "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"           = "node"
      "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/node" = ""
      "k8s.io/role/node"                                                           = "1"
      "kops.k8s.io/instancegroup"                                                  = "nodes-ap-northeast-1a"
      "kubernetes.io/cluster/eclipse-che.k8s.local"                                = "owned"
    }
  }
  tag_specifications {
    resource_type = "volume"
    tags = {
      "KubernetesCluster"                                                          = "eclipse-che.k8s.local"
      "Name"                                                                       = "nodes-ap-northeast-1a.eclipse-che.k8s.local"
      "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup"    = "nodes-ap-northeast-1a"
      "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"           = "node"
      "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/node" = ""
      "k8s.io/role/node"                                                           = "1"
      "kops.k8s.io/instancegroup"                                                  = "nodes-ap-northeast-1a"
      "kubernetes.io/cluster/eclipse-che.k8s.local"                                = "owned"
    }
  }
  tags = {
    "KubernetesCluster"                                                          = "eclipse-che.k8s.local"
    "Name"                                                                       = "nodes-ap-northeast-1a.eclipse-che.k8s.local"
    "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup"    = "nodes-ap-northeast-1a"
    "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"           = "node"
    "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/node" = ""
    "k8s.io/role/node"                                                           = "1"
    "kops.k8s.io/instancegroup"                                                  = "nodes-ap-northeast-1a"
    "kubernetes.io/cluster/eclipse-che.k8s.local"                                = "owned"
  }
  user_data = filebase64("${path.module}/data/aws_launch_template_nodes-ap-northeast-1a.eclipse-che.k8s.local_user_data")
}

resource "aws_route_table_association" "ap-northeast-1a-eclipse-che-k8s-local" {
  route_table_id = aws_route_table.eclipse-che-k8s-local.id
  subnet_id      = aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id
}

resource "aws_route_table" "eclipse-che-k8s-local" {
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
    "kubernetes.io/kops/role"                     = "public"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_route" "route-0-0-0-0--0" {
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.eclipse-che-k8s-local.id
  route_table_id         = aws_route_table.eclipse-che-k8s-local.id
}

resource "aws_security_group_rule" "from-0-0-0-0--0-ingress-tcp-22to22-masters-eclipse-che-k8s-local" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 22
  protocol          = "tcp"
  security_group_id = aws_security_group.masters-eclipse-che-k8s-local.id
  to_port           = 22
  type              = "ingress"
}

resource "aws_security_group_rule" "from-0-0-0-0--0-ingress-tcp-22to22-nodes-eclipse-che-k8s-local" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 22
  protocol          = "tcp"
  security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port           = 22
  type              = "ingress"
}

resource "aws_security_group_rule" "from-0-0-0-0--0-ingress-tcp-443to443-api-elb-eclipse-che-k8s-local" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 443
  protocol          = "tcp"
  security_group_id = aws_security_group.api-elb-eclipse-che-k8s-local.id
  to_port           = 443
  type              = "ingress"
}

resource "aws_security_group_rule" "from-api-elb-eclipse-che-k8s-local-egress-all-0to0-0-0-0-0--0" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.api-elb-eclipse-che-k8s-local.id
  to_port           = 0
  type              = "egress"
}

resource "aws_security_group_rule" "from-masters-eclipse-che-k8s-local-egress-all-0to0-0-0-0-0--0" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.masters-eclipse-che-k8s-local.id
  to_port           = 0
  type              = "egress"
}

resource "aws_security_group_rule" "from-masters-eclipse-che-k8s-local-ingress-all-0to0-masters-eclipse-che-k8s-local" {
  from_port                = 0
  protocol                 = "-1"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.masters-eclipse-che-k8s-local.id
  to_port                  = 0
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-masters-eclipse-che-k8s-local-ingress-all-0to0-nodes-eclipse-che-k8s-local" {
  from_port                = 0
  protocol                 = "-1"
  security_group_id        = aws_security_group.nodes-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.masters-eclipse-che-k8s-local.id
  to_port                  = 0
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-egress-all-0to0-0-0-0-0--0" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port           = 0
  type              = "egress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-ingress-all-0to0-nodes-eclipse-che-k8s-local" {
  from_port                = 0
  protocol                 = "-1"
  security_group_id        = aws_security_group.nodes-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port                  = 0
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-ingress-tcp-1to2379-masters-eclipse-che-k8s-local" {
  from_port                = 1
  protocol                 = "tcp"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port                  = 2379
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-ingress-tcp-2382to4000-masters-eclipse-che-k8s-local" {
  from_port                = 2382
  protocol                 = "tcp"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port                  = 4000
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-ingress-tcp-4003to65535-masters-eclipse-che-k8s-local" {
  from_port                = 4003
  protocol                 = "tcp"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port                  = 65535
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-ingress-udp-1to65535-masters-eclipse-che-k8s-local" {
  from_port                = 1
  protocol                 = "udp"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port                  = 65535
  type                     = "ingress"
}

resource "aws_security_group_rule" "https-elb-to-master" {
  from_port                = 443
  protocol                 = "tcp"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.api-elb-eclipse-che-k8s-local.id
  to_port                  = 443
  type                     = "ingress"
}

resource "aws_security_group_rule" "icmp-pmtu-api-elb-0-0-0-0--0" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 3
  protocol          = "icmp"
  security_group_id = aws_security_group.api-elb-eclipse-che-k8s-local.id
  to_port           = 4
  type              = "ingress"
}

resource "aws_security_group" "api-elb-eclipse-che-k8s-local" {
  description = "Security group for api ELB"
  name        = "api-elb.eclipse-che.k8s.local"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "api-elb.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_security_group" "masters-eclipse-che-k8s-local" {
  description = "Security group for masters"
  name        = "masters.eclipse-che.k8s.local"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "masters.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_security_group" "nodes-eclipse-che-k8s-local" {
  description = "Security group for nodes"
  name        = "nodes.eclipse-che.k8s.local"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "nodes.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_subnet" "ap-northeast-1a-eclipse-che-k8s-local" {
  availability_zone = "ap-northeast-1a"
  cidr_block        = "172.20.32.0/19"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "ap-northeast-1a.eclipse-che.k8s.local"
    "SubnetType"                                  = "Public"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
    "kubernetes.io/role/elb"                      = "1"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_vpc_dhcp_options_association" "eclipse-che-k8s-local" {
  dhcp_options_id = aws_vpc_dhcp_options.eclipse-che-k8s-local.id
  vpc_id          = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_vpc_dhcp_options" "eclipse-che-k8s-local" {
  domain_name         = "ap-northeast-1.compute.internal"
  domain_name_servers = ["AmazonProvidedDNS"]
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

resource "aws_vpc" "eclipse-che-k8s-local" {
  cidr_block           = "172.20.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

terraform {
  required_version = ">= 0.12.26"
  required_providers {
    aws = {
      "source"  = "hashicorp/aws"
      "version" = ">= 2.46.0"
    }
  }
}

Remove the cluster-related definitions from this file. The resulting file is shown below.

kubernetes.tf

locals {
  cluster_name                 = "eclipse-che.k8s.local"
  master_security_group_ids    = [aws_security_group.masters-eclipse-che-k8s-local.id]
  masters_role_arn             = aws_iam_role.masters-eclipse-che-k8s-local.arn
  masters_role_name            = aws_iam_role.masters-eclipse-che-k8s-local.name
  node_security_group_ids      = [aws_security_group.nodes-eclipse-che-k8s-local.id]
  node_subnet_ids              = [aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id]
  nodes_role_arn               = aws_iam_role.nodes-eclipse-che-k8s-local.arn
  nodes_role_name              = aws_iam_role.nodes-eclipse-che-k8s-local.name
  region                       = "ap-northeast-1"
  route_table_public_id        = aws_route_table.eclipse-che-k8s-local.id
  subnet_ap-northeast-1a_id    = aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id
  vpc_cidr_block               = aws_vpc.eclipse-che-k8s-local.cidr_block
  vpc_id                       = aws_vpc.eclipse-che-k8s-local.id
}

output "cluster_name" {
  value = "eclipse-che.k8s.local"
}

output "master_security_group_ids" {
  value = [aws_security_group.masters-eclipse-che-k8s-local.id]
}

output "masters_role_arn" {
  value = aws_iam_role.masters-eclipse-che-k8s-local.arn
}

output "masters_role_name" {
  value = aws_iam_role.masters-eclipse-che-k8s-local.name
}

output "node_security_group_ids" {
  value = [aws_security_group.nodes-eclipse-che-k8s-local.id]
}

output "node_subnet_ids" {
  value = [aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id]
}

output "nodes_role_arn" {
  value = aws_iam_role.nodes-eclipse-che-k8s-local.arn
}

output "nodes_role_name" {
  value = aws_iam_role.nodes-eclipse-che-k8s-local.name
}

output "region" {
  value = "ap-northeast-1"
}

output "route_table_public_id" {
  value = aws_route_table.eclipse-che-k8s-local.id
}

output "subnet_ap-northeast-1a_id" {
  value = aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id
}

output "vpc_cidr_block" {
  value = aws_vpc.eclipse-che-k8s-local.cidr_block
}

output "vpc_id" {
  value = aws_vpc.eclipse-che-k8s-local.id
}

provider "aws" {
  region = "ap-northeast-1"
}

resource "aws_elb" "api-eclipse-che-k8s-local" {
  cross_zone_load_balancing = false
  health_check {
    healthy_threshold   = 2
    interval            = 10
    target              = "SSL:443"
    timeout             = 5
    unhealthy_threshold = 2
  }
  idle_timeout = 300
  listener {
    instance_port     = 443
    instance_protocol = "TCP"
    lb_port           = 443
    lb_protocol       = "TCP"
  }
  name            = "api-eclipse-che-k8s-local-hqrpnr"
  security_groups = [aws_security_group.api-elb-eclipse-che-k8s-local.id]
  subnets         = [aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id]
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "api.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

resource "aws_iam_instance_profile" "masters-eclipse-che-k8s-local" {
  name = "masters.eclipse-che.k8s.local"
  role = aws_iam_role.masters-eclipse-che-k8s-local.name
}

resource "aws_iam_instance_profile" "nodes-eclipse-che-k8s-local" {
  name = "nodes.eclipse-che.k8s.local"
  role = aws_iam_role.nodes-eclipse-che-k8s-local.name
}

resource "aws_iam_role_policy" "masters-eclipse-che-k8s-local" {
  name   = "masters.eclipse-che.k8s.local"
  policy = file("${path.module}/data/aws_iam_role_policy_masters.eclipse-che.k8s.local_policy")
  role   = aws_iam_role.masters-eclipse-che-k8s-local.name
}

resource "aws_iam_role_policy" "nodes-eclipse-che-k8s-local" {
  name   = "nodes.eclipse-che.k8s.local"
  policy = file("${path.module}/data/aws_iam_role_policy_nodes.eclipse-che.k8s.local_policy")
  role   = aws_iam_role.nodes-eclipse-che-k8s-local.name
}

resource "aws_iam_role" "masters-eclipse-che-k8s-local" {
  assume_role_policy = file("${path.module}/data/aws_iam_role_masters.eclipse-che.k8s.local_policy")
  name               = "masters.eclipse-che.k8s.local"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "masters.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

resource "aws_iam_role" "nodes-eclipse-che-k8s-local" {
  assume_role_policy = file("${path.module}/data/aws_iam_role_nodes.eclipse-che.k8s.local_policy")
  name               = "nodes.eclipse-che.k8s.local"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "nodes.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

resource "aws_internet_gateway" "eclipse-che-k8s-local" {
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_route_table_association" "ap-northeast-1a-eclipse-che-k8s-local" {
  route_table_id = aws_route_table.eclipse-che-k8s-local.id
  subnet_id      = aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id
}

resource "aws_route_table" "eclipse-che-k8s-local" {
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
    "kubernetes.io/kops/role"                     = "public"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_route" "route-0-0-0-0--0" {
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.eclipse-che-k8s-local.id
  route_table_id         = aws_route_table.eclipse-che-k8s-local.id
}

resource "aws_security_group_rule" "from-0-0-0-0--0-ingress-tcp-22to22-masters-eclipse-che-k8s-local" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 22
  protocol          = "tcp"
  security_group_id = aws_security_group.masters-eclipse-che-k8s-local.id
  to_port           = 22
  type              = "ingress"
}

resource "aws_security_group_rule" "from-0-0-0-0--0-ingress-tcp-22to22-nodes-eclipse-che-k8s-local" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 22
  protocol          = "tcp"
  security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port           = 22
  type              = "ingress"
}

resource "aws_security_group_rule" "from-0-0-0-0--0-ingress-tcp-443to443-api-elb-eclipse-che-k8s-local" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 443
  protocol          = "tcp"
  security_group_id = aws_security_group.api-elb-eclipse-che-k8s-local.id
  to_port           = 443
  type              = "ingress"
}

resource "aws_security_group_rule" "from-api-elb-eclipse-che-k8s-local-egress-all-0to0-0-0-0-0--0" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.api-elb-eclipse-che-k8s-local.id
  to_port           = 0
  type              = "egress"
}

resource "aws_security_group_rule" "from-masters-eclipse-che-k8s-local-egress-all-0to0-0-0-0-0--0" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.masters-eclipse-che-k8s-local.id
  to_port           = 0
  type              = "egress"
}

resource "aws_security_group_rule" "from-masters-eclipse-che-k8s-local-ingress-all-0to0-masters-eclipse-che-k8s-local" {
  from_port                = 0
  protocol                 = "-1"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.masters-eclipse-che-k8s-local.id
  to_port                  = 0
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-masters-eclipse-che-k8s-local-ingress-all-0to0-nodes-eclipse-che-k8s-local" {
  from_port                = 0
  protocol                 = "-1"
  security_group_id        = aws_security_group.nodes-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.masters-eclipse-che-k8s-local.id
  to_port                  = 0
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-egress-all-0to0-0-0-0-0--0" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port           = 0
  type              = "egress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-ingress-all-0to0-nodes-eclipse-che-k8s-local" {
  from_port                = 0
  protocol                 = "-1"
  security_group_id        = aws_security_group.nodes-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port                  = 0
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-ingress-tcp-1to2379-masters-eclipse-che-k8s-local" {
  from_port                = 1
  protocol                 = "tcp"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port                  = 2379
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-ingress-tcp-2382to4000-masters-eclipse-che-k8s-local" {
  from_port                = 2382
  protocol                 = "tcp"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port                  = 4000
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-ingress-tcp-4003to65535-masters-eclipse-che-k8s-local" {
  from_port                = 4003
  protocol                 = "tcp"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port                  = 65535
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-ingress-udp-1to65535-masters-eclipse-che-k8s-local" {
  from_port                = 1
  protocol                 = "udp"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port                  = 65535
  type                     = "ingress"
}

resource "aws_security_group_rule" "https-elb-to-master" {
  from_port                = 443
  protocol                 = "tcp"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.api-elb-eclipse-che-k8s-local.id
  to_port                  = 443
  type                     = "ingress"
}

resource "aws_security_group_rule" "icmp-pmtu-api-elb-0-0-0-0--0" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 3
  protocol          = "icmp"
  security_group_id = aws_security_group.api-elb-eclipse-che-k8s-local.id
  to_port           = 4
  type              = "ingress"
}

resource "aws_security_group" "api-elb-eclipse-che-k8s-local" {
  description = "Security group for api ELB"
  name        = "api-elb.eclipse-che.k8s.local"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "api-elb.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_security_group" "masters-eclipse-che-k8s-local" {
  description = "Security group for masters"
  name        = "masters.eclipse-che.k8s.local"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "masters.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_security_group" "nodes-eclipse-che-k8s-local" {
  description = "Security group for nodes"
  name        = "nodes.eclipse-che.k8s.local"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "nodes.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_subnet" "ap-northeast-1a-eclipse-che-k8s-local" {
  availability_zone = "ap-northeast-1a"
  cidr_block        = "172.20.32.0/19"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "ap-northeast-1a.eclipse-che.k8s.local"
    "SubnetType"                                  = "Public"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
    "kubernetes.io/role/elb"                      = "1"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_vpc_dhcp_options_association" "eclipse-che-k8s-local" {
  dhcp_options_id = aws_vpc_dhcp_options.eclipse-che-k8s-local.id
  vpc_id          = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_vpc_dhcp_options" "eclipse-che-k8s-local" {
  domain_name         = "ap-northeast-1.compute.internal"
  domain_name_servers = ["AmazonProvidedDNS"]
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

resource "aws_vpc" "eclipse-che-k8s-local" {
  cidr_block           = "172.20.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

terraform {
  required_version = ">= 0.12.26"
  required_providers {
    aws = {
      "source"  = "hashicorp/aws"
      "version" = ">= 2.46.0"
    }
  }
}

Deploying the infrastructure with Terraform

Start the Docker container

docker run -it --rm -v "$(pwd):/work" --workdir "/work" --entrypoint sh hashicorp/terraform:0.15.3

Deploy

export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
terraform init
terraform plan
terraform apply
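
Note: if the .tf files declare input variables without defaults, terraform plan will prompt for their values, so the corresponding TF_VAR_* variables also need to be exported inside this container. A minimal sketch (the domain is a placeholder):

export TF_VAR_cluster_name=eclipse-che.k8s.local
export TF_VAR_domain=xx-xx-xx-xx.nip.io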

Deploying the k8s cluster with kOps

Empty ${KOPS_STATE_STORE}, then recreate the cluster definition, assigning it to the subnet created by Terraform.
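
Emptying the state store bucket can be done with the AWS CLI, for example:

aws s3 rm ${KOPS_STATE_STORE} --recursive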

VPC_ID=$(aws ec2 describe-vpcs --filters Name=tag-value,Values=${KOPS_CLUSTER_NAME} --query "Vpcs[0].VpcId" --output text)
K8S_SUBNET_ID=$(aws ec2 describe-subnets --filters Name=tag-value,Values=${KOPS_CLUSTER_NAME} --query "Subnets[0].SubnetId" --output text)

kops create cluster --name ${KOPS_CLUSTER_NAME} \
  --dry-run=true \
  --zones="ap-northeast-1a" \
  --subnets=${K8S_SUBNET_ID} \
  --output=yaml > manifest.yaml

Update manifest.yaml, setting the ID of the subnet created by Terraform.

...(snip)
  subnets:
  - cidr: 172.20.32.0/19
    name: ap-northeast-1a
    type: Public
    zone: ap-northeast-1a
    id: subnet-057982f9b0a3ca02a # set to ${K8S_SUBNET_ID}
...(snip)

kops create -f manifest.yaml
kops create secret --name ${KOPS_CLUSTER_NAME} sshpublickey admin -i ~/.ssh/id_rsa.pub
kops update cluster --yes
kops export kubecfg --admin
kops validate cluster --wait 10m
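
Once validation passes, a quick sanity check of the nodes:

kubectl get nodes -o wide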

Installing the Ingress controller

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.0/deploy/static/provider/aws/deploy.yaml
CHE_HOST_NAME="$(kubectl get services --namespace ingress-nginx -o jsonpath='{.items[].status.loadBalancer.ingress[0].hostname}')"
DOMAIN_IP="$(dig +short ${CHE_HOST_NAME})"
DOMAIN="$(echo ${DOMAIN_IP} | sed -e "s/\./-/g").nip.io"
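
nip.io is a wildcard DNS service: any name of the form <ip-with-dashes>.nip.io resolves to that IP, so ${DOMAIN} points back at the ingress controller's load balancer without registering a real domain. For example (hypothetical IP):

$ dig +short 10-0-0-1.nip.io
10.0.0.1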

Installing Eclipse Che

git clone --depth 1 https://github.com/eclipse/che
cd che/deploy/kubernetes/helm/che/
helm dependency update
kubectl create namespace che
helm upgrade --install che --namespace che --set global.ingressDomain=${DOMAIN} ./
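
The rollout can be watched with, for example:

kubectl get pods --namespace che --watch
kubectl get ingress --namespace che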

Access http://che-che.${DOMAIN}. It worked.

Stopping the cluster.

$ kops get
Cluster
NAME                    CLOUD   ZONES
eclipse-che.k8s.local   aws     ap-northeast-1a

Instance Groups
NAME                    ROLE    MACHINETYPE     MIN     MAX     ZONES
master-ap-northeast-1a  Master  t3.medium       0       0       ap-northeast-1a
nodes-ap-northeast-1a   Node    t3.medium       0       0       ap-northeast-1a

$ kops edit ig master-ap-northeast-1a
...(update maxSize and minSize to 0)

$ kops edit ig nodes-ap-northeast-1a
...(update maxSize and minSize to 0)
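
In the editor, the fields to set to 0 are minSize and maxSize in the InstanceGroup spec; a sketch of the relevant fragment:

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
...(snip)
spec:
  ...(snip)
  maxSize: 0
  minSize: 0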

$ kops update cluster --yes

Confirm in the AWS console that the instances are gone. OK.
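
The same check can also be done from the CLI, for example by listing running instances tagged with the cluster name:

aws ec2 describe-instances \
  --filters "Name=tag:KubernetesCluster,Values=${KOPS_CLUSTER_NAME}" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].InstanceId" --output text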

Adding the firewall and verifying that the setup works as intended is left for another time.

References