Wednesday, May 19, 2021

Managing Kubernetes Infrastructure on AWS with kOps and Terraform

As happened when I added a firewall earlier: if you hand-edit the VPC or subnets that kOps deployed, kOps resets those after-the-fact changes the next time you update the cluster.

To prevent that, I want Terraform to manage the surrounding infrastructure (VPC, subnets, and so on) and have kOps manage only the instances that belong to the cluster.

So this post verifies the kOps + Terraform combination.

Prerequisites

Management policy

Responsibilities are split as follows:

  • Network resources, such as the VPC: Terraform
  • k8s cluster resources, such as the EC2 instances: kOps

Create an IAM user for management

Add a user group

  • User group name: webide-admin
  • Permission policies
    • AmazonEC2FullAccess
    • AmazonS3FullAccess
    • AmazonVPCFullAccess
    • An inline policy granting full access to Network Firewall
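The inline policy can also be created and attached from the CLI instead of the console. A sketch, where the policy name, file name, and the wildcard network-firewall:* action are illustrative choices, not values from the original setup:

```shell
# Illustrative inline policy document granting full access to AWS Network Firewall.
cat > network-firewall-full-access.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "network-firewall:*",
      "Resource": "*"
    }
  ]
}
EOF

# Attach it to the group (requires IAM write permissions):
# aws iam put-group-policy \
#   --group-name webide-admin \
#   --policy-name network-firewall-full-access \
#   --policy-document file://network-firewall-full-access.json
```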

Add a user

  • User name: webide-admin
  • Access type: programmatic access
  • Permissions: add to the webide-admin group

Make a note of the access key ID and secret access key.

Gather the required information

PREFIX=eclipse-che
export KOPS_CLUSTER_NAME=${PREFIX}.k8s.local

# for kOps
export KOPS_STATE_STORE=s3://${KOPS_CLUSTER_NAME}-kops-state-store

# for Terraform
export TF_BACKEND=s3://${KOPS_CLUSTER_NAME}-kops-state-store

Start a Docker container for kOps

cd ${WORK_DIR}
docker run -it --rm -v "$(pwd):/work" --workdir "/work" mikoto2000/eks-tools

Generate the tf file with kOps

Run the following steps inside the kOps container.

Configure the AWS CLI

Use the aws configure command to set the access key credentials, the default region, and the output format.

bash-4.2# aws configure
AWS Access Key ID [None]: xxxxxxxxxxxxxxxxxxxx
AWS Secret Access Key [None]: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
Default region name [None]: ap-northeast-1
Default output format [None]: json

Create an SSH key

ssh-keygen
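kops create cluster picks up ~/.ssh/id_rsa.pub by default, so accepting the defaults in the interactive prompt above is enough. For scripting, a non-interactive sketch (the temporary output path is only for illustration; drop -f to use the default path):

```shell
# Generate an RSA key pair without prompts (-N "" sets an empty passphrase).
# kops looks for ~/.ssh/id_rsa.pub by default; the temp path is illustrative.
KEYFILE=$(mktemp -u)
ssh-keygen -t rsa -b 4096 -N "" -q -f "${KEYFILE}"
ls "${KEYFILE}" "${KEYFILE}.pub"
```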

Create an S3 bucket for the kops state store

aws s3 mb ${KOPS_STATE_STORE}
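The kOps documentation also recommends enabling versioning on the state-store bucket, so that a bad cluster update can be rolled back. A sketch, repeating the variable definitions so it stands alone; note that s3api takes a bare bucket name, without the s3:// prefix:

```shell
export KOPS_CLUSTER_NAME=eclipse-che.k8s.local
export KOPS_STATE_STORE=s3://${KOPS_CLUSTER_NAME}-kops-state-store

# s3api wants the bucket name without the s3:// scheme, so strip it.
BUCKET_NAME=${KOPS_STATE_STORE#s3://}
echo "${BUCKET_NAME}"

# aws s3api put-bucket-versioning \
#   --bucket "${BUCKET_NAME}" \
#   --versioning-configuration Status=Enabled
```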

Generate the Terraform skeleton with kOps

kops create cluster \
  --name=${KOPS_CLUSTER_NAME} \
  --state=${KOPS_STATE_STORE} \
  --zones="ap-northeast-1a" \
  --out=. \
  --target=terraform

The following tf file is generated:

kubernetes.tf

locals {
  cluster_name                 = "eclipse-che.k8s.local"
  master_autoscaling_group_ids = [aws_autoscaling_group.master-ap-northeast-1a-masters-eclipse-che-k8s-local.id]
  master_security_group_ids    = [aws_security_group.masters-eclipse-che-k8s-local.id]
  masters_role_arn             = aws_iam_role.masters-eclipse-che-k8s-local.arn
  masters_role_name            = aws_iam_role.masters-eclipse-che-k8s-local.name
  node_autoscaling_group_ids   = [aws_autoscaling_group.nodes-ap-northeast-1a-eclipse-che-k8s-local.id]
  node_security_group_ids      = [aws_security_group.nodes-eclipse-che-k8s-local.id]
  node_subnet_ids              = [aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id]
  nodes_role_arn               = aws_iam_role.nodes-eclipse-che-k8s-local.arn
  nodes_role_name              = aws_iam_role.nodes-eclipse-che-k8s-local.name
  region                       = "ap-northeast-1"
  route_table_public_id        = aws_route_table.eclipse-che-k8s-local.id
  subnet_ap-northeast-1a_id    = aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id
  vpc_cidr_block               = aws_vpc.eclipse-che-k8s-local.cidr_block
  vpc_id                       = aws_vpc.eclipse-che-k8s-local.id
}

output "cluster_name" {
  value = "eclipse-che.k8s.local"
}

output "master_autoscaling_group_ids" {
  value = [aws_autoscaling_group.master-ap-northeast-1a-masters-eclipse-che-k8s-local.id]
}

output "master_security_group_ids" {
  value = [aws_security_group.masters-eclipse-che-k8s-local.id]
}

output "masters_role_arn" {
  value = aws_iam_role.masters-eclipse-che-k8s-local.arn
}

output "masters_role_name" {
  value = aws_iam_role.masters-eclipse-che-k8s-local.name
}

output "node_autoscaling_group_ids" {
  value = [aws_autoscaling_group.nodes-ap-northeast-1a-eclipse-che-k8s-local.id]
}

output "node_security_group_ids" {
  value = [aws_security_group.nodes-eclipse-che-k8s-local.id]
}

output "node_subnet_ids" {
  value = [aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id]
}

output "nodes_role_arn" {
  value = aws_iam_role.nodes-eclipse-che-k8s-local.arn
}

output "nodes_role_name" {
  value = aws_iam_role.nodes-eclipse-che-k8s-local.name
}

output "region" {
  value = "ap-northeast-1"
}

output "route_table_public_id" {
  value = aws_route_table.eclipse-che-k8s-local.id
}

output "subnet_ap-northeast-1a_id" {
  value = aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id
}

output "vpc_cidr_block" {
  value = aws_vpc.eclipse-che-k8s-local.cidr_block
}

output "vpc_id" {
  value = aws_vpc.eclipse-che-k8s-local.id
}

provider "aws" {
  region = "ap-northeast-1"
}

resource "aws_autoscaling_group" "master-ap-northeast-1a-masters-eclipse-che-k8s-local" {
  enabled_metrics = ["GroupDesiredCapacity", "GroupInServiceInstances", "GroupMaxSize", "GroupMinSize", "GroupPendingInstances", "GroupStandbyInstances", "GroupTerminatingInstances", "GroupTotalInstances"]
  launch_template {
    id      = aws_launch_template.master-ap-northeast-1a-masters-eclipse-che-k8s-local.id
    version = aws_launch_template.master-ap-northeast-1a-masters-eclipse-che-k8s-local.latest_version
  }
  load_balancers      = [aws_elb.api-eclipse-che-k8s-local.id]
  max_size            = 1
  metrics_granularity = "1Minute"
  min_size            = 1
  name                = "master-ap-northeast-1a.masters.eclipse-che.k8s.local"
  tag {
    key                 = "KubernetesCluster"
    propagate_at_launch = true
    value               = "eclipse-che.k8s.local"
  }
  tag {
    key                 = "Name"
    propagate_at_launch = true
    value               = "master-ap-northeast-1a.masters.eclipse-che.k8s.local"
  }
  tag {
    key                 = "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup"
    propagate_at_launch = true
    value               = "master-ap-northeast-1a"
  }
  tag {
    key                 = "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"
    propagate_at_launch = true
    value               = "master"
  }
  tag {
    key                 = "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane"
    propagate_at_launch = true
    value               = ""
  }
  tag {
    key                 = "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master"
    propagate_at_launch = true
    value               = ""
  }
  tag {
    key                 = "k8s.io/role/master"
    propagate_at_launch = true
    value               = "1"
  }
  tag {
    key                 = "kops.k8s.io/instancegroup"
    propagate_at_launch = true
    value               = "master-ap-northeast-1a"
  }
  tag {
    key                 = "kubernetes.io/cluster/eclipse-che.k8s.local"
    propagate_at_launch = true
    value               = "owned"
  }
  vpc_zone_identifier = [aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id]
}

resource "aws_autoscaling_group" "nodes-ap-northeast-1a-eclipse-che-k8s-local" {
  enabled_metrics = ["GroupDesiredCapacity", "GroupInServiceInstances", "GroupMaxSize", "GroupMinSize", "GroupPendingInstances", "GroupStandbyInstances", "GroupTerminatingInstances", "GroupTotalInstances"]
  launch_template {
    id      = aws_launch_template.nodes-ap-northeast-1a-eclipse-che-k8s-local.id
    version = aws_launch_template.nodes-ap-northeast-1a-eclipse-che-k8s-local.latest_version
  }
  max_size            = 1
  metrics_granularity = "1Minute"
  min_size            = 1
  name                = "nodes-ap-northeast-1a.eclipse-che.k8s.local"
  tag {
    key                 = "KubernetesCluster"
    propagate_at_launch = true
    value               = "eclipse-che.k8s.local"
  }
  tag {
    key                 = "Name"
    propagate_at_launch = true
    value               = "nodes-ap-northeast-1a.eclipse-che.k8s.local"
  }
  tag {
    key                 = "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup"
    propagate_at_launch = true
    value               = "nodes-ap-northeast-1a"
  }
  tag {
    key                 = "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"
    propagate_at_launch = true
    value               = "node"
  }
  tag {
    key                 = "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/node"
    propagate_at_launch = true
    value               = ""
  }
  tag {
    key                 = "k8s.io/role/node"
    propagate_at_launch = true
    value               = "1"
  }
  tag {
    key                 = "kops.k8s.io/instancegroup"
    propagate_at_launch = true
    value               = "nodes-ap-northeast-1a"
  }
  tag {
    key                 = "kubernetes.io/cluster/eclipse-che.k8s.local"
    propagate_at_launch = true
    value               = "owned"
  }
  vpc_zone_identifier = [aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id]
}

resource "aws_ebs_volume" "a-etcd-events-eclipse-che-k8s-local" {
  availability_zone = "ap-northeast-1a"
  encrypted         = true
  iops              = 3000
  size              = 20
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "a.etcd-events.eclipse-che.k8s.local"
    "k8s.io/etcd/events"                          = "a/a"
    "k8s.io/role/master"                          = "1"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
  throughput = 125
  type       = "gp3"
}

resource "aws_ebs_volume" "a-etcd-main-eclipse-che-k8s-local" {
  availability_zone = "ap-northeast-1a"
  encrypted         = true
  iops              = 3000
  size              = 20
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "a.etcd-main.eclipse-che.k8s.local"
    "k8s.io/etcd/main"                            = "a/a"
    "k8s.io/role/master"                          = "1"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
  throughput = 125
  type       = "gp3"
}

resource "aws_elb" "api-eclipse-che-k8s-local" {
  cross_zone_load_balancing = false
  health_check {
    healthy_threshold   = 2
    interval            = 10
    target              = "SSL:443"
    timeout             = 5
    unhealthy_threshold = 2
  }
  idle_timeout = 300
  listener {
    instance_port     = 443
    instance_protocol = "TCP"
    lb_port           = 443
    lb_protocol       = "TCP"
  }
  name            = "api-eclipse-che-k8s-local-hqrpnr"
  security_groups = [aws_security_group.api-elb-eclipse-che-k8s-local.id]
  subnets         = [aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id]
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "api.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

resource "aws_iam_instance_profile" "masters-eclipse-che-k8s-local" {
  name = "masters.eclipse-che.k8s.local"
  role = aws_iam_role.masters-eclipse-che-k8s-local.name
}

resource "aws_iam_instance_profile" "nodes-eclipse-che-k8s-local" {
  name = "nodes.eclipse-che.k8s.local"
  role = aws_iam_role.nodes-eclipse-che-k8s-local.name
}

resource "aws_iam_role_policy" "masters-eclipse-che-k8s-local" {
  name   = "masters.eclipse-che.k8s.local"
  policy = file("${path.module}/data/aws_iam_role_policy_masters.eclipse-che.k8s.local_policy")
  role   = aws_iam_role.masters-eclipse-che-k8s-local.name
}

resource "aws_iam_role_policy" "nodes-eclipse-che-k8s-local" {
  name   = "nodes.eclipse-che.k8s.local"
  policy = file("${path.module}/data/aws_iam_role_policy_nodes.eclipse-che.k8s.local_policy")
  role   = aws_iam_role.nodes-eclipse-che-k8s-local.name
}

resource "aws_iam_role" "masters-eclipse-che-k8s-local" {
  assume_role_policy = file("${path.module}/data/aws_iam_role_masters.eclipse-che.k8s.local_policy")
  name               = "masters.eclipse-che.k8s.local"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "masters.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

resource "aws_iam_role" "nodes-eclipse-che-k8s-local" {
  assume_role_policy = file("${path.module}/data/aws_iam_role_nodes.eclipse-che.k8s.local_policy")
  name               = "nodes.eclipse-che.k8s.local"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "nodes.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

resource "aws_internet_gateway" "eclipse-che-k8s-local" {
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_key_pair" "kubernetes-eclipse-che-k8s-local-45ccad7fdfb05b1a495b8317891b89f9" {
  key_name   = "kubernetes.eclipse-che.k8s.local-45:cc:ad:7f:df:b0:5b:1a:49:5b:83:17:89:1b:89:f9"
  public_key = file("${path.module}/data/aws_key_pair_kubernetes.eclipse-che.k8s.local-45ccad7fdfb05b1a495b8317891b89f9_public_key")
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

resource "aws_launch_template" "master-ap-northeast-1a-masters-eclipse-che-k8s-local" {
  block_device_mappings {
    device_name = "/dev/sda1"
    ebs {
      delete_on_termination = true
      encrypted             = true
      iops                  = 3000
      throughput            = 125
      volume_size           = 64
      volume_type           = "gp3"
    }
  }
  iam_instance_profile {
    name = aws_iam_instance_profile.masters-eclipse-che-k8s-local.id
  }
  image_id      = "ami-08b73b6a3bb95a35a"
  instance_type = "t3.medium"
  key_name      = aws_key_pair.kubernetes-eclipse-che-k8s-local-45ccad7fdfb05b1a495b8317891b89f9.id
  lifecycle {
    create_before_destroy = true
  }
  metadata_options {
    http_endpoint               = "enabled"
    http_put_response_hop_limit = 1
    http_tokens                 = "optional"
  }
  name = "master-ap-northeast-1a.masters.eclipse-che.k8s.local"
  network_interfaces {
    associate_public_ip_address = true
    delete_on_termination       = true
    security_groups             = [aws_security_group.masters-eclipse-che-k8s-local.id]
  }
  tag_specifications {
    resource_type = "instance"
    tags = {
      "KubernetesCluster"                                                                   = "eclipse-che.k8s.local"
      "Name"                                                                                = "master-ap-northeast-1a.masters.eclipse-che.k8s.local"
      "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup"             = "master-ap-northeast-1a"
      "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"                    = "master"
      "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane" = ""
      "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master"        = ""
      "k8s.io/role/master"                                                                  = "1"
      "kops.k8s.io/instancegroup"                                                           = "master-ap-northeast-1a"
      "kubernetes.io/cluster/eclipse-che.k8s.local"                                         = "owned"
    }
  }
  tag_specifications {
    resource_type = "volume"
    tags = {
      "KubernetesCluster"                                                                   = "eclipse-che.k8s.local"
      "Name"                                                                                = "master-ap-northeast-1a.masters.eclipse-che.k8s.local"
      "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup"             = "master-ap-northeast-1a"
      "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"                    = "master"
      "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane" = ""
      "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master"        = ""
      "k8s.io/role/master"                                                                  = "1"
      "kops.k8s.io/instancegroup"                                                           = "master-ap-northeast-1a"
      "kubernetes.io/cluster/eclipse-che.k8s.local"                                         = "owned"
    }
  }
  tags = {
    "KubernetesCluster"                                                                   = "eclipse-che.k8s.local"
    "Name"                                                                                = "master-ap-northeast-1a.masters.eclipse-che.k8s.local"
    "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup"             = "master-ap-northeast-1a"
    "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"                    = "master"
    "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane" = ""
    "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master"        = ""
    "k8s.io/role/master"                                                                  = "1"
    "kops.k8s.io/instancegroup"                                                           = "master-ap-northeast-1a"
    "kubernetes.io/cluster/eclipse-che.k8s.local"                                         = "owned"
  }
  user_data = filebase64("${path.module}/data/aws_launch_template_master-ap-northeast-1a.masters.eclipse-che.k8s.local_user_data")
}

resource "aws_launch_template" "nodes-ap-northeast-1a-eclipse-che-k8s-local" {
  block_device_mappings {
    device_name = "/dev/sda1"
    ebs {
      delete_on_termination = true
      encrypted             = true
      iops                  = 3000
      throughput            = 125
      volume_size           = 128
      volume_type           = "gp3"
    }
  }
  iam_instance_profile {
    name = aws_iam_instance_profile.nodes-eclipse-che-k8s-local.id
  }
  image_id      = "ami-08b73b6a3bb95a35a"
  instance_type = "t3.medium"
  key_name      = aws_key_pair.kubernetes-eclipse-che-k8s-local-45ccad7fdfb05b1a495b8317891b89f9.id
  lifecycle {
    create_before_destroy = true
  }
  metadata_options {
    http_endpoint               = "enabled"
    http_put_response_hop_limit = 1
    http_tokens                 = "optional"
  }
  name = "nodes-ap-northeast-1a.eclipse-che.k8s.local"
  network_interfaces {
    associate_public_ip_address = true
    delete_on_termination       = true
    security_groups             = [aws_security_group.nodes-eclipse-che-k8s-local.id]
  }
  tag_specifications {
    resource_type = "instance"
    tags = {
      "KubernetesCluster"                                                          = "eclipse-che.k8s.local"
      "Name"                                                                       = "nodes-ap-northeast-1a.eclipse-che.k8s.local"
      "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup"    = "nodes-ap-northeast-1a"
      "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"           = "node"
      "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/node" = ""
      "k8s.io/role/node"                                                           = "1"
      "kops.k8s.io/instancegroup"                                                  = "nodes-ap-northeast-1a"
      "kubernetes.io/cluster/eclipse-che.k8s.local"                                = "owned"
    }
  }
  tag_specifications {
    resource_type = "volume"
    tags = {
      "KubernetesCluster"                                                          = "eclipse-che.k8s.local"
      "Name"                                                                       = "nodes-ap-northeast-1a.eclipse-che.k8s.local"
      "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup"    = "nodes-ap-northeast-1a"
      "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"           = "node"
      "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/node" = ""
      "k8s.io/role/node"                                                           = "1"
      "kops.k8s.io/instancegroup"                                                  = "nodes-ap-northeast-1a"
      "kubernetes.io/cluster/eclipse-che.k8s.local"                                = "owned"
    }
  }
  tags = {
    "KubernetesCluster"                                                          = "eclipse-che.k8s.local"
    "Name"                                                                       = "nodes-ap-northeast-1a.eclipse-che.k8s.local"
    "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup"    = "nodes-ap-northeast-1a"
    "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"           = "node"
    "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/node" = ""
    "k8s.io/role/node"                                                           = "1"
    "kops.k8s.io/instancegroup"                                                  = "nodes-ap-northeast-1a"
    "kubernetes.io/cluster/eclipse-che.k8s.local"                                = "owned"
  }
  user_data = filebase64("${path.module}/data/aws_launch_template_nodes-ap-northeast-1a.eclipse-che.k8s.local_user_data")
}

resource "aws_route_table_association" "ap-northeast-1a-eclipse-che-k8s-local" {
  route_table_id = aws_route_table.eclipse-che-k8s-local.id
  subnet_id      = aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id
}

resource "aws_route_table" "eclipse-che-k8s-local" {
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
    "kubernetes.io/kops/role"                     = "public"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_route" "route-0-0-0-0--0" {
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.eclipse-che-k8s-local.id
  route_table_id         = aws_route_table.eclipse-che-k8s-local.id
}

resource "aws_security_group_rule" "from-0-0-0-0--0-ingress-tcp-22to22-masters-eclipse-che-k8s-local" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 22
  protocol          = "tcp"
  security_group_id = aws_security_group.masters-eclipse-che-k8s-local.id
  to_port           = 22
  type              = "ingress"
}

resource "aws_security_group_rule" "from-0-0-0-0--0-ingress-tcp-22to22-nodes-eclipse-che-k8s-local" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 22
  protocol          = "tcp"
  security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port           = 22
  type              = "ingress"
}

resource "aws_security_group_rule" "from-0-0-0-0--0-ingress-tcp-443to443-api-elb-eclipse-che-k8s-local" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 443
  protocol          = "tcp"
  security_group_id = aws_security_group.api-elb-eclipse-che-k8s-local.id
  to_port           = 443
  type              = "ingress"
}

resource "aws_security_group_rule" "from-api-elb-eclipse-che-k8s-local-egress-all-0to0-0-0-0-0--0" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.api-elb-eclipse-che-k8s-local.id
  to_port           = 0
  type              = "egress"
}

resource "aws_security_group_rule" "from-masters-eclipse-che-k8s-local-egress-all-0to0-0-0-0-0--0" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.masters-eclipse-che-k8s-local.id
  to_port           = 0
  type              = "egress"
}

resource "aws_security_group_rule" "from-masters-eclipse-che-k8s-local-ingress-all-0to0-masters-eclipse-che-k8s-local" {
  from_port                = 0
  protocol                 = "-1"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.masters-eclipse-che-k8s-local.id
  to_port                  = 0
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-masters-eclipse-che-k8s-local-ingress-all-0to0-nodes-eclipse-che-k8s-local" {
  from_port                = 0
  protocol                 = "-1"
  security_group_id        = aws_security_group.nodes-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.masters-eclipse-che-k8s-local.id
  to_port                  = 0
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-egress-all-0to0-0-0-0-0--0" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port           = 0
  type              = "egress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-ingress-all-0to0-nodes-eclipse-che-k8s-local" {
  from_port                = 0
  protocol                 = "-1"
  security_group_id        = aws_security_group.nodes-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port                  = 0
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-ingress-tcp-1to2379-masters-eclipse-che-k8s-local" {
  from_port                = 1
  protocol                 = "tcp"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port                  = 2379
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-ingress-tcp-2382to4000-masters-eclipse-che-k8s-local" {
  from_port                = 2382
  protocol                 = "tcp"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port                  = 4000
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-ingress-tcp-4003to65535-masters-eclipse-che-k8s-local" {
  from_port                = 4003
  protocol                 = "tcp"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port                  = 65535
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-ingress-udp-1to65535-masters-eclipse-che-k8s-local" {
  from_port                = 1
  protocol                 = "udp"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port                  = 65535
  type                     = "ingress"
}

resource "aws_security_group_rule" "https-elb-to-master" {
  from_port                = 443
  protocol                 = "tcp"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.api-elb-eclipse-che-k8s-local.id
  to_port                  = 443
  type                     = "ingress"
}

resource "aws_security_group_rule" "icmp-pmtu-api-elb-0-0-0-0--0" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 3
  protocol          = "icmp"
  security_group_id = aws_security_group.api-elb-eclipse-che-k8s-local.id
  to_port           = 4
  type              = "ingress"
}

resource "aws_security_group" "api-elb-eclipse-che-k8s-local" {
  description = "Security group for api ELB"
  name        = "api-elb.eclipse-che.k8s.local"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "api-elb.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_security_group" "masters-eclipse-che-k8s-local" {
  description = "Security group for masters"
  name        = "masters.eclipse-che.k8s.local"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "masters.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_security_group" "nodes-eclipse-che-k8s-local" {
  description = "Security group for nodes"
  name        = "nodes.eclipse-che.k8s.local"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "nodes.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_subnet" "ap-northeast-1a-eclipse-che-k8s-local" {
  availability_zone = "ap-northeast-1a"
  cidr_block        = "172.20.32.0/19"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "ap-northeast-1a.eclipse-che.k8s.local"
    "SubnetType"                                  = "Public"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
    "kubernetes.io/role/elb"                      = "1"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_vpc_dhcp_options_association" "eclipse-che-k8s-local" {
  dhcp_options_id = aws_vpc_dhcp_options.eclipse-che-k8s-local.id
  vpc_id          = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_vpc_dhcp_options" "eclipse-che-k8s-local" {
  domain_name         = "ap-northeast-1.compute.internal"
  domain_name_servers = ["AmazonProvidedDNS"]
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

resource "aws_vpc" "eclipse-che-k8s-local" {
  cidr_block           = "172.20.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

terraform {
  required_version = ">= 0.12.26"
  required_providers {
    aws = {
      "source"  = "hashicorp/aws"
      "version" = ">= 2.46.0"
    }
  }
}

Now delete the cluster-related definitions (the autoscaling groups, launch templates, and so on) from this file, leaving only the resources Terraform should manage.

kubernetes.tf

locals {
  cluster_name                 = "eclipse-che.k8s.local"
  master_security_group_ids    = [aws_security_group.masters-eclipse-che-k8s-local.id]
  masters_role_arn             = aws_iam_role.masters-eclipse-che-k8s-local.arn
  masters_role_name            = aws_iam_role.masters-eclipse-che-k8s-local.name
  node_security_group_ids      = [aws_security_group.nodes-eclipse-che-k8s-local.id]
  node_subnet_ids              = [aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id]
  nodes_role_arn               = aws_iam_role.nodes-eclipse-che-k8s-local.arn
  nodes_role_name              = aws_iam_role.nodes-eclipse-che-k8s-local.name
  region                       = "ap-northeast-1"
  route_table_public_id        = aws_route_table.eclipse-che-k8s-local.id
  subnet_ap-northeast-1a_id    = aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id
  vpc_cidr_block               = aws_vpc.eclipse-che-k8s-local.cidr_block
  vpc_id                       = aws_vpc.eclipse-che-k8s-local.id
}

output "cluster_name" {
  value = "eclipse-che.k8s.local"
}

output "master_security_group_ids" {
  value = [aws_security_group.masters-eclipse-che-k8s-local.id]
}

output "masters_role_arn" {
  value = aws_iam_role.masters-eclipse-che-k8s-local.arn
}

output "masters_role_name" {
  value = aws_iam_role.masters-eclipse-che-k8s-local.name
}

output "node_security_group_ids" {
  value = [aws_security_group.nodes-eclipse-che-k8s-local.id]
}

output "node_subnet_ids" {
  value = [aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id]
}

output "nodes_role_arn" {
  value = aws_iam_role.nodes-eclipse-che-k8s-local.arn
}

output "nodes_role_name" {
  value = aws_iam_role.nodes-eclipse-che-k8s-local.name
}

output "region" {
  value = "ap-northeast-1"
}

output "route_table_public_id" {
  value = aws_route_table.eclipse-che-k8s-local.id
}

output "subnet_ap-northeast-1a_id" {
  value = aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id
}

output "vpc_cidr_block" {
  value = aws_vpc.eclipse-che-k8s-local.cidr_block
}

output "vpc_id" {
  value = aws_vpc.eclipse-che-k8s-local.id
}

provider "aws" {
  region = "ap-northeast-1"
}

resource "aws_elb" "api-eclipse-che-k8s-local" {
  cross_zone_load_balancing = false
  health_check {
    healthy_threshold   = 2
    interval            = 10
    target              = "SSL:443"
    timeout             = 5
    unhealthy_threshold = 2
  }
  idle_timeout = 300
  listener {
    instance_port     = 443
    instance_protocol = "TCP"
    lb_port           = 443
    lb_protocol       = "TCP"
  }
  name            = "api-eclipse-che-k8s-local-hqrpnr"
  security_groups = [aws_security_group.api-elb-eclipse-che-k8s-local.id]
  subnets         = [aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id]
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "api.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

resource "aws_iam_instance_profile" "masters-eclipse-che-k8s-local" {
  name = "masters.eclipse-che.k8s.local"
  role = aws_iam_role.masters-eclipse-che-k8s-local.name
}

resource "aws_iam_instance_profile" "nodes-eclipse-che-k8s-local" {
  name = "nodes.eclipse-che.k8s.local"
  role = aws_iam_role.nodes-eclipse-che-k8s-local.name
}

resource "aws_iam_role_policy" "masters-eclipse-che-k8s-local" {
  name   = "masters.eclipse-che.k8s.local"
  policy = file("${path.module}/data/aws_iam_role_policy_masters.eclipse-che.k8s.local_policy")
  role   = aws_iam_role.masters-eclipse-che-k8s-local.name
}

resource "aws_iam_role_policy" "nodes-eclipse-che-k8s-local" {
  name   = "nodes.eclipse-che.k8s.local"
  policy = file("${path.module}/data/aws_iam_role_policy_nodes.eclipse-che.k8s.local_policy")
  role   = aws_iam_role.nodes-eclipse-che-k8s-local.name
}

resource "aws_iam_role" "masters-eclipse-che-k8s-local" {
  assume_role_policy = file("${path.module}/data/aws_iam_role_masters.eclipse-che.k8s.local_policy")
  name               = "masters.eclipse-che.k8s.local"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "masters.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

resource "aws_iam_role" "nodes-eclipse-che-k8s-local" {
  assume_role_policy = file("${path.module}/data/aws_iam_role_nodes.eclipse-che.k8s.local_policy")
  name               = "nodes.eclipse-che.k8s.local"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "nodes.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

resource "aws_internet_gateway" "eclipse-che-k8s-local" {
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_route_table_association" "ap-northeast-1a-eclipse-che-k8s-local" {
  route_table_id = aws_route_table.eclipse-che-k8s-local.id
  subnet_id      = aws_subnet.ap-northeast-1a-eclipse-che-k8s-local.id
}

resource "aws_route_table" "eclipse-che-k8s-local" {
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
    "kubernetes.io/kops/role"                     = "public"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_route" "route-0-0-0-0--0" {
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.eclipse-che-k8s-local.id
  route_table_id         = aws_route_table.eclipse-che-k8s-local.id
}

resource "aws_security_group_rule" "from-0-0-0-0--0-ingress-tcp-22to22-masters-eclipse-che-k8s-local" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 22
  protocol          = "tcp"
  security_group_id = aws_security_group.masters-eclipse-che-k8s-local.id
  to_port           = 22
  type              = "ingress"
}

resource "aws_security_group_rule" "from-0-0-0-0--0-ingress-tcp-22to22-nodes-eclipse-che-k8s-local" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 22
  protocol          = "tcp"
  security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port           = 22
  type              = "ingress"
}

resource "aws_security_group_rule" "from-0-0-0-0--0-ingress-tcp-443to443-api-elb-eclipse-che-k8s-local" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 443
  protocol          = "tcp"
  security_group_id = aws_security_group.api-elb-eclipse-che-k8s-local.id
  to_port           = 443
  type              = "ingress"
}

resource "aws_security_group_rule" "from-api-elb-eclipse-che-k8s-local-egress-all-0to0-0-0-0-0--0" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.api-elb-eclipse-che-k8s-local.id
  to_port           = 0
  type              = "egress"
}

resource "aws_security_group_rule" "from-masters-eclipse-che-k8s-local-egress-all-0to0-0-0-0-0--0" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.masters-eclipse-che-k8s-local.id
  to_port           = 0
  type              = "egress"
}

resource "aws_security_group_rule" "from-masters-eclipse-che-k8s-local-ingress-all-0to0-masters-eclipse-che-k8s-local" {
  from_port                = 0
  protocol                 = "-1"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.masters-eclipse-che-k8s-local.id
  to_port                  = 0
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-masters-eclipse-che-k8s-local-ingress-all-0to0-nodes-eclipse-che-k8s-local" {
  from_port                = 0
  protocol                 = "-1"
  security_group_id        = aws_security_group.nodes-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.masters-eclipse-che-k8s-local.id
  to_port                  = 0
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-egress-all-0to0-0-0-0-0--0" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port           = 0
  type              = "egress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-ingress-all-0to0-nodes-eclipse-che-k8s-local" {
  from_port                = 0
  protocol                 = "-1"
  security_group_id        = aws_security_group.nodes-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port                  = 0
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-ingress-tcp-1to2379-masters-eclipse-che-k8s-local" {
  from_port                = 1
  protocol                 = "tcp"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port                  = 2379
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-ingress-tcp-2382to4000-masters-eclipse-che-k8s-local" {
  from_port                = 2382
  protocol                 = "tcp"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port                  = 4000
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-ingress-tcp-4003to65535-masters-eclipse-che-k8s-local" {
  from_port                = 4003
  protocol                 = "tcp"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port                  = 65535
  type                     = "ingress"
}

resource "aws_security_group_rule" "from-nodes-eclipse-che-k8s-local-ingress-udp-1to65535-masters-eclipse-che-k8s-local" {
  from_port                = 1
  protocol                 = "udp"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.nodes-eclipse-che-k8s-local.id
  to_port                  = 65535
  type                     = "ingress"
}

resource "aws_security_group_rule" "https-elb-to-master" {
  from_port                = 443
  protocol                 = "tcp"
  security_group_id        = aws_security_group.masters-eclipse-che-k8s-local.id
  source_security_group_id = aws_security_group.api-elb-eclipse-che-k8s-local.id
  to_port                  = 443
  type                     = "ingress"
}

resource "aws_security_group_rule" "icmp-pmtu-api-elb-0-0-0-0--0" {
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = 3
  protocol          = "icmp"
  security_group_id = aws_security_group.api-elb-eclipse-che-k8s-local.id
  to_port           = 4
  type              = "ingress"
}

resource "aws_security_group" "api-elb-eclipse-che-k8s-local" {
  description = "Security group for api ELB"
  name        = "api-elb.eclipse-che.k8s.local"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "api-elb.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_security_group" "masters-eclipse-che-k8s-local" {
  description = "Security group for masters"
  name        = "masters.eclipse-che.k8s.local"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "masters.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_security_group" "nodes-eclipse-che-k8s-local" {
  description = "Security group for nodes"
  name        = "nodes.eclipse-che.k8s.local"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "nodes.eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_subnet" "ap-northeast-1a-eclipse-che-k8s-local" {
  availability_zone = "ap-northeast-1a"
  cidr_block        = "172.20.32.0/19"
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "ap-northeast-1a.eclipse-che.k8s.local"
    "SubnetType"                                  = "Public"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
    "kubernetes.io/role/elb"                      = "1"
  }
  vpc_id = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_vpc_dhcp_options_association" "eclipse-che-k8s-local" {
  dhcp_options_id = aws_vpc_dhcp_options.eclipse-che-k8s-local.id
  vpc_id          = aws_vpc.eclipse-che-k8s-local.id
}

resource "aws_vpc_dhcp_options" "eclipse-che-k8s-local" {
  domain_name         = "ap-northeast-1.compute.internal"
  domain_name_servers = ["AmazonProvidedDNS"]
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

resource "aws_vpc" "eclipse-che-k8s-local" {
  cidr_block           = "172.20.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
  tags = {
    "KubernetesCluster"                           = "eclipse-che.k8s.local"
    "Name"                                        = "eclipse-che.k8s.local"
    "kubernetes.io/cluster/eclipse-che.k8s.local" = "owned"
  }
}

terraform {
  required_version = ">= 0.12.26"
  required_providers {
    aws = {
      "source"  = "hashicorp/aws"
      "version" = ">= 2.46.0"
    }
  }
}

Deploy the infrastructure with Terraform

Start the Docker container

docker run -it --rm -v "$(pwd):/work" --workdir "/work" --entrypoint sh hashicorp/terraform:0.15.3

Deploy

export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
terraform init
terraform plan
terraform apply

Deploy the k8s cluster with kOps

Empty ${KOPS_STATE_STORE} (e.g. with aws s3 rm ${KOPS_STATE_STORE} --recursive), then re-create the cluster definition, this time assigning it to the subnet created by Terraform.

VPC_ID=$(aws ec2 describe-vpcs --filters Name=tag-value,Values=${KOPS_CLUSTER_NAME} --query "Vpcs[0].VpcId" --output text)
K8S_SUBNET_ID=$(aws ec2 describe-subnets --filters Name=tag-value,Values=${KOPS_CLUSTER_NAME} --query "Subnets[0].SubnetId" --output text)

kops create cluster --name ${KOPS_CLUSTER_NAME} \
  --dry-run=true \
  --zones="ap-northeast-1a" \
  --subnets=${K8S_SUBNET_ID} \
  --output=yaml > manifest.yaml

Update manifest.yaml.

...(snip)
  subnets:
  - cidr: 172.20.32.0/19
    name: ap-northeast-1a
    type: Public
    zone: ap-northeast-1a
    id: subnet-057982f9b0a3ca02a # set this to the value of ${K8S_SUBNET_ID}
...(snip)
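The id line can also be injected without opening an editor. A minimal sketch, assuming the subnets entry looks exactly like the snippet above; the here-doc stands in for the real manifest.yaml, and the subnet id is the illustrative value shown there:

```shell
K8S_SUBNET_ID="subnet-057982f9b0a3ca02a"  # would come from the describe-subnets query above

# Stand-in for the real manifest.yaml (here-doc for illustration only).
cat > manifest.yaml <<'EOF'
  subnets:
  - cidr: 172.20.32.0/19
    name: ap-northeast-1a
    type: Public
    zone: ap-northeast-1a
EOF

# Append an "id:" line right after the zone entry (GNU sed).
sed -i -e "s|^    zone: ap-northeast-1a\$|&\n    id: ${K8S_SUBNET_ID}|" manifest.yaml
grep "id:" manifest.yaml
```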
kops create -f manifest.yaml
kops create secret --name ${KOPS_CLUSTER_NAME} sshpublickey admin -i ~/.ssh/id_rsa.pub
kops update cluster --yes
kops export kubecfg --admin
kops validate cluster --wait 10m

Install the Ingress controller

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.0/deploy/static/provider/aws/deploy.yaml
CHE_HOST_NAME="$(kubectl get services --namespace ingress-nginx -o jsonpath='{.items[].status.loadBalancer.ingress[0].hostname}')"
DOMAIN_IP="$(dig +short ${CHE_HOST_NAME})"
DOMAIN="$(echo ${DOMAIN_IP} | sed -e "s/\./-/g").nip.io"
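To make the last transformation concrete, here is the same sed substitution run on a hypothetical documentation IP (the real DOMAIN_IP comes from dig against the ELB hostname):

```shell
# Illustrative address only; the real value is resolved from the ELB hostname.
DOMAIN_IP="203.0.113.10"
# Replace dots with dashes and append the nip.io wildcard-DNS suffix.
DOMAIN="$(echo ${DOMAIN_IP} | sed -e "s/\./-/g").nip.io"
echo "${DOMAIN}"  # → 203-0-113-10.nip.io
```

Any name under that wildcard (such as che-che.203-0-113-10.nip.io) resolves back to the original IP, which is what the Eclipse Che ingress below relies on.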

Install Eclipse Che

git clone --depth 1 https://github.com/eclipse/che
cd che/deploy/kubernetes/helm/che/
helm dependency update
kubectl create namespace che
helm upgrade --install che --namespace che --set global.ingressDomain=${DOMAIN} ./

Access http://che-che.${DOMAIN}. It works.

Shut down the cluster.

$ kops get
Cluster
NAME                    CLOUD   ZONES
eclipse-che.k8s.local   aws     ap-northeast-1a

Instance Groups
NAME                    ROLE    MACHINETYPE     MIN     MAX     ZONES
master-ap-northeast-1a  Master  t3.medium       0       0       ap-northeast-1a
nodes-ap-northeast-1a   Node    t3.medium       0       0       ap-northeast-1a

$ kops edit ig master-ap-northeast-1a
...(set maxSize and minSize to 0)

$ kops edit ig nodes-ap-northeast-1a
...(set maxSize and minSize to 0)

$ kops update cluster --yes

Confirmed in the AWS console that the instances are gone. OK.

Adding a firewall and checking whether it survives as intended will be left for another time.

