Saturday, May 29, 2021

Creating and operating a Kubernetes cluster on AWS with kOps and Terraform

Using kOps and Terraform together, deploy a Kubernetes cluster in the "Example route tables in the single zone architecture with the firewall" layout from Simple single zone architecture with an internet gateway - AWS Network Firewall.

The basic flow is the same as Deploying Kubernetes clusters with kops and Terraform | by Alberto Alvarez | Bench Engineering | Medium.

Prerequisites

  • OS: Windows 10 Pro
  • Docker: Docker version 20.10.6, build 370c289

Preparing AWS

Create the admin IAM user

Add a user group

  • User group name: webide-admin
  • Permission policies
    • AmazonEC2FullAccess
    • AmazonS3FullAccess
    • AmazonVPCFullAccess
    • IAMFullAccess
    • CloudWatchFullAccess
    • An inline policy granting full access to Network Firewall
  • S3 bucket for the Terraform backend already created (see the CLI sketch after this list)
    • Bucket name: mikoto2000-terraform-state
    • Region: ap-northeast-1
  • S3 bucket for the kOps state store already created
    • Bucket name: eclipse-che.k8s.local-kops-state-store
    • Region: ap-northeast-1
  • Elastic IP already allocated
    • EIP ID: eipalloc-0562e26282f012ea8
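
For reference, a minimal sketch of creating these prerequisites with the AWS CLI (the bucket names, region, and EIP are the values listed above; this assumes an already-configured AWS CLI with sufficient rights):

# S3 bucket for the Terraform backend
aws s3api create-bucket \
  --bucket mikoto2000-terraform-state \
  --region ap-northeast-1 \
  --create-bucket-configuration LocationConstraint=ap-northeast-1

# S3 bucket for the kOps state store
aws s3api create-bucket \
  --bucket eclipse-che.k8s.local-kops-state-store \
  --region ap-northeast-1 \
  --create-bucket-configuration LocationConstraint=ap-northeast-1

# Allocate an Elastic IP and note the returned AllocationId
aws ec2 allocate-address --domain vpc --region ap-northeast-1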

Add a user

  • User name: webide-admin
  • Access type: Programmatic access
  • Permissions: add to the webide-admin group

Note down the access key ID and secret access key.
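
The console steps above can also be scripted; a rough AWS CLI equivalent (the inline Network Firewall policy is omitted here and would still need to be attached separately):

aws iam create-group --group-name webide-admin
for p in AmazonEC2FullAccess AmazonS3FullAccess AmazonVPCFullAccess IAMFullAccess CloudWatchFullAccess; do
  aws iam attach-group-policy --group-name webide-admin --policy-arn arn:aws:iam::aws:policy/${p}
done
aws iam create-user --user-name webide-admin
aws iam add-user-to-group --group-name webide-admin --user-name webide-admin
# Prints the access key ID and secret access key to record
aws iam create-access-key --user-name webide-admin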

Launch the work environment

docker run -it --rm -v "$(pwd):/work" --workdir "/work" mikoto2000/eks-tools
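
Inside the container, confirm the toolchain is available (assuming the mikoto2000/eks-tools image bundles these tools):

kops version
terraform version
kubectl version --client
helm version
jq --version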

Gather the required information

# Credentials of the webide-admin IAM user created above
export AWS_ACCESS_KEY_ID=AKIAW4LRTYWPDXL5DCOF
export AWS_SECRET_ACCESS_KEY=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

# kOps cluster name and state store bucket
PREFIX=eclipse-che
export KOPS_CLUSTER_NAME=${PREFIX}.k8s.local
export KOPS_STATE_STORE=s3://${KOPS_CLUSTER_NAME}-kops-state-store

# TF_VAR_* environment variables are read by Terraform as input variables
export DOMAIN="35-74-43-49.nip.io"
export TF_VAR_cluster_name=${KOPS_CLUSTER_NAME}
export TF_VAR_domain=${DOMAIN}
WORK_DIR=/work

ssh-keygen
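
ssh-keygen prompts interactively; a non-interactive equivalent that produces the ~/.ssh/id_rsa.pub registered with kOps later:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""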

Build the VPC and Firewall with Terraform

Create the following tf file and deploy it with Terraform.

${WORK_DIR}/terraform/main.tf

# Deploys the
# "Example route tables in the single zone architecture with the firewall"
# layout from the following URL:
#
# https://docs.aws.amazon.com/network-firewall/latest/developerguide/arch-single-zone-igw.html

# variables
variable "domain" {}
variable "cluster_name" {}

# local values
locals {
  cluster_tfname               = replace(var.cluster_name, ".", "-")
  node_subnet_ids              = [aws_subnet.ap-northeast-1a-k8s-cluster.id]
  region                       = "ap-northeast-1"
  route_table_public_id        = aws_route_table.k8s-cluster.id
  subnet_ap-northeast-1a_id    = aws_subnet.ap-northeast-1a-k8s-cluster.id
  subnet_ap-northeast-1a-fw_id = aws_subnet.ap-northeast-1a-k8s-cluster-fw.id
  vpc_cidr_block               = aws_vpc.k8s-cluster.cidr_block
  vpc_id                       = aws_vpc.k8s-cluster.id
  fwrg_k8s_aws_arn             = aws_networkfirewall_rule_group.k8s-aws.arn
  fwrg_k8s_aws_id              = aws_networkfirewall_rule_group.k8s-aws.id
}

output "cluster_name" {
  value = "${var.cluster_name}"
}

output "cluster_tfname" {
  value = local.cluster_tfname
}

output "domain" {
  value = "${var.domain}"
}

output "node_subnet_ids" {
  value = [aws_subnet.ap-northeast-1a-k8s-cluster.id]
}

output "region" {
  value = "ap-northeast-1"
}

output "route_table_public_id" {
  value = aws_route_table.k8s-cluster.id
}

output "subnet_ap-northeast-1a_id" {
  value = aws_subnet.ap-northeast-1a-k8s-cluster.id
}

output "subnet_ap-northeast-1a_cidr_block" {
  value = aws_subnet.ap-northeast-1a-k8s-cluster.cidr_block
}

output "subnet_ap-northeast-1a-fw_id" {
  value = aws_subnet.ap-northeast-1a-k8s-cluster.id
}

output "subnet_ap-northeast-1a-fw_cidr_block" {
  value = aws_subnet.ap-northeast-1a-k8s-cluster.cidr_block
}

output "vpc_cidr_block" {
  value = aws_vpc.k8s-cluster.cidr_block
}

output "vpc_id" {
  value = aws_vpc.k8s-cluster.id
}

output "subnets" {
  value = aws_subnet.ap-northeast-1a-k8s-cluster-fw.*.id
}

output "subnet_count" {
  value = length(aws_subnet.ap-northeast-1a-k8s-cluster-fw.*.id)
}

output "fwrg_k8s_aws_arn" {
  value = aws_networkfirewall_rule_group.k8s-aws.arn
}

output "fwrg_k8s_aws_id" {
  value = aws_networkfirewall_rule_group.k8s-aws.id
}

output "firewall_status" {
  value = tolist(aws_networkfirewall_firewall.k8s-cluster-fw.firewall_status[0].sync_states)
}

output "firewall_endpoint_id" {
  value = element([for ss in tolist(aws_networkfirewall_firewall.k8s-cluster-fw.firewall_status[0].sync_states) : ss.attachment[0].endpoint_id if ss.attachment[0].subnet_id == aws_subnet.ap-northeast-1a-k8s-cluster-fw.id], 0)
}

output "AmazonEKSClusterAutoscalerPolicy_arn" {
  value = aws_iam_policy.AmazonEKSClusterAutoscalerPolicy.arn
}

output "AWSLoadBalancerControllerIAMPolicy_arn" {
  value = aws_iam_policy.AWSLoadBalancerControllerIAMPolicy.arn
}


# VPC

resource "aws_vpc" "k8s-cluster" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
  tags = {
    "KubernetesCluster"                           = "${var.cluster_name}"
    "Name"                                        = "${var.cluster_name}"
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }
}

resource "aws_subnet" "ap-northeast-1a-k8s-cluster-fw" {
  availability_zone = "ap-northeast-1a"
  cidr_block        = "10.0.4.0/28"
  tags = {
    "Name"                                        = "ap-northeast-1a.${var.cluster_name}-fw"
    "SubnetType"                                  = "Public"
    "kubernetes.io/role/elb"                      = "1"
  }
  vpc_id = aws_vpc.k8s-cluster.id
}

resource "aws_subnet" "ap-northeast-1a-k8s-cluster" {
  availability_zone = "ap-northeast-1a"
  cidr_block        = "10.0.2.0/28"
  tags = {
    "KubernetesCluster"                           = "${var.cluster_name}"
    "Name"                                        = "ap-northeast-1a.${var.cluster_name}"
    "SubnetType"                                  = "Public"
    "kubernetes.io/role/elb"                      = "1"
  }
  vpc_id = aws_vpc.k8s-cluster.id
}

resource "aws_vpc_dhcp_options_association" "k8s-cluster" {
  dhcp_options_id = aws_vpc_dhcp_options.k8s-cluster.id
  vpc_id          = aws_vpc.k8s-cluster.id
}

resource "aws_vpc_dhcp_options" "k8s-cluster" {
  domain_name         = "ap-northeast-1.compute.internal"
  domain_name_servers = ["AmazonProvidedDNS"]
  tags = {
    "Name"                                        = "${var.cluster_name}"
  }
}

resource "aws_internet_gateway" "k8s-cluster" {
  tags = {
    "Name"                                        = "${var.cluster_name}"
  }
  vpc_id = aws_vpc.k8s-cluster.id
}


# Route Table

resource "aws_route_table" "igw2fw" {
  tags = {
    "Name"                                        = "${var.cluster_name}-igw2fw"
    "kubernetes.io/kops/role"                     = "public"
  }
  vpc_id = aws_vpc.k8s-cluster.id
}

resource "aws_route" "igw2fw-route-0-0-0-0--0" {
  vpc_endpoint_id        = element([for ss in tolist(aws_networkfirewall_firewall.k8s-cluster-fw.firewall_status[0].sync_states) : ss.attachment[0].endpoint_id if ss.attachment[0].subnet_id == aws_subnet.ap-northeast-1a-k8s-cluster-fw.id], 0)
  destination_cidr_block = "10.0.2.0/28"
  route_table_id         = aws_route_table.igw2fw.id
}

resource "aws_route_table_association" "igw2fw-association" {
  gateway_id = aws_internet_gateway.k8s-cluster.id
  route_table_id = aws_route_table.igw2fw.id
}

resource "aws_route_table" "fw2igw" {
  tags = {
    "Name"                                        = "${var.cluster_name}-fw2igw"
    "kubernetes.io/kops/role"                     = "public"
  }
  vpc_id = aws_vpc.k8s-cluster.id
}

resource "aws_route" "fw2igw-route-0-0-0-0--0" {
  destination_cidr_block = "0.0.0.0/0"
  gateway_id = aws_internet_gateway.k8s-cluster.id
  route_table_id         = aws_route_table.fw2igw.id
}

resource "aws_route_table_association" "fw2igw-association" {
  subnet_id = aws_subnet.ap-northeast-1a-k8s-cluster-fw.id
  route_table_id = aws_route_table.fw2igw.id
}

resource "aws_route_table_association" "ap-northeast-1a-k8s-cluster" {
  route_table_id = aws_route_table.k8s-cluster.id
  subnet_id      = aws_subnet.ap-northeast-1a-k8s-cluster.id
}

resource "aws_route" "route-0-0-0-0--0" {
  destination_cidr_block = "0.0.0.0/0"
  vpc_endpoint_id        = element([for ss in tolist(aws_networkfirewall_firewall.k8s-cluster-fw.firewall_status[0].sync_states) : ss.attachment[0].endpoint_id if ss.attachment[0].subnet_id == aws_subnet.ap-northeast-1a-k8s-cluster-fw.id], 0)
  route_table_id         = aws_route_table.k8s-cluster.id
}

resource "aws_route_table" "k8s-cluster" {
  tags = {
    "KubernetesCluster"                           = "${var.cluster_name}"
    "Name"                                        = "${var.cluster_name}"
    "kubernetes.io/kops/role"                     = "public"
  }
  vpc_id = aws_vpc.k8s-cluster.id
}


# Firewall
resource "aws_networkfirewall_rule_group" "k8s-aws" {
  capacity = 30
  name     = "k8s-aws"
  type     = "STATEFUL"
  rule_group {
    rules_source {
      rules_source_list {
        generated_rules_type = "ALLOWLIST"
        target_types         = ["HTTP_HOST", "TLS_SNI"]
        targets              = [
                                 "motd.ubuntu.com",
                                 "entropy.ubuntu.com",
                                 "sts.amazonaws.com",
                                 ".ap-northeast-1.amazonaws.com",
                                 "security.ubuntu.com",
                                 "ap-northeast-1.ec2.archive.ubuntu.com",
                                 "artifacts.k8s.io",
                                 "api.snapcraft.io",
                                 "api.ecr.us-west-2.amazonaws.com",
                                 "602401143452.dkr.ecr.us-west-2.amazonaws.com",
                                 "prod-us-west-2-starport-layer-bucket.s3.us-west-2.amazonaws.com",
                                 "k8s.gcr.io",
                                 "pricing.us-east-1.amazonaws.com"
                               ]
      }
    }
  }

  tags = {}
}

resource "aws_networkfirewall_rule_group" "che" {
  capacity = 30
  name     = "che"
  type     = "STATEFUL"
  rule_group {
    rules_source {
      rules_source_list {
        generated_rules_type = "ALLOWLIST"
        target_types         = ["HTTP_HOST", "TLS_SNI"]
        targets              = [
                                 ".${var.domain}",
                                 "registry.access.redhat.com",
                                 "production.cloudflare.docker.com",
                                 "storage.googleapis.com",
                                 "grafana.github.io",
                                 "prometheus-community.github.com",
                                 "github.com",
                                 "github-releases.githubusercontent.com",
                                 ".docker.io",
                                 ".quay.io",
                                 "quayio-production-s3.s3.amazonaws.com"
                               ]
      }
    }
  }

  tags = {}
}

resource "aws_networkfirewall_firewall_policy" "k8s-cluster-fwp" {
  name = "${local.cluster_tfname}-fwp"

  firewall_policy {
    stateless_default_actions          = ["aws:forward_to_sfe"]
    stateless_fragment_default_actions = ["aws:forward_to_sfe"]
    stateful_rule_group_reference {
      resource_arn = aws_networkfirewall_rule_group.k8s-aws.arn
    }
    stateful_rule_group_reference {
      resource_arn = aws_networkfirewall_rule_group.che.arn
    }
  }

  tags = {}
}

resource "aws_networkfirewall_firewall" "k8s-cluster-fw" {
  name                = "${local.cluster_tfname}-fw"
  firewall_policy_arn = aws_networkfirewall_firewall_policy.k8s-cluster-fwp.arn
  vpc_id              = aws_vpc.k8s-cluster.id
  subnet_mapping {
    subnet_id = aws_subnet.ap-northeast-1a-k8s-cluster-fw.id
  }

  tags = {}
}

resource "aws_networkfirewall_logging_configuration" "k8s-cluster-fw-logging-configuration" {
  firewall_arn = aws_networkfirewall_firewall.k8s-cluster-fw.arn
  logging_configuration {
    log_destination_config {
      log_destination = {
        logGroup = aws_cloudwatch_log_group.k8s-cluster-fw-log.name
      }
      log_destination_type = "CloudWatchLogs"
      log_type             = "ALERT"
    }
    log_destination_config {
      log_destination = {
        logGroup = aws_cloudwatch_log_group.k8s-cluster-fw-log.name
      }
      log_destination_type = "CloudWatchLogs"
      log_type             = "FLOW"
    }
  }
}

resource "aws_cloudwatch_log_group" "k8s-cluster-fw-log" {
  name = "${var.cluster_name}-fw-log"
}


# IAM


# See: [Create an IAM policy and role - Cluster Autoscaler - Amazon EKS](https://docs.aws.amazon.com/ja_jp/eks/latest/userguide/cluster-autoscaler.html#ca-create-policy)
resource "aws_iam_policy" "AmazonEKSClusterAutoscalerPolicy" {
  name        = "AmazonEKSClusterAutoscalerPolicy"

  policy = jsonencode({
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": [
          "autoscaling:DescribeAutoScalingGroups",
          "autoscaling:DescribeAutoScalingInstances",
          "autoscaling:DescribeLaunchConfigurations",
          "autoscaling:DescribeTags",
          "autoscaling:SetDesiredCapacity",
          "autoscaling:TerminateInstanceInAutoScalingGroup",
          "ec2:DescribeLaunchTemplateVersions"
        ],
        "Resource": "*",
        "Effect": "Allow"
      }
    ]
  })
}

# See: [IAM Permissions - aws-load-balancer-controller/installation.md at main · kubernetes-sigs/aws-load-balancer-controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/main/docs/deploy/installation.md#iam-permissions)
resource "aws_iam_policy" "AWSLoadBalancerControllerIAMPolicy" {
  name        = "AWSLoadBalancerControllerIAMPolicy"

  policy = jsonencode({
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "iam:CreateServiceLinkedRole",
          "ec2:DescribeAccountAttributes",
          "ec2:DescribeAddresses",
          "ec2:DescribeAvailabilityZones",
          "ec2:DescribeInternetGateways",
          "ec2:DescribeVpcs",
          "ec2:DescribeSubnets",
          "ec2:DescribeSecurityGroups",
          "ec2:DescribeInstances",
          "ec2:DescribeNetworkInterfaces",
          "ec2:DescribeTags",
          "ec2:GetCoipPoolUsage",
          "ec2:DescribeCoipPools",
          "elasticloadbalancing:DescribeLoadBalancers",
          "elasticloadbalancing:DescribeLoadBalancerAttributes",
          "elasticloadbalancing:DescribeListeners",
          "elasticloadbalancing:DescribeListenerCertificates",
          "elasticloadbalancing:DescribeSSLPolicies",
          "elasticloadbalancing:DescribeRules",
          "elasticloadbalancing:DescribeTargetGroups",
          "elasticloadbalancing:DescribeTargetGroupAttributes",
          "elasticloadbalancing:DescribeTargetHealth",
          "elasticloadbalancing:DescribeTags"
        ],
        "Resource": "*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "cognito-idp:DescribeUserPoolClient",
          "acm:ListCertificates",
          "acm:DescribeCertificate",
          "iam:ListServerCertificates",
          "iam:GetServerCertificate",
          "waf-regional:GetWebACL",
          "waf-regional:GetWebACLForResource",
          "waf-regional:AssociateWebACL",
          "waf-regional:DisassociateWebACL",
          "wafv2:GetWebACL",
          "wafv2:GetWebACLForResource",
          "wafv2:AssociateWebACL",
          "wafv2:DisassociateWebACL",
          "shield:GetSubscriptionState",
          "shield:DescribeProtection",
          "shield:CreateProtection",
          "shield:DeleteProtection"
        ],
        "Resource": "*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "ec2:AuthorizeSecurityGroupIngress",
          "ec2:RevokeSecurityGroupIngress"
        ],
        "Resource": "*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "ec2:CreateSecurityGroup"
        ],
        "Resource": "*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "ec2:CreateTags"
        ],
        "Resource": "arn:aws:ec2:*:*:security-group/*",
        "Condition": {
          "StringEquals": {
            "ec2:CreateAction": "CreateSecurityGroup"
          },
          "Null": {
            "aws:RequestTag/elbv2.k8s.aws/cluster": "false"
          }
        }
      },
      {
        "Effect": "Allow",
        "Action": [
          "ec2:CreateTags",
          "ec2:DeleteTags"
        ],
        "Resource": "arn:aws:ec2:*:*:security-group/*",
        "Condition": {
          "Null": {
            "aws:RequestTag/elbv2.k8s.aws/cluster": "true",
            "aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
          }
        }
      },
      {
        "Effect": "Allow",
        "Action": [
          "ec2:AuthorizeSecurityGroupIngress",
          "ec2:RevokeSecurityGroupIngress",
          "ec2:DeleteSecurityGroup"
        ],
        "Resource": "*",
        "Condition": {
          "Null": {
            "aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
          }
        }
      },
      {
        "Effect": "Allow",
        "Action": [
          "elasticloadbalancing:CreateLoadBalancer",
          "elasticloadbalancing:CreateTargetGroup"
        ],
        "Resource": "*",
        "Condition": {
          "Null": {
            "aws:RequestTag/elbv2.k8s.aws/cluster": "false"
          }
        }
      },
      {
        "Effect": "Allow",
        "Action": [
          "elasticloadbalancing:CreateListener",
          "elasticloadbalancing:DeleteListener",
          "elasticloadbalancing:CreateRule",
          "elasticloadbalancing:DeleteRule"
        ],
        "Resource": "*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "elasticloadbalancing:AddTags",
          "elasticloadbalancing:RemoveTags"
        ],
        "Resource": [
          "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*",
          "arn:aws:elasticloadbalancing:*:*:loadbalancer/net/*/*",
          "arn:aws:elasticloadbalancing:*:*:loadbalancer/app/*/*"
        ],
        "Condition": {
          "Null": {
            "aws:RequestTag/elbv2.k8s.aws/cluster": "true",
            "aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
          }
        }
      },
      {
        "Effect": "Allow",
        "Action": [
          "elasticloadbalancing:AddTags",
          "elasticloadbalancing:RemoveTags"
        ],
        "Resource": [
          "arn:aws:elasticloadbalancing:*:*:listener/net/*/*/*",
          "arn:aws:elasticloadbalancing:*:*:listener/app/*/*/*",
          "arn:aws:elasticloadbalancing:*:*:listener-rule/net/*/*/*",
          "arn:aws:elasticloadbalancing:*:*:listener-rule/app/*/*/*"
        ]
      },
      {
        "Effect": "Allow",
        "Action": [
          "elasticloadbalancing:ModifyLoadBalancerAttributes",
          "elasticloadbalancing:SetIpAddressType",
          "elasticloadbalancing:SetSecurityGroups",
          "elasticloadbalancing:SetSubnets",
          "elasticloadbalancing:DeleteLoadBalancer",
          "elasticloadbalancing:ModifyTargetGroup",
          "elasticloadbalancing:ModifyTargetGroupAttributes",
          "elasticloadbalancing:DeleteTargetGroup"
        ],
        "Resource": "*",
        "Condition": {
          "Null": {
            "aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
          }
        }
      },
      {
        "Effect": "Allow",
        "Action": [
          "elasticloadbalancing:RegisterTargets",
          "elasticloadbalancing:DeregisterTargets"
        ],
        "Resource": "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "elasticloadbalancing:SetWebAcl",
          "elasticloadbalancing:ModifyListener",
          "elasticloadbalancing:AddListenerCertificates",
          "elasticloadbalancing:RemoveListenerCertificates",
          "elasticloadbalancing:ModifyRule"
        ],
        "Resource": "*"
      }
    ]
  })
}


# common
terraform {
  required_version = ">= 0.12.26"
  required_providers {
    aws = {
      "source"  = "hashicorp/aws"
      "version" = ">= 2.46.0"
    }
  }
  backend "s3" {
    bucket                  = "mikoto2000-terraform-state"
    key                     = "network.tf"
    workspace_key_prefix    = "eclipse-che.k8s.local"
    region                  = "ap-northeast-1"
  }
}

provider "aws" {
  region = "ap-northeast-1"
}

Deploy:

cd ${WORK_DIR}/terraform
terraform init
terraform plan
terraform apply

Output the Terraform results to a JSON file so that the information needed for the k8s cluster deployment can be passed to kOps.

mkdir ${WORK_DIR}/kops
terraform output -json > ${WORK_DIR}/kops/infra.json
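
Before moving on, it is worth sanity-checking the values kOps will consume:

cd ${WORK_DIR}/kops
jq -r '.cluster_name.value, .vpc_id.value, .firewall_endpoint_id.value' infra.json
jq -r '.subnets.value[]' infra.json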

Deploy the Kubernetes cluster with kOps

Generate the template

Create the manifest with the following command.

cd ${WORK_DIR}/kops

kops create cluster \
  --disable-subnet-tags \
  --vpc $(jq -r '.vpc_id.value' infra.json) \
  --subnets $(jq -r '.subnets.value[0]' infra.json) \
  --zones="ap-northeast-1a" \
  --dry-run \
  -o yaml > manifest-template.yaml

Based on this manifest, apply the following changes to create the template.

  • Add the policy for static IP assignment
  • Add the Cluster Autoscaler policy
  • Add the Cluster Autoscaler labels (cloudLabels)
  • Replace the cluster name with a placeholder
  • Replace IP address values with placeholders
  • Replace IDs with placeholders

${WORK_DIR}/kops/manifest-template.yaml

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: null
  name: {{.cluster_name.value}}
spec:
  DisableSubnetTags: true
  api:
    loadBalancer:
      class: Classic
      type: Public
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://{{.cluster_name.value}}-kops-state-store/{{.cluster_name.value}}
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-ap-northeast-1a
      name: a
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-ap-northeast-1a
      name: a
    memoryRequest: 100Mi
    name: events
  externalPolicies:
    node:
    - {{.AWSLoadBalancerControllerIAMPolicy_arn.value}}
    - {{.AmazonEKSClusterAutoscalerPolicy_arn.value}}
  iam:
    allowContainerRegistry: true
    legacy: false
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.20.6
  masterPublicName: api.{{.cluster_name.value}}
  networkCIDR: {{.vpc_cidr_block.value}}
  networkID: {{.vpc_id.value}}
  networking:
    kubenet: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: {{ index . "subnet_ap-northeast-1a_cidr_block" "value" }}
    id: {{ index . "subnet_ap-northeast-1a_id" "value" }}
    name: ap-northeast-1a
    type: Public
    zone: ap-northeast-1a
  topology:
    dns:
      type: Public
    masters: public
    nodes: public

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: {{.cluster_name.value}}
  name: master-ap-northeast-1a
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210415
  machineType: t3.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-ap-northeast-1a
  role: Master
  subnets:
  - ap-northeast-1a

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: {{.cluster_name.value}}
  name: nodes-ap-northeast-1a
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210415
  machineType: t3.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: nodes-ap-northeast-1a
  # See: [Prerequisites - Cluster Autoscaler - Amazon EKS](https://docs.aws.amazon.com/ja_jp/eks/latest/userguide/cluster-autoscaler.html#ca-prerequisites)
  cloudLabels:
    k8s.io/cluster-autoscaler/{{.cluster_name.value}}: owned
    k8s.io/cluster-autoscaler/enabled: "TRUE"
  role: Node
  subnets:
  - ap-northeast-1a

Feed the Terraform results into the template and deploy

cd ${WORK_DIR}/kops

kops toolbox template --name ${KOPS_CLUSTER_NAME} --values ./infra.json --template ./manifest-template.yaml --format-yaml > manifest.yaml

kops create -f manifest.yaml
kops create secret --name ${KOPS_CLUSTER_NAME} sshpublickey admin -i ~/.ssh/id_rsa.pub
kops update cluster --yes

kops export kubecfg --admin
kops validate cluster --wait 10m
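
Once validation passes, the master and node should both be Ready:

kubectl get nodes -o wide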

Install the AWS Load Balancer Controller

mkdir ${WORK_DIR}/lb
cd ${WORK_DIR}/lb

git clone --depth 1 https://github.com/aws/eks-charts
kubectl apply -k ./eks-charts/stable/aws-load-balancer-controller/crds/

helm repo add eks https://aws.github.io/eks-charts
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
    -n kube-system \
    --set clusterName=${KOPS_CLUSTER_NAME}
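
To confirm the controller came up (the deployment name below is the chart default for this release name):

kubectl -n kube-system rollout status deployment/aws-load-balancer-controller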

Install the Ingress controller

mkdir ${WORK_DIR}/ingress
cd ${WORK_DIR}/ingress
curl -o ingress.yaml -L https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.0/deploy/static/provider/aws/deploy.yaml

Add the settings for the static IP to the downloaded manifest.

...(snip)
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-0562e26282f012ea8
...(snip)

Deploy using the modified manifest.

kubectl apply -f ./ingress.yaml

Wait until the static IP is assigned to the Ingress service.

CHE_HOST_NAME="$(kubectl get services --namespace ingress-nginx -o jsonpath='{.items[].status.loadBalancer.ingress[0].hostname}')"
DOMAIN_IP=$(dig +short ${CHE_HOST_NAME})
while [ "${DOMAIN_IP}" == "" ] ; do
  echo "waiting..."
  sleep 10
  DOMAIN_IP=$(dig +short ${CHE_HOST_NAME})
done

DOMAIN="$(echo ${DOMAIN_IP} | sed -e "s/\./-/g").nip.io"

Install the Cluster Autoscaler

mkdir ${WORK_DIR}/cluster-autoscale
cd ${WORK_DIR}/cluster-autoscale
curl -L -O https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

Delete the following block from the downloaded manifest.

          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt #/etc/ssl/certs/ca-bundle.crt for Amazon Linux Worker Nodes
              readOnly: true
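
Note that, at the time of writing, the same manifest also contained a <YOUR CLUSTER NAME> placeholder in the --node-group-auto-discovery flag; it can be filled in with sed:

sed -i -e "s/<YOUR CLUSTER NAME>/${KOPS_CLUSTER_NAME}/g" cluster-autoscaler-autodiscover.yaml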

Apply.

kubectl apply -f ./cluster-autoscaler-autodiscover.yaml

Install Eclipse Che

cd ${WORK_DIR}

kubectl create namespace che

kubectl create secret generic self-signed-certificate --from-file=che_demo_ca/client/ca.crt -n che
kubectl create secret generic che-tls --from-file=ca.crt=che_demo_ca/client/ca.crt --from-file=tls.key=che_demo_ca/client/server-35-74-43-49.nip.io.key --from-file=tls.crt=che_demo_ca/client/server-35-74-43-49.nip.io.crt -n che

git clone --depth 1 https://github.com/eclipse-che/che-server
cd che-server/deploy/kubernetes/helm/che/

sed -i -e 's/serverStrategy: multi-host/serverStrategy: single-host/g' values.yaml
sed -i -e 's/  singleHostExposure: native/  singleHostExposure: gateway/g' values.yaml
sed -i -e 's/  deploy: false/  deploy: true/g' values.yaml

helm dependency update
helm upgrade --install che --namespace che --set global.ingressDomain=webide.${DOMAIN} --set global.multiuser="false" --set global.tls.enabled="false" ./
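
Wait for the Che server deployment (the same deployments/che restarted in the notes below) to become available before opening the dashboard:

kubectl -n che rollout status deployment/che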

Access http://webide.35-74-43-49.nip.io. OK.

From here, it should just be a matter of adding the domains needed for plugin downloads to the firewall allowlist.

For day-to-day operation, I suppose keeping the generated yaml files under version control would be the way to go?

Cleanup

cd ${WORK_DIR}/terraform
kops delete cluster --yes && terraform destroy -auto-approve
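
The pre-allocated Elastic IP and the two S3 buckets are not managed by Terraform, so release and remove them separately once they are no longer needed:

aws ec2 release-address --allocation-id eipalloc-0562e26282f012ea8
aws s3 rb s3://eclipse-che.k8s.local-kops-state-store --force
aws s3 rb s3://mikoto2000-terraform-state --force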

Other notes

Restarts

kubectl rollout restart -n che             deployments/che
kubectl rollout restart -n che             deployments/che-dashboard
kubectl rollout restart -n che             deployments/devfile-registry
kubectl rollout restart -n che             deployments/keycloak
kubectl rollout restart -n che             deployments/plugin-registry
kubectl rollout restart -n che             deployments/postgres
kubectl rollout restart -n ingress-nginx   deployments/ingress-nginx-controller
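
An equivalent loop that restarts every deployment in the che namespace in one go:

kubectl get deployments -n che -o name | xargs -I{} kubectl rollout restart -n che {}
kubectl rollout restart -n ingress-nginx deployments/ingress-nginx-controller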
