I Tried Creating an EKS Cluster with eksctl

Referring to the official documentation's guide on getting started with EKS using eksctl, I will create an EKS cluster.

Getting started with Amazon EKS – eksctl - Amazon EKS
Learn how to create your first Amazon EKS cluster with nodes using the eksctl command line tool.

As prerequisites, the following CLIs are required:

  • eksctl
  • kubectl

eksctl is the official CLI that can be used to create and manage Kubernetes clusters on EKS.

eksctl
The official CLI for Amazon EKS

According to the official site above, it is written in Go and uses CloudFormation.
How to install eksctl is also covered in the official documentation.

Installing or updating eksctl - Amazon EKS
Learn how to install or update the eksctl command line tool. This tool is used to create and work with Amazon EKS clusters.

I already had eksctl installed, but it was a slightly old version, so I will update it to the latest release following the official steps.

$ eksctl version
0.86.0

$ brew upgrade eksctl && { brew link --overwrite eksctl; } || { brew tap weaveworks/tap; brew install weaveworks/tap/eksctl; }

$ eksctl version
0.129.0

Next, kubectl: it is not EKS-specific, but the CLI used to communicate with the Kubernetes API server.
I already have this one installed as well, so I will only check its version.

Installing or updating kubectl - Amazon EKS
Learn how to install or update the kubectl command line tool. This tool is used to work with Kubernetes.

$ kubectl version --short --client

Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.25.4
Kustomize Version: v4.5.7
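
As the warning notes, the --short flag is deprecated and its output will become the default. On recent kubectl releases the flag can simply be dropped; the output format varies by version, so treat this as a minimal alternative:

$ kubectl version --client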

Now that the CLIs are in place, let's create the EKS cluster.

Creating the EKS Cluster

This time, I will create an EKS cluster with the EC2 (managed) node type.

$ eksctl create cluster --name test     
Error: checking AWS STS access – cannot get role ARN for current session: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: 103b86a3-bdd6-4c08-aa82-12bd8c555cea, api error InvalidClientTokenId: The security token included in the request is invalid.

It appears that eksctl uses the AWS CLI credentials, so let's get those prepared before running it again. My credentials can only be used after MFA authentication, so I complete MFA authentication with the AWS CLI and then re-run the command.
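
For reference, here is a minimal sketch of how temporary MFA-authenticated credentials can be issued before running eksctl. The account ID, MFA device ARN, and token code are placeholders for your own values.

# Request temporary credentials using an MFA token (placeholder ARN and code)
$ aws sts get-session-token \
    --serial-number arn:aws:iam::123456789012:mfa/your-user \
    --token-code 123456 \
    --duration-seconds 3600

# Export the AccessKeyId / SecretAccessKey / SessionToken values from the response,
# then confirm the MFA-authenticated session works
$ export AWS_ACCESS_KEY_ID=<AccessKeyId>
$ export AWS_SECRET_ACCESS_KEY=<SecretAccessKey>
$ export AWS_SESSION_TOKEN=<SessionToken>
$ aws sts get-caller-identity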

$ eksctl create cluster --name test -v 5
2023-02-16 22:56:22 [▶]  role ARN for the current session is "arn:aws:iam::xxx:user/${userName}"
2023-02-16 22:56:22 [ℹ]  eksctl version 0.129.0
2023-02-16 22:56:22 [ℹ]  using region ap-northeast-1
2023-02-16 22:56:22 [▶]  determining availability zones
2023-02-16 22:56:22 [ℹ]  setting availability zones to [ap-northeast-1c ap-northeast-1d ap-northeast-1a]
2023-02-16 22:56:22 [▶]  VPC CIDR (192.168.0.0/16) was divided into 8 subnets [192.168.0.0/19 192.168.32.0/19 192.168.64.0/19 192.168.96.0/19 192.168.128.0/19 192.168.160.0/19 192.168.192.0/19 192.168.224.0/19]
2023-02-16 22:56:22 [ℹ]  subnets for ap-northeast-1c - public:192.168.0.0/19 private:192.168.96.0/19
2023-02-16 22:56:22 [ℹ]  subnets for ap-northeast-1d - public:192.168.32.0/19 private:192.168.128.0/19
2023-02-16 22:56:22 [ℹ]  subnets for ap-northeast-1a - public:192.168.64.0/19 private:192.168.160.0/19
2023-02-16 22:56:22 [ℹ]  nodegroup "ng-9f08caa6" will use "" [AmazonLinux2/1.24]
2023-02-16 22:56:22 [ℹ]  using Kubernetes version 1.24
2023-02-16 22:56:22 [ℹ]  creating EKS cluster "test" in "ap-northeast-1" region with managed nodes
2023-02-16 22:56:22 [▶]  cfg.json = \
{
    "kind": "ClusterConfig",
    "apiVersion": "eksctl.io/v1alpha5",
    "metadata": {
        "name": "test",
        "region": "ap-northeast-1",
        "version": "1.24"
    },
    "kubernetesNetworkConfig": {
        "ipFamily": "IPv4"
    },
    "iam": {
        "withOIDC": false,
        "vpcResourceControllerPolicy": true
    },
    "vpc": {
        "cidr": "192.168.0.0/16",
        "subnets": {
            "private": {
                "ap-northeast-1a": {
                    "az": "ap-northeast-1a",
                    "cidr": "192.168.160.0/19"
                },
                "ap-northeast-1c": {
                    "az": "ap-northeast-1c",
                    "cidr": "192.168.96.0/19"
                },
                "ap-northeast-1d": {
                    "az": "ap-northeast-1d",
                    "cidr": "192.168.128.0/19"
                }
            },
            "public": {
                "ap-northeast-1a": {
                    "az": "ap-northeast-1a",
                    "cidr": "192.168.64.0/19"
                },
                "ap-northeast-1c": {
                    "az": "ap-northeast-1c",
                    "cidr": "192.168.0.0/19"
                },
                "ap-northeast-1d": {
                    "az": "ap-northeast-1d",
                    "cidr": "192.168.32.0/19"
                }
            }
        },
        "manageSharedNodeSecurityGroupRules": true,
        "autoAllocateIPv6": false,
        "nat": {
            "gateway": "Single"
        },
        "clusterEndpoints": {
            "privateAccess": false,
            "publicAccess": true
        }
    },
    "privateCluster": {
        "enabled": false,
        "skipEndpointCreation": false
    },
    "managedNodeGroups": [
        {
            "name": "ng-9f08caa6",
            "amiFamily": "AmazonLinux2",
            "instanceType": "m5.large",
            "desiredCapacity": 2,
            "minSize": 2,
            "maxSize": 2,
            "volumeSize": 80,
            "ssh": {
                "allow": false,
                "publicKeyPath": ""
            },
            "labels": {
                "alpha.eksctl.io/cluster-name": "test",
                "alpha.eksctl.io/nodegroup-name": "ng-9f08caa6"
            },
            "privateNetworking": false,
            "tags": {
                "alpha.eksctl.io/nodegroup-name": "ng-9f08caa6",
                "alpha.eksctl.io/nodegroup-type": "managed"
            },
            "iam": {
                "withAddonPolicies": {
                    "imageBuilder": false,
                    "autoScaler": false,
                    "externalDNS": false,
                    "certManager": false,
                    "appMesh": false,
                    "appMeshPreview": false,
                    "ebs": false,
                    "fsx": false,
                    "efs": false,
                    "awsLoadBalancerController": false,
                    "albIngress": false,
                    "xRay": false,
                    "cloudWatch": false
                }
            },
            "securityGroups": {
                "withShared": null,
                "withLocal": null
            },
            "volumeType": "gp3",
            "volumeIOPS": 3000,
            "volumeThroughput": 125,
            "disableIMDSv1": false,
            "disablePodIMDS": false,
            "instanceSelector": {},
            "releaseVersion": ""
        }
    ],
    "availabilityZones": [
        "ap-northeast-1c",
        "ap-northeast-1d",
        "ap-northeast-1a"
    ],
    "cloudWatch": {
        "clusterLogging": {}
    }
}

2023-02-16 22:56:22 [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2023-02-16 22:56:22 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=ap-northeast-1 --cluster=test'
2023-02-16 22:56:22 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "test" in "ap-northeast-1"
2023-02-16 22:56:22 [ℹ]  CloudWatch logging will not be enabled for cluster "test" in "ap-northeast-1"
2023-02-16 22:56:22 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=ap-northeast-1 --cluster=test'
2023-02-16 22:56:22 [ℹ]  
2 sequential tasks: { create cluster control plane "test", 
    2 sequential sub-tasks: { 
        wait for control plane to become ready,
        create managed nodegroup "ng-9f08caa6",
    } 
}
2023-02-16 22:56:22 [▶]  started task: create cluster control plane "test"
2023-02-16 22:56:22 [ℹ]  building cluster stack "eksctl-test-cluster"
2023-02-16 22:56:23 [▶]  CreateStackInput = &cloudformation.CreateStackInput{StackName:(*string)(0xc000c0f1a0), Capabilities:[]types.Capability{"CAPABILITY_IAM"}, ClientRequestToken:(*string)(nil), DisableRollback:(*bool)(0xc00015f480), EnableTerminationProtection:(*bool)(nil), NotificationARNs:[]string(nil), OnFailure:"", Parameters:[]types.Parameter(nil), ResourceTypes:[]string(nil), RoleARN:(*string)(nil), RollbackConfiguration:(*types.RollbackConfiguration)(nil), StackPolicyBody:(*string)(nil), StackPolicyURL:(*string)(nil), Tags:[]types.Tag{types.Tag{Key:(*string)(0xc000cff0e0), Value:(*string)(0xc000cff0f0), noSmithyDocumentSerde:document.NoSerde{}}, types.Tag{Key:(*string)(0xc000cff100), Value:(*string)(0xc000cff110), noSmithyDocumentSerde:document.NoSerde{}}, types.Tag{Key:(*string)(0xc000cff120), Value:(*string)(0xc000cff130), noSmithyDocumentSerde:document.NoSerde{}}, types.Tag{Key:(*string)(0xc00083c720), Value:(*string)(0xc00083c730), noSmithyDocumentSerde:document.NoSerde{}}}, TemplateBody:(*string)(0xc00083c740), TemplateURL:(*string)(nil), TimeoutInMinutes:(*int32)(nil), noSmithyDocumentSerde:document.NoSerde{}}
2023-02-16 22:56:23 [ℹ]  deploying stack "eksctl-test-cluster"
2023-02-16 22:56:53 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2023-02-16 22:57:23 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2023-02-16 22:58:24 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2023-02-16 22:59:24 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2023-02-16 23:00:24 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2023-02-16 23:01:24 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2023-02-16 23:02:24 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2023-02-16 23:03:24 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2023-02-16 23:04:25 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2023-02-16 23:05:25 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2023-02-16 23:06:25 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2023-02-16 23:07:25 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2023-02-16 23:08:25 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2023-02-16 23:09:25 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster"
2023-02-16 23:09:26 [▶]  processing stack outputs
2023-02-16 23:09:26 [▶]  completed task: create cluster control plane "test"
2023-02-16 23:09:26 [▶]  started task: 
    2 sequential sub-tasks: { 
        wait for control plane to become ready,
        create managed nodegroup "ng-9f08caa6",
    }

2023-02-16 23:09:26 [▶]  started task: wait for control plane to become ready
2023-02-16 23:09:26 [▶]  started task: wait for control plane to become ready
2023-02-16 23:11:27 [▶]  cluster = &types.Cluster{Arn:(*string)(0xc0005ac6f0), CertificateAuthority:(*types.Certificate)(0xc0005ac6d0), ClientRequestToken:(*string)(nil), ConnectorConfig:(*types.ConnectorConfigResponse)(nil), CreatedAt:time.Date(2023, time.February, 16, 13, 57, 16, 559000000, time.UTC), EncryptionConfig:[]types.EncryptionConfig(nil), Endpoint:(*string)(0xc0005ac660), Health:(*types.ClusterHealth)(nil), Id:(*string)(nil), Identity:(*types.Identity)(0xc0005ac630), KubernetesNetworkConfig:(*types.KubernetesNetworkConfigResponse)(0xc000a42e40), Logging:(*types.Logging)(0xc0005f0160), Name:(*string)(0xc0005ac720), OutpostConfig:(*types.OutpostConfigResponse)(nil), PlatformVersion:(*string)(0xc0005ac710), ResourcesVpcConfig:(*types.VpcConfigResponse)(0xc000bd6150), RoleArn:(*string)(0xc0005ac610), Status:"ACTIVE", Tags:map[string]string{"Name":"eksctl-test-cluster/ControlPlane", "alpha.eksctl.io/cluster-name":"test", "alpha.eksctl.io/cluster-oidc-enabled":"false", "alpha.eksctl.io/eksctl-version":"0.129.0", "aws:cloudformation:logical-id":"ControlPlane", "aws:cloudformation:stack-id":"arn:aws:cloudformation:ap-northeast-1:xxx:stack/eksctl-test-cluster/b2a63230-ae01-11ed-afcb-0e2d3d43190f", "aws:cloudformation:stack-name":"eksctl-test-cluster", "eksctl.cluster.k8s.io/v1alpha1/cluster-name":"test"}, Version:(*string)(0xc0005ac670), noSmithyDocumentSerde:document.NoSerde{}}
2023-02-16 23:11:27 [▶]  completed task: wait for control plane to become ready
2023-02-16 23:11:27 [▶]  completed task: wait for control plane to become ready
2023-02-16 23:11:27 [▶]  started task: create managed nodegroup "ng-9f08caa6"
2023-02-16 23:11:27 [▶]  waiting for 1 parallel tasks to complete
2023-02-16 23:11:27 [▶]  started task: create managed nodegroup "ng-9f08caa6"
2023-02-16 23:11:27 [▶]  waiting for 1 parallel tasks to complete
2023-02-16 23:11:27 [▶]  started task: create managed nodegroup "ng-9f08caa6"
2023-02-16 23:11:27 [▶]  started task: create managed nodegroup "ng-9f08caa6"
2023-02-16 23:11:28 [ℹ]  building managed nodegroup stack "eksctl-test-nodegroup-ng-9f08caa6"
2023-02-16 23:11:28 [▶]  CreateStackInput = &cloudformation.CreateStackInput{StackName:(*string)(0xc000c0e7b0), Capabilities:[]types.Capability{"CAPABILITY_IAM"}, ClientRequestToken:(*string)(nil), DisableRollback:(*bool)(0xc000c23f20), EnableTerminationProtection:(*bool)(nil), NotificationARNs:[]string(nil), OnFailure:"", Parameters:[]types.Parameter(nil), ResourceTypes:[]string(nil), RoleARN:(*string)(nil), RollbackConfiguration:(*types.RollbackConfiguration)(nil), StackPolicyBody:(*string)(nil), StackPolicyURL:(*string)(nil), Tags:[]types.Tag{types.Tag{Key:(*string)(0xc000cff0e0), Value:(*string)(0xc000cff0f0), noSmithyDocumentSerde:document.NoSerde{}}, types.Tag{Key:(*string)(0xc000cff100), Value:(*string)(0xc000cff110), noSmithyDocumentSerde:document.NoSerde{}}, types.Tag{Key:(*string)(0xc000cff120), Value:(*string)(0xc000cff130), noSmithyDocumentSerde:document.NoSerde{}}, types.Tag{Key:(*string)(0xc0009a2350), Value:(*string)(0xc0009a2370), noSmithyDocumentSerde:document.NoSerde{}}, types.Tag{Key:(*string)(0xc0009a2380), Value:(*string)(0xc0009a23a0), noSmithyDocumentSerde:document.NoSerde{}}}, TemplateBody:(*string)(0xc0009a23b0), TemplateURL:(*string)(nil), TimeoutInMinutes:(*int32)(nil), noSmithyDocumentSerde:document.NoSerde{}}
2023-02-16 23:11:28 [ℹ]  deploying stack "eksctl-test-nodegroup-ng-9f08caa6"
2023-02-16 23:11:28 [ℹ]  waiting for CloudFormation stack "eksctl-test-nodegroup-ng-9f08caa6"
2023-02-16 23:11:58 [ℹ]  waiting for CloudFormation stack "eksctl-test-nodegroup-ng-9f08caa6"
2023-02-16 23:12:34 [ℹ]  waiting for CloudFormation stack "eksctl-test-nodegroup-ng-9f08caa6"
2023-02-16 23:14:20 [ℹ]  waiting for CloudFormation stack "eksctl-test-nodegroup-ng-9f08caa6"
2023-02-16 23:15:29 [ℹ]  waiting for CloudFormation stack "eksctl-test-nodegroup-ng-9f08caa6"
2023-02-16 23:15:29 [▶]  processing stack outputs
2023-02-16 23:15:29 [▶]  completed task: create managed nodegroup "ng-9f08caa6"
2023-02-16 23:15:29 [▶]  completed task: create managed nodegroup "ng-9f08caa6"
2023-02-16 23:15:29 [▶]  completed task: create managed nodegroup "ng-9f08caa6"
2023-02-16 23:15:29 [▶]  completed task: create managed nodegroup "ng-9f08caa6"
2023-02-16 23:15:29 [▶]  completed task: 
    2 sequential sub-tasks: { 
        wait for control plane to become ready,
        create managed nodegroup "ng-9f08caa6",
    }

2023-02-16 23:15:29 [ℹ]  waiting for the control plane to become ready
2023-02-16 23:15:29 [▶]  merging kubeconfig files
2023-02-16 23:15:29 [▶]  setting current-context to ${userName}@test.ap-northeast-1.eksctl.io
2023-02-16 23:15:29 [✔]  saved kubeconfig as "/Users/xxx/.kube/config"
2023-02-16 23:15:29 [ℹ]  no tasks
2023-02-16 23:15:29 [▶]  no actual tasks
2023-02-16 23:15:29 [✔]  all EKS cluster resources for "test" have been created
2023-02-16 23:15:30 [ℹ]  nodegroup "ng-9f08caa6" has 2 node(s)
2023-02-16 23:15:30 [ℹ]  node "ip-192-168-10-124.ap-northeast-1.compute.internal" is ready
2023-02-16 23:15:30 [ℹ]  node "ip-192-168-90-206.ap-northeast-1.compute.internal" is ready
2023-02-16 23:15:30 [ℹ]  waiting for at least 2 node(s) to become ready in "ng-9f08caa6"
2023-02-16 23:15:30 [▶]  event = watch.Event{Type:"ADDED", Object:(*v1.Node)(0xc000d19800)}
2023-02-16 23:15:30 [▶]  node "ip-192-168-90-206.ap-northeast-1.compute.internal" is ready in "ng-9f08caa6"
2023-02-16 23:15:30 [▶]  event = watch.Event{Type:"ADDED", Object:(*v1.Node)(0xc000d19b00)}
2023-02-16 23:15:30 [▶]  node "ip-192-168-10-124.ap-northeast-1.compute.internal" is ready in "ng-9f08caa6"
2023-02-16 23:15:30 [ℹ]  nodegroup "ng-9f08caa6" has 2 node(s)
2023-02-16 23:15:30 [ℹ]  node "ip-192-168-10-124.ap-northeast-1.compute.internal" is ready
2023-02-16 23:15:30 [ℹ]  node "ip-192-168-90-206.ap-northeast-1.compute.internal" is ready
2023-02-16 23:15:30 [▶]  kubectl: "/usr/local/bin/kubectl"
2023-02-16 23:15:30 [▶]  kubectl version: v1.25.4
2023-02-16 23:15:30 [▶]  found authenticator: aws-iam-authenticator
2023-02-16 23:15:30 [ℹ]  kubectl command should work with "/Users/xxx/.kube/config", try 'kubectl get nodes'
2023-02-16 23:15:30 [✔]  EKS cluster "test" in "ap-northeast-1" region is ready
2023-02-16 23:15:30 [▶]  cfg.json = \
{
    "kind": "ClusterConfig",
    "apiVersion": "eksctl.io/v1alpha5",
    "metadata": {
        "name": "test",
        "region": "ap-northeast-1",
        "version": "1.24"
    },
    "kubernetesNetworkConfig": {
        "serviceIPv4CIDR": "10.100.0.0/16"
    },
    "iam": {
        "serviceRoleARN": "arn:aws:iam::xxx:role/eksctl-test-cluster-ServiceRole-1V53A7KZCPZG9",
        "withOIDC": false,
        "vpcResourceControllerPolicy": true
    },
    "vpc": {
        "id": "vpc-0a3fbb28fd19088af",
        "cidr": "192.168.0.0/16",
        "securityGroup": "sg-041917e3c8c195a1d",
        "subnets": {
            "private": {
                "ap-northeast-1a": {
                    "id": "subnet-03242ec39967d5461",
                    "az": "ap-northeast-1a",
                    "cidr": "192.168.160.0/19"
                },
                "ap-northeast-1c": {
                    "id": "subnet-0aa98d6171fb6550f",
                    "az": "ap-northeast-1c",
                    "cidr": "192.168.96.0/19"
                },
                "ap-northeast-1d": {
                    "id": "subnet-0513f6f1892bf5dfa",
                    "az": "ap-northeast-1d",
                    "cidr": "192.168.128.0/19"
                }
            },
            "public": {
                "ap-northeast-1a": {
                    "id": "subnet-0700995238c215a8b",
                    "az": "ap-northeast-1a",
                    "cidr": "192.168.64.0/19"
                },
                "ap-northeast-1c": {
                    "id": "subnet-02e6890024b6a0e03",
                    "az": "ap-northeast-1c",
                    "cidr": "192.168.0.0/19"
                },
                "ap-northeast-1d": {
                    "id": "subnet-08a69393606b2644d",
                    "az": "ap-northeast-1d",
                    "cidr": "192.168.32.0/19"
                }
            }
        },
        "sharedNodeSecurityGroup": "sg-09037fda38ed8a42e",
        "manageSharedNodeSecurityGroupRules": true,
        "autoAllocateIPv6": false,
        "nat": {
            "gateway": "Single"
        },
        "clusterEndpoints": {
            "privateAccess": false,
            "publicAccess": true
        }
    },
    "privateCluster": {
        "enabled": false,
        "skipEndpointCreation": false
    },
    "managedNodeGroups": [
        {
            "name": "ng-9f08caa6",
            "amiFamily": "AmazonLinux2",
            "instanceType": "m5.large",
            "desiredCapacity": 2,
            "minSize": 2,
            "maxSize": 2,
            "volumeSize": 80,
            "ssh": {
                "allow": false,
                "publicKeyPath": ""
            },
            "labels": {
                "alpha.eksctl.io/cluster-name": "test",
                "alpha.eksctl.io/nodegroup-name": "ng-9f08caa6"
            },
            "privateNetworking": false,
            "tags": {
                "alpha.eksctl.io/nodegroup-name": "ng-9f08caa6",
                "alpha.eksctl.io/nodegroup-type": "managed"
            },
            "iam": {
                "withAddonPolicies": {
                    "imageBuilder": false,
                    "autoScaler": false,
                    "externalDNS": false,
                    "certManager": false,
                    "appMesh": false,
                    "appMeshPreview": false,
                    "ebs": false,
                    "fsx": false,
                    "efs": false,
                    "awsLoadBalancerController": false,
                    "albIngress": false,
                    "xRay": false,
                    "cloudWatch": false
                }
            },
            "securityGroups": {
                "withShared": null,
                "withLocal": null
            },
            "volumeType": "gp3",
            "volumeIOPS": 3000,
            "volumeThroughput": 125,
            "disableIMDSv1": false,
            "disablePodIMDS": false,
            "instanceSelector": {},
            "releaseVersion": ""
        }
    ],
    "availabilityZones": [
        "ap-northeast-1c",
        "ap-northeast-1d",
        "ap-northeast-1a"
    ],
    "cloudWatch": {
        "clusterLogging": {}
    }
}

Because the command was run with the -v flag, the output includes detailed information.

$ eksctl help

-v, --verbose int set log level, use 0 to silence, 4 for debugging and 5 for debugging with AWS debug logging (default 3)

From a cost perspective, the point that caught my attention is that the default is a managed node group with two m5.large instances.
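
If cost is a concern, the instance type and node count can be overridden at creation time. A minimal sketch with smaller, assumed values (t3.small and a single node):

# Create a cheaper cluster by specifying the node type and count explicitly
$ eksctl create cluster --name test \
    --node-type t3.small \
    --nodes 1 --nodes-min 1 --nodes-max 1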

On the functionality side, the other notable defaults are that access to the Kubernetes API endpoint is public with no source IP restrictions, and that control plane log output is disabled.
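
Both can be adjusted afterwards with eksctl utils subcommands; the log types and source CIDR below are only examples:

# Enable control plane logging (also suggested in the create output above)
$ eksctl utils update-cluster-logging --enable-types all \
    --region ap-northeast-1 --cluster test --approve

# Restrict the public API endpoint to specific source CIDRs (example CIDR)
$ eksctl utils set-public-access-cidrs --cluster test \
    --region ap-northeast-1 203.0.113.0/24 --approve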

Two CloudFormation stacks have been created.
In terms of order, the EKS cluster (including its VPC) was created first, followed by the node group. You can also configure eksctl to use an existing VPC (see the sketch after the list below).

  • eksctl-test-cluster
    • Description: EKS cluster (dedicated VPC: true, dedicated IAM: true) [created and managed by eksctl]
    • Tags
      • alpha.eksctl.io/cluster-name: test
      • alpha.eksctl.io/cluster-oidc-enabled: false
      • alpha.eksctl.io/eksctl-version: 0.129.0
      • eksctl.cluster.k8s.io/v1alpha1/cluster-name: test
    • Resources
      • VPC
        • Subnets x 6
        • Security groups x 2
        • Route tables x 4
        • Internet Gateway
        • NAT Gateway
      • EKS cluster
        • IAM role
  • eksctl-test-nodegroup-ng-9f08caa6
    • Description: EKS Managed Nodes (SSH access: false) [created by eksctl]
    • Tags
      • alpha.eksctl.io/cluster-name: test
      • alpha.eksctl.io/eksctl-version: 0.129.0
      • alpha.eksctl.io/nodegroup-name: ng-9f08caa6
      • alpha.eksctl.io/nodegroup-type: managed
      • eksctl.cluster.k8s.io/v1alpha1/cluster-name: test
    • Resources
      • Launch template
      • Node group
      • IAM role
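
As a sketch of the existing-VPC case mentioned above: instead of CLI flags, you can pass a ClusterConfig file that pins the subnets to existing IDs. The subnet IDs below are placeholders, and the rest of the file mirrors what eksctl generated for this cluster.

# Write a ClusterConfig that references existing subnets (placeholder IDs)
$ cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: test
  region: ap-northeast-1
vpc:
  subnets:
    public:
      ap-northeast-1a: { id: subnet-aaaaaaaaaaaaaaaaa }
      ap-northeast-1c: { id: subnet-bbbbbbbbbbbbbbbbb }
    private:
      ap-northeast-1a: { id: subnet-ccccccccccccccccc }
      ap-northeast-1c: { id: subnet-ddddddddddddddddd }
managedNodeGroups:
  - name: ng-existing-vpc
    instanceType: m5.large
    desiredCapacity: 2
EOF

# Create the cluster from the config file instead of CLI flags
$ eksctl create cluster -f cluster.yaml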

In addition, the ~/.kube/config file has been updated.
Since kubectl uses the settings in the kubeconfig file to communicate with the cluster's API server, creating the EKS cluster also prepares everything needed to manage it with kubectl.
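
As a quick check, and in case the kubeconfig entry ever needs to be regenerated, something like the following should work (the cluster name and region match this walkthrough):

# Show which context kubectl is currently pointing at
$ kubectl config current-context

# Regenerate the kubeconfig entry, via eksctl or the AWS CLI
$ eksctl utils write-kubeconfig --cluster test --region ap-northeast-1
$ aws eks update-kubeconfig --name test --region ap-northeast-1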

Viewing Kubernetes Resources

Display the cluster nodes.

$ kubectl get nodes -o wide
NAME                                                STATUS   ROLES    AGE   VERSION               INTERNAL-IP      EXTERNAL-IP     OS-IMAGE         KERNEL-VERSION                 CONTAINER-RUNTIME
ip-192-168-10-124.ap-northeast-1.compute.internal   Ready    <none>   35m   v1.24.9-eks-49d8fe8   192.168.10.124   35.78.198.116   Amazon Linux 2   5.4.228-131.415.amzn2.x86_64   containerd://1.6.6
ip-192-168-90-206.ap-northeast-1.compute.internal   Ready    <none>   35m   v1.24.9-eks-49d8fe8   192.168.90.206   43.207.2.116    Amazon Linux 2   5.4.228-131.415.amzn2.x86_64   containerd://1.6.6

Display the pods.

$ kubectl get pods -A -o wide
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE   IP               NODE                                                NOMINATED NODE   READINESS GATES
kube-system   aws-node-c8x6z             1/1     Running   0          39m   192.168.90.206   ip-192-168-90-206.ap-northeast-1.compute.internal   <none>           <none>
kube-system   aws-node-jdxg6             1/1     Running   0          39m   192.168.10.124   ip-192-168-10-124.ap-northeast-1.compute.internal   <none>           <none>
kube-system   coredns-5fc8d4cdcf-dg2q6   1/1     Running   0          48m   192.168.95.58    ip-192-168-90-206.ap-northeast-1.compute.internal   <none>           <none>
kube-system   coredns-5fc8d4cdcf-qzkxx   1/1     Running   0          48m   192.168.80.209   ip-192-168-90-206.ap-northeast-1.compute.internal   <none>           <none>
kube-system   kube-proxy-b695h           1/1     Running   0          39m   192.168.90.206   ip-192-168-90-206.ap-northeast-1.compute.internal   <none>           <none>
kube-system   kube-proxy-stq68           1/1     Running   0          39m   192.168.10.124   ip-192-168-10-124.ap-northeast-1.compute.internal   <none>           <none>
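
These pods are the default EKS add-ons: the VPC CNI plugin (aws-node) and kube-proxy run as DaemonSets, while CoreDNS runs as a Deployment. One way to see them grouped by workload type:

$ kubectl get daemonsets,deployments -n kube-system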

Deleting the Cluster and Nodes

$ eksctl delete cluster --name test
2023-02-16 23:54:13 [ℹ]  deleting EKS cluster "test"
2023-02-16 23:54:14 [ℹ]  will drain 0 unmanaged nodegroup(s) in cluster "test"
2023-02-16 23:54:14 [ℹ]  starting parallel draining, max in-flight of 1
2023-02-16 23:54:14 [ℹ]  deleted 0 Fargate profile(s)
2023-02-16 23:54:15 [✔]  kubeconfig has been updated
2023-02-16 23:54:15 [ℹ]  cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
2023-02-16 23:54:16 [ℹ]  
2 sequential tasks: { delete nodegroup "ng-9f08caa6", delete cluster control plane "test" [async] 
}
2023-02-16 23:54:16 [ℹ]  will delete stack "eksctl-test-nodegroup-ng-9f08caa6"
2023-02-16 23:54:16 [ℹ]  waiting for stack "eksctl-test-nodegroup-ng-9f08caa6" to get deleted
2023-02-16 23:54:16 [ℹ]  waiting for CloudFormation stack "eksctl-test-nodegroup-ng-9f08caa6"
2023-02-16 23:54:47 [ℹ]  waiting for CloudFormation stack "eksctl-test-nodegroup-ng-9f08caa6"
2023-02-16 23:55:31 [ℹ]  waiting for CloudFormation stack "eksctl-test-nodegroup-ng-9f08caa6"
2023-02-16 23:56:28 [ℹ]  waiting for CloudFormation stack "eksctl-test-nodegroup-ng-9f08caa6"
2023-02-16 23:57:51 [ℹ]  waiting for CloudFormation stack "eksctl-test-nodegroup-ng-9f08caa6"
2023-02-16 23:58:38 [ℹ]  waiting for CloudFormation stack "eksctl-test-nodegroup-ng-9f08caa6"
2023-02-16 23:59:52 [ℹ]  waiting for CloudFormation stack "eksctl-test-nodegroup-ng-9f08caa6"
2023-02-17 00:01:26 [ℹ]  waiting for CloudFormation stack "eksctl-test-nodegroup-ng-9f08caa6"
2023-02-17 00:02:49 [ℹ]  waiting for CloudFormation stack "eksctl-test-nodegroup-ng-9f08caa6"
2023-02-17 00:03:56 [ℹ]  waiting for CloudFormation stack "eksctl-test-nodegroup-ng-9f08caa6"
2023-02-17 00:03:57 [ℹ]  will delete stack "eksctl-test-cluster"
2023-02-17 00:03:57 [✔]  all cluster resources were deleted

The information for the deleted EKS cluster had also been removed from the kubeconfig.
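
Listing the contexts again is a simple way to confirm this; the test cluster's context should no longer appear:

$ kubectl config get-contexts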
