Add kubespray 2.24

contrib/terraform/aws/.gitignore (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
*.tfstate*
.terraform.lock.hcl
.terraform

contrib/terraform/aws/README.md (new file, 124 lines)
@@ -0,0 +1,124 @@
# Kubernetes on AWS with Terraform

## Overview

This project will create:

- VPC with Public and Private Subnets in # Availability Zones
- Bastion Hosts and NAT Gateways in the Public Subnet
- A dynamic number of masters, etcd, and worker nodes in the Private Subnet
  - evenly distributed over the # of Availability Zones
- AWS ELB in the Public Subnet for accessing the Kubernetes API from the internet

## Requirements

- Terraform 0.12.0 or newer

## How to Use

- Export the variables for your AWS credentials or edit `credentials.tfvars`:

```commandline
export TF_VAR_AWS_ACCESS_KEY_ID="www"
export TF_VAR_AWS_SECRET_ACCESS_KEY="xxx"
export TF_VAR_AWS_SSH_KEY_NAME="yyy"
export TF_VAR_AWS_DEFAULT_REGION="zzz"
```

- Update `contrib/terraform/aws/terraform.tfvars` with your data. By default, the Terraform scripts use Ubuntu 18.04 LTS (Bionic) as the base image. If you want to change this behaviour, see the note "Using other distrib than Ubuntu" below.
- Create an AWS EC2 SSH Key
- Run with `terraform apply --var-file="credentials.tfvars"` or `terraform apply`, depending on whether you exported your AWS credentials

Example:

```commandline
terraform apply -var-file=credentials.tfvars
```

- Terraform automatically creates an Ansible inventory file called `hosts` for the created infrastructure in the `inventory` directory
- Ansible will automatically generate an SSH config file for your bastion hosts. To connect to hosts over SSH through the bastion host, use the generated `ssh-bastion.conf`. Ansible automatically detects the bastion and changes `ssh_args`

```commandline
ssh -F ./ssh-bastion.conf user@$ip
```

- Once the infrastructure is created, you can run the kubespray playbooks and supply `inventory/hosts` with the `-i` flag.

Example (this one assumes you are using Ubuntu):

```commandline
ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=ubuntu -b --become-user=root --flush-cache
```

## Using other distrib than Ubuntu

To leverage a Linux distribution other than Ubuntu 18.04 (Bionic) LTS for your Terraform configurations, you can adjust the AMI search filters within the `data "aws_ami" "distro"` block by utilizing variables in your `terraform.tfvars` file. This approach ensures a flexible configuration that adapts to various Linux distributions without directly modifying the core Terraform files.

### Example Usages

- **Debian Jessie**: To configure the usage of Debian Jessie, insert the subsequent lines into your `terraform.tfvars`:

```hcl
ami_name_pattern = "debian-jessie-amd64-hvm-*"
ami_owners       = ["379101102735"]
```

- **Ubuntu 16.04**: To utilize Ubuntu 16.04 instead, apply the following configuration in your `terraform.tfvars`:

```hcl
ami_name_pattern = "ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*"
ami_owners       = ["099720109477"]
```

- **CentOS 7**: For employing CentOS 7, incorporate these lines into your `terraform.tfvars`:

```hcl
ami_name_pattern = "dcos-centos7-*"
ami_owners       = ["688023202711"]
```

## Connecting to Kubernetes

You can use the following set of commands to get the kubeconfig file from your newly created cluster. Before running the commands, make sure you are in the project's root folder.

```commandline
# Get the controller's IP address.
CONTROLLER_HOST_NAME=$(cat ./inventory/hosts | grep "\[kube_control_plane\]" -A 1 | tail -n 1)
CONTROLLER_IP=$(cat ./inventory/hosts | grep $CONTROLLER_HOST_NAME | grep ansible_host | cut -d'=' -f2)

# Get the hostname of the load balancer.
LB_HOST=$(cat inventory/hosts | grep apiserver_loadbalancer_domain_name | cut -d'"' -f2)

# Get the controller's SSH fingerprint.
ssh-keygen -R $CONTROLLER_IP > /dev/null 2>&1
ssh-keyscan -H $CONTROLLER_IP >> ~/.ssh/known_hosts 2>/dev/null

# Get the kubeconfig from the controller.
mkdir -p ~/.kube
ssh -F ssh-bastion.conf centos@$CONTROLLER_IP "sudo chmod 644 /etc/kubernetes/admin.conf"
scp -F ssh-bastion.conf centos@$CONTROLLER_IP:/etc/kubernetes/admin.conf ~/.kube/config
sed -i "s^server:.*^server: https://$LB_HOST:6443^" ~/.kube/config
kubectl get nodes
```

## Troubleshooting

### Remaining AWS IAM Instance Profile

If the cluster was destroyed without using Terraform, it is possible that the AWS IAM Instance Profiles still remain. To delete them you can use the AWS CLI with the following command:

```commandline
aws iam delete-instance-profile --region <region_name> --instance-profile-name <profile_name>
```
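
If you are not sure which profiles were left behind, you can list them first. A minimal sketch, assuming a configured AWS CLI (the `--query` expression only extracts the profile names):

```commandline
# List the names of all IAM instance profiles in the account.
aws iam list-instance-profiles --query "InstanceProfiles[].InstanceProfileName" --output text
```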

### Ansible Inventory doesn't get created

It could happen that Terraform doesn't create an Ansible Inventory file automatically. If this is the case, copy the output after `inventory=`, create a file named `hosts` in the directory `inventory`, and paste the inventory into that file.
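
Alternatively, because the rendered inventory is also exposed as the Terraform output `inventory` (see `output.tf`), you can usually regenerate the file directly. A minimal sketch, assuming Terraform 0.14 or newer for the `-raw` flag:

```commandline
# Re-render the inventory from Terraform state into the expected location.
terraform output -raw inventory > ./inventory/hosts
```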

## Architecture

Pictured is an AWS Infrastructure created with this Terraform project distributed over two Availability Zones.

![AWS Infrastructure with Terraform](docs/aws_kubespray.png)

contrib/terraform/aws/create-infrastructure.tf (new file, 179 lines)
@@ -0,0 +1,179 @@
terraform {
  required_version = ">= 0.12.0"
}

provider "aws" {
  access_key = var.AWS_ACCESS_KEY_ID
  secret_key = var.AWS_SECRET_ACCESS_KEY
  region     = var.AWS_DEFAULT_REGION
}

data "aws_availability_zones" "available" {}

/*
* Calling modules that create the initial AWS VPC / AWS ELB
* and AWS IAM Roles for Kubernetes Deployment
*/

module "aws-vpc" {
  source = "./modules/vpc"

  aws_cluster_name         = var.aws_cluster_name
  aws_vpc_cidr_block       = var.aws_vpc_cidr_block
  aws_avail_zones          = data.aws_availability_zones.available.names
  aws_cidr_subnets_private = var.aws_cidr_subnets_private
  aws_cidr_subnets_public  = var.aws_cidr_subnets_public
  default_tags             = var.default_tags
}

module "aws-nlb" {
  source = "./modules/nlb"

  aws_cluster_name      = var.aws_cluster_name
  aws_vpc_id            = module.aws-vpc.aws_vpc_id
  aws_avail_zones       = data.aws_availability_zones.available.names
  aws_subnet_ids_public = module.aws-vpc.aws_subnet_ids_public
  aws_nlb_api_port      = var.aws_nlb_api_port
  k8s_secure_api_port   = var.k8s_secure_api_port
  default_tags          = var.default_tags
}

module "aws-iam" {
  source = "./modules/iam"

  aws_cluster_name = var.aws_cluster_name
}

/*
* Create Bastion Instances in AWS
*
*/

resource "aws_instance" "bastion-server" {
  ami                         = data.aws_ami.distro.id
  instance_type               = var.aws_bastion_size
  count                       = var.aws_bastion_num
  associate_public_ip_address = true
  subnet_id                   = element(module.aws-vpc.aws_subnet_ids_public, count.index)

  vpc_security_group_ids = module.aws-vpc.aws_security_group

  key_name = var.AWS_SSH_KEY_NAME

  tags = merge(var.default_tags, tomap({
    Name    = "kubernetes-${var.aws_cluster_name}-bastion-${count.index}"
    Cluster = var.aws_cluster_name
    Role    = "bastion-${var.aws_cluster_name}-${count.index}"
  }))
}

/*
* Create K8s Master and worker nodes and etcd instances
*
*/

resource "aws_instance" "k8s-master" {
  ami           = data.aws_ami.distro.id
  instance_type = var.aws_kube_master_size

  count = var.aws_kube_master_num

  subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)

  vpc_security_group_ids = module.aws-vpc.aws_security_group

  root_block_device {
    volume_size = var.aws_kube_master_disk_size
  }

  iam_instance_profile = module.aws-iam.kube_control_plane-profile
  key_name             = var.AWS_SSH_KEY_NAME

  tags = merge(var.default_tags, tomap({
    Name                                            = "kubernetes-${var.aws_cluster_name}-master${count.index}"
    "kubernetes.io/cluster/${var.aws_cluster_name}" = "member"
    Role                                            = "master"
  }))
}

resource "aws_lb_target_group_attachment" "tg-attach_master_nodes" {
  count            = var.aws_kube_master_num
  target_group_arn = module.aws-nlb.aws_nlb_api_tg_arn
  target_id        = element(aws_instance.k8s-master.*.private_ip, count.index)
}

resource "aws_instance" "k8s-etcd" {
  ami           = data.aws_ami.distro.id
  instance_type = var.aws_etcd_size

  count = var.aws_etcd_num

  subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)

  vpc_security_group_ids = module.aws-vpc.aws_security_group

  root_block_device {
    volume_size = var.aws_etcd_disk_size
  }

  key_name = var.AWS_SSH_KEY_NAME

  tags = merge(var.default_tags, tomap({
    Name                                            = "kubernetes-${var.aws_cluster_name}-etcd${count.index}"
    "kubernetes.io/cluster/${var.aws_cluster_name}" = "member"
    Role                                            = "etcd"
  }))
}

resource "aws_instance" "k8s-worker" {
  ami           = data.aws_ami.distro.id
  instance_type = var.aws_kube_worker_size

  count = var.aws_kube_worker_num

  subnet_id = element(module.aws-vpc.aws_subnet_ids_private, count.index)

  vpc_security_group_ids = module.aws-vpc.aws_security_group

  root_block_device {
    volume_size = var.aws_kube_worker_disk_size
  }

  iam_instance_profile = module.aws-iam.kube-worker-profile
  key_name             = var.AWS_SSH_KEY_NAME

  tags = merge(var.default_tags, tomap({
    Name                                            = "kubernetes-${var.aws_cluster_name}-worker${count.index}"
    "kubernetes.io/cluster/${var.aws_cluster_name}" = "member"
    Role                                            = "worker"
  }))
}

/*
* Create Kubespray Inventory File
*
*/
data "template_file" "inventory" {
  template = file("${path.module}/templates/inventory.tpl")

  vars = {
    public_ip_address_bastion = join("\n", formatlist("bastion ansible_host=%s", aws_instance.bastion-server.*.public_ip))
    connection_strings_master = join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-master.*.private_dns, aws_instance.k8s-master.*.private_ip))
    connection_strings_node   = join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-worker.*.private_dns, aws_instance.k8s-worker.*.private_ip))
    list_master               = join("\n", aws_instance.k8s-master.*.private_dns)
    list_node                 = join("\n", aws_instance.k8s-worker.*.private_dns)
    connection_strings_etcd   = join("\n", formatlist("%s ansible_host=%s", aws_instance.k8s-etcd.*.private_dns, aws_instance.k8s-etcd.*.private_ip))
    list_etcd                 = join("\n", ((var.aws_etcd_num > 0) ? (aws_instance.k8s-etcd.*.private_dns) : (aws_instance.k8s-master.*.private_dns)))
    nlb_api_fqdn              = "apiserver_loadbalancer_domain_name=\"${module.aws-nlb.aws_nlb_api_fqdn}\""
  }
}

resource "null_resource" "inventories" {
  provisioner "local-exec" {
    command = "echo '${data.template_file.inventory.rendered}' > ${var.inventory_file}"
  }

  triggers = {
    template = data.template_file.inventory.rendered
  }
}

contrib/terraform/aws/credentials.tfvars.example (new file, 8 lines)
@@ -0,0 +1,8 @@
#AWS Access Key
AWS_ACCESS_KEY_ID = ""
#AWS Secret Key
AWS_SECRET_ACCESS_KEY = ""
#EC2 SSH Key Name
AWS_SSH_KEY_NAME = ""
#AWS Region
AWS_DEFAULT_REGION = "eu-central-1"

contrib/terraform/aws/docs/aws_kubespray.png (new binary file, 114 KiB)
Binary file not shown.

contrib/terraform/aws/modules/iam/main.tf (new file, 141 lines)
@@ -0,0 +1,141 @@
#Add AWS Roles for Kubernetes

resource "aws_iam_role" "kube_control_plane" {
  name = "kubernetes-${var.aws_cluster_name}-master"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      }
    }
  ]
}
EOF
}

resource "aws_iam_role" "kube-worker" {
  name = "kubernetes-${var.aws_cluster_name}-node"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      }
    }
  ]
}
EOF
}

#Add AWS Policies for Kubernetes

resource "aws_iam_role_policy" "kube_control_plane" {
  name = "kubernetes-${var.aws_cluster_name}-master"
  role = aws_iam_role.kube_control_plane.id

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:*"],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": ["elasticloadbalancing:*"],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": ["route53:*"],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::kubernetes-*"
      ]
    }
  ]
}
EOF
}

resource "aws_iam_role_policy" "kube-worker" {
  name = "kubernetes-${var.aws_cluster_name}-node"
  role = aws_iam_role.kube-worker.id

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::kubernetes-*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "ec2:Describe*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "ec2:AttachVolume",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "ec2:DetachVolume",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["route53:*"],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}

#Create AWS Instance Profiles

resource "aws_iam_instance_profile" "kube_control_plane" {
  name = "kube_${var.aws_cluster_name}_master_profile"
  role = aws_iam_role.kube_control_plane.name
}

resource "aws_iam_instance_profile" "kube-worker" {
  name = "kube_${var.aws_cluster_name}_node_profile"
  role = aws_iam_role.kube-worker.name
}

contrib/terraform/aws/modules/iam/outputs.tf (new file, 7 lines)
@@ -0,0 +1,7 @@
output "kube_control_plane-profile" {
  value = aws_iam_instance_profile.kube_control_plane.name
}

output "kube-worker-profile" {
  value = aws_iam_instance_profile.kube-worker.name
}

contrib/terraform/aws/modules/iam/variables.tf (new file, 3 lines)
@@ -0,0 +1,3 @@
variable "aws_cluster_name" {
  description = "Name of Cluster"
}

contrib/terraform/aws/modules/nlb/main.tf (new file, 41 lines)
@@ -0,0 +1,41 @@
# Create a new AWS NLB for K8S API
resource "aws_lb" "aws-nlb-api" {
  name                             = "kubernetes-nlb-${var.aws_cluster_name}"
  load_balancer_type               = "network"
  subnets                          = length(var.aws_subnet_ids_public) <= length(var.aws_avail_zones) ? var.aws_subnet_ids_public : slice(var.aws_subnet_ids_public, 0, length(var.aws_avail_zones))
  idle_timeout                     = 400
  enable_cross_zone_load_balancing = true

  tags = merge(var.default_tags, tomap({
    Name = "kubernetes-${var.aws_cluster_name}-nlb-api"
  }))
}

# Create a new AWS NLB Instance Target Group
resource "aws_lb_target_group" "aws-nlb-api-tg" {
  name        = "kubernetes-nlb-tg-${var.aws_cluster_name}"
  port        = var.k8s_secure_api_port
  protocol    = "TCP"
  target_type = "ip"
  vpc_id      = var.aws_vpc_id

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    interval            = 30
    protocol            = "HTTPS"
    path                = "/healthz"
  }
}

# Create a new AWS NLB Listener that forwards to the target group
resource "aws_lb_listener" "aws-nlb-api-listener" {
  load_balancer_arn = aws_lb.aws-nlb-api.arn
  port              = var.aws_nlb_api_port
  protocol          = "TCP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.aws-nlb-api-tg.arn
  }
}

contrib/terraform/aws/modules/nlb/outputs.tf (new file, 11 lines)
@@ -0,0 +1,11 @@
output "aws_nlb_api_id" {
  value = aws_lb.aws-nlb-api.id
}

output "aws_nlb_api_fqdn" {
  value = aws_lb.aws-nlb-api.dns_name
}

output "aws_nlb_api_tg_arn" {
  value = aws_lb_target_group.aws-nlb-api-tg.arn
}

contrib/terraform/aws/modules/nlb/variables.tf (new file, 30 lines)
@@ -0,0 +1,30 @@
variable "aws_cluster_name" {
  description = "Name of Cluster"
}

variable "aws_vpc_id" {
  description = "AWS VPC ID"
}

variable "aws_nlb_api_port" {
  description = "Port for AWS NLB"
}

variable "k8s_secure_api_port" {
  description = "Secure Port of K8S API Server"
}

variable "aws_avail_zones" {
  description = "Availability Zones Used"
  type        = list(string)
}

variable "aws_subnet_ids_public" {
  description = "IDs of Public Subnets"
  type        = list(string)
}

variable "default_tags" {
  description = "Tags for all resources"
  type        = map(string)
}

contrib/terraform/aws/modules/vpc/main.tf (new file, 137 lines)
@@ -0,0 +1,137 @@
resource "aws_vpc" "cluster-vpc" {
  cidr_block = var.aws_vpc_cidr_block

  #DNS Related Entries
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = merge(var.default_tags, tomap({
    Name = "kubernetes-${var.aws_cluster_name}-vpc"
  }))
}

resource "aws_eip" "cluster-nat-eip" {
  count = length(var.aws_cidr_subnets_public)
  vpc   = true
}

resource "aws_internet_gateway" "cluster-vpc-internetgw" {
  vpc_id = aws_vpc.cluster-vpc.id

  tags = merge(var.default_tags, tomap({
    Name = "kubernetes-${var.aws_cluster_name}-internetgw"
  }))
}

resource "aws_subnet" "cluster-vpc-subnets-public" {
  vpc_id            = aws_vpc.cluster-vpc.id
  count             = length(var.aws_cidr_subnets_public)
  availability_zone = element(var.aws_avail_zones, count.index % length(var.aws_avail_zones))
  cidr_block        = element(var.aws_cidr_subnets_public, count.index)

  tags = merge(var.default_tags, tomap({
    Name                                            = "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-public"
    "kubernetes.io/cluster/${var.aws_cluster_name}" = "shared"
    "kubernetes.io/role/elb"                        = "1"
  }))
}

resource "aws_nat_gateway" "cluster-nat-gateway" {
  count         = length(var.aws_cidr_subnets_public)
  allocation_id = element(aws_eip.cluster-nat-eip.*.id, count.index)
  subnet_id     = element(aws_subnet.cluster-vpc-subnets-public.*.id, count.index)
}

resource "aws_subnet" "cluster-vpc-subnets-private" {
  vpc_id            = aws_vpc.cluster-vpc.id
  count             = length(var.aws_cidr_subnets_private)
  availability_zone = element(var.aws_avail_zones, count.index % length(var.aws_avail_zones))
  cidr_block        = element(var.aws_cidr_subnets_private, count.index)

  tags = merge(var.default_tags, tomap({
    Name                                            = "kubernetes-${var.aws_cluster_name}-${element(var.aws_avail_zones, count.index)}-private"
    "kubernetes.io/cluster/${var.aws_cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"               = "1"
  }))
}

#Routing in VPC

#TODO: Do we need two routing tables for each subnet for redundancy or is one enough?

resource "aws_route_table" "kubernetes-public" {
  vpc_id = aws_vpc.cluster-vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.cluster-vpc-internetgw.id
  }

  tags = merge(var.default_tags, tomap({
    Name = "kubernetes-${var.aws_cluster_name}-routetable-public"
  }))
}

resource "aws_route_table" "kubernetes-private" {
  count  = length(var.aws_cidr_subnets_private)
  vpc_id = aws_vpc.cluster-vpc.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = element(aws_nat_gateway.cluster-nat-gateway.*.id, count.index)
  }

  tags = merge(var.default_tags, tomap({
    Name = "kubernetes-${var.aws_cluster_name}-routetable-private-${count.index}"
  }))
}

resource "aws_route_table_association" "kubernetes-public" {
  count          = length(var.aws_cidr_subnets_public)
  subnet_id      = element(aws_subnet.cluster-vpc-subnets-public.*.id, count.index)
  route_table_id = aws_route_table.kubernetes-public.id
}

resource "aws_route_table_association" "kubernetes-private" {
  count          = length(var.aws_cidr_subnets_private)
  subnet_id      = element(aws_subnet.cluster-vpc-subnets-private.*.id, count.index)
  route_table_id = element(aws_route_table.kubernetes-private.*.id, count.index)
}

#Kubernetes Security Groups

resource "aws_security_group" "kubernetes" {
  name   = "kubernetes-${var.aws_cluster_name}-securitygroup"
  vpc_id = aws_vpc.cluster-vpc.id

  tags = merge(var.default_tags, tomap({
    Name = "kubernetes-${var.aws_cluster_name}-securitygroup"
  }))
}

resource "aws_security_group_rule" "allow-all-ingress" {
  type              = "ingress"
  from_port         = 0
  to_port           = 65535
  protocol          = "-1"
  cidr_blocks       = [var.aws_vpc_cidr_block]
  security_group_id = aws_security_group.kubernetes.id
}

resource "aws_security_group_rule" "allow-all-egress" {
  type              = "egress"
  from_port         = 0
  to_port           = 65535
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.kubernetes.id
}

resource "aws_security_group_rule" "allow-ssh-connections" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "TCP"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.kubernetes.id
}

contrib/terraform/aws/modules/vpc/outputs.tf (new file, 19 lines)
@@ -0,0 +1,19 @@
output "aws_vpc_id" {
  value = aws_vpc.cluster-vpc.id
}

output "aws_subnet_ids_private" {
  value = aws_subnet.cluster-vpc-subnets-private.*.id
}

output "aws_subnet_ids_public" {
  value = aws_subnet.cluster-vpc-subnets-public.*.id
}

output "aws_security_group" {
  value = aws_security_group.kubernetes.*.id
}

output "default_tags" {
  value = var.default_tags
}

contrib/terraform/aws/modules/vpc/variables.tf (new file, 27 lines)
@@ -0,0 +1,27 @@
variable "aws_vpc_cidr_block" {
  description = "CIDR Blocks for AWS VPC"
}

variable "aws_cluster_name" {
  description = "Name of Cluster"
}

variable "aws_avail_zones" {
  description = "AWS Availability Zones Used"
  type        = list(string)
}

variable "aws_cidr_subnets_private" {
  description = "CIDR Blocks for private subnets in Availability zones"
  type        = list(string)
}

variable "aws_cidr_subnets_public" {
  description = "CIDR Blocks for public subnets in Availability zones"
  type        = list(string)
}

variable "default_tags" {
  description = "Default tags for all resources"
  type        = map(string)
}

contrib/terraform/aws/output.tf (new file, 27 lines)
@@ -0,0 +1,27 @@
output "bastion_ip" {
  value = join("\n", aws_instance.bastion-server.*.public_ip)
}

output "masters" {
  value = join("\n", aws_instance.k8s-master.*.private_ip)
}

output "workers" {
  value = join("\n", aws_instance.k8s-worker.*.private_ip)
}

output "etcd" {
  value = join("\n", ((var.aws_etcd_num > 0) ? (aws_instance.k8s-etcd.*.private_ip) : (aws_instance.k8s-master.*.private_ip)))
}

output "aws_nlb_api_fqdn" {
  value = "${module.aws-nlb.aws_nlb_api_fqdn}:${var.aws_nlb_api_port}"
}

output "inventory" {
  value = data.template_file.inventory.rendered
}

output "default_tags" {
  value = var.default_tags
}

contrib/terraform/aws/sample-inventory/cluster.tfvars (new file, 59 lines)
@@ -0,0 +1,59 @@
#Global Vars
aws_cluster_name = "devtest"

#VPC Vars
aws_vpc_cidr_block = "10.250.192.0/18"

aws_cidr_subnets_private = ["10.250.192.0/20", "10.250.208.0/20"]

aws_cidr_subnets_public = ["10.250.224.0/20", "10.250.240.0/20"]

#Bastion Host
aws_bastion_num = 1

aws_bastion_size = "t2.medium"

#Kubernetes Cluster

aws_kube_master_num = 3

aws_kube_master_size = "t2.medium"

aws_kube_master_disk_size = 50

aws_etcd_num = 3

aws_etcd_size = "t2.medium"

aws_etcd_disk_size = 50

aws_kube_worker_num = 4

aws_kube_worker_size = "t2.medium"

aws_kube_worker_disk_size = 50

#Settings AWS NLB

aws_nlb_api_port = 6443

k8s_secure_api_port = 6443

default_tags = {
  # Env     = "devtest"
  # Product = "kubernetes"
}

inventory_file = "../../../inventory/hosts"

## Credentials
#AWS Access Key
AWS_ACCESS_KEY_ID = ""

#AWS Secret Key
AWS_SECRET_ACCESS_KEY = ""

#EC2 SSH Key Name
AWS_SSH_KEY_NAME = ""

#AWS Region
AWS_DEFAULT_REGION = "eu-central-1"

contrib/terraform/aws/sample-inventory/group_vars (new symbolic link, 1 line)
@@ -0,0 +1 @@
../../../../inventory/sample/group_vars

contrib/terraform/aws/templates/inventory.tpl (new file, 27 lines)
@@ -0,0 +1,27 @@
[all]
${connection_strings_master}
${connection_strings_node}
${connection_strings_etcd}
${public_ip_address_bastion}

[bastion]
${public_ip_address_bastion}

[kube_control_plane]
${list_master}

[kube_node]
${list_node}

[etcd]
${list_etcd}

[calico_rr]

[k8s_cluster:children]
kube_node
kube_control_plane
calico_rr

[k8s_cluster:vars]
${nlb_api_fqdn}

contrib/terraform/aws/terraform.tfvars (new file, 43 lines)
@@ -0,0 +1,43 @@
#Global Vars
aws_cluster_name = "devtest"

#VPC Vars
aws_vpc_cidr_block       = "10.250.192.0/18"
aws_cidr_subnets_private = ["10.250.192.0/20", "10.250.208.0/20"]
aws_cidr_subnets_public  = ["10.250.224.0/20", "10.250.240.0/20"]

# single AZ deployment
#aws_cidr_subnets_private = ["10.250.192.0/20"]
#aws_cidr_subnets_public = ["10.250.224.0/20"]

# 3+ AZ deployment
#aws_cidr_subnets_private = ["10.250.192.0/24","10.250.193.0/24","10.250.194.0/24","10.250.195.0/24"]
#aws_cidr_subnets_public = ["10.250.224.0/24","10.250.225.0/24","10.250.226.0/24","10.250.227.0/24"]

#Bastion Host
aws_bastion_num  = 1
aws_bastion_size = "t3.small"

#Kubernetes Cluster
aws_kube_master_num       = 3
aws_kube_master_size      = "t3.medium"
aws_kube_master_disk_size = 50

aws_etcd_num       = 0
aws_etcd_size      = "t3.medium"
aws_etcd_disk_size = 50

aws_kube_worker_num       = 4
aws_kube_worker_size      = "t3.medium"
aws_kube_worker_disk_size = 50

#Settings AWS ELB
aws_nlb_api_port    = 6443
k8s_secure_api_port = 6443

default_tags = {
  # Env = "devtest"
  # Product = "kubernetes"
}

inventory_file = "../../../inventory/hosts"

contrib/terraform/aws/terraform.tfvars.example (new file, 33 lines)
@@ -0,0 +1,33 @@
#Global Vars
aws_cluster_name = "devtest"

#VPC Vars
aws_vpc_cidr_block       = "10.250.192.0/18"
aws_cidr_subnets_private = ["10.250.192.0/20","10.250.208.0/20"]
aws_cidr_subnets_public  = ["10.250.224.0/20","10.250.240.0/20"]
aws_avail_zones          = ["eu-central-1a","eu-central-1b"]

#Bastion Host
aws_bastion_num  = 1
aws_bastion_size = "t3.small"

#Kubernetes Cluster
aws_kube_master_num       = 3
aws_kube_master_size      = "t3.medium"
aws_kube_master_disk_size = 50

aws_etcd_num       = 3
aws_etcd_size      = "t3.medium"
aws_etcd_disk_size = 50

aws_kube_worker_num       = 4
aws_kube_worker_size      = "t3.medium"
aws_kube_worker_disk_size = 50

#Settings AWS ELB
aws_nlb_api_port    = 6443
k8s_secure_api_port = 6443

default_tags = { }

inventory_file = "../../../inventory/hosts"

contrib/terraform/aws/variables.tf (new file, 143 lines)
@@ -0,0 +1,143 @@
variable "AWS_ACCESS_KEY_ID" {
  description = "AWS Access Key"
}

variable "AWS_SECRET_ACCESS_KEY" {
  description = "AWS Secret Key"
}

variable "AWS_SSH_KEY_NAME" {
  description = "Name of the SSH keypair to use in AWS."
}

variable "AWS_DEFAULT_REGION" {
  description = "AWS Region"
}

//General Cluster Settings

variable "aws_cluster_name" {
  description = "Name of AWS Cluster"
}

variable "ami_name_pattern" {
  description = "The name pattern to use for AMI lookup"
  type        = string
  default     = "debian-10-amd64-*"
}

variable "ami_virtualization_type" {
  description = "The virtualization type to use for AMI lookup"
  type        = string
  default     = "hvm"
}

variable "ami_owners" {
  description = "The owners to use for AMI lookup"
  type        = list(string)
  default     = ["136693071363"]
}

data "aws_ami" "distro" {
  most_recent = true

  filter {
    name   = "name"
    values = [var.ami_name_pattern]
  }

  filter {
    name   = "virtualization-type"
    values = [var.ami_virtualization_type]
  }

  owners = var.ami_owners
}

//AWS VPC Variables

variable "aws_vpc_cidr_block" {
  description = "CIDR Block for VPC"
}

variable "aws_cidr_subnets_private" {
  description = "CIDR Blocks for private subnets in Availability Zones"
  type        = list(string)
}

variable "aws_cidr_subnets_public" {
  description = "CIDR Blocks for public subnets in Availability Zones"
  type        = list(string)
}

//AWS EC2 Settings

variable "aws_bastion_size" {
  description = "EC2 Instance Size of Bastion Host"
}

/*
* AWS EC2 Settings
* The number should be divisible by the number of used
* AWS Availability Zones without a remainder.
*/
variable "aws_bastion_num" {
  description = "Number of Bastion Nodes"
}

variable "aws_kube_master_num" {
  description = "Number of Kubernetes Master Nodes"
}

variable "aws_kube_master_disk_size" {
  description = "Disk size for Kubernetes Master Nodes (in GiB)"
}

variable "aws_kube_master_size" {
  description = "Instance size of Kube Master Nodes"
}

variable "aws_etcd_num" {
  description = "Number of etcd Nodes"
}

variable "aws_etcd_disk_size" {
  description = "Disk size for etcd Nodes (in GiB)"
}

variable "aws_etcd_size" {
  description = "Instance size of etcd Nodes"
}

variable "aws_kube_worker_num" {
  description = "Number of Kubernetes Worker Nodes"
}

variable "aws_kube_worker_disk_size" {
  description = "Disk size for Kubernetes Worker Nodes (in GiB)"
}

variable "aws_kube_worker_size" {
  description = "Instance size of Kubernetes Worker Nodes"
}

/*
* AWS NLB Settings
*
*/
variable "aws_nlb_api_port" {
  description = "Port for AWS NLB"
}

variable "k8s_secure_api_port" {
  description = "Secure Port of K8S API Server"
}

variable "default_tags" {
  description = "Default tags for all resources"
  type        = map(string)
}

variable "inventory_file" {
  description = "Where to store the generated inventory file"
}