Add kubespray 2.24

contrib/terraform/openstack/.gitignore (vendored, new file)
@@ -0,0 +1,5 @@
.terraform
*.tfvars
!sample-inventory/cluster.tfvars
*.tfstate
*.tfstate.backup

contrib/terraform/openstack/README.md (new file)
@@ -0,0 +1,786 @@
# Kubernetes on OpenStack with Terraform

Provision a Kubernetes cluster with [Terraform](https://www.terraform.io) on
OpenStack.

## Status

This will install a Kubernetes cluster on an OpenStack cloud. It should work on
most modern installs of OpenStack that support the basic services.

### Known compatible public clouds

- [Auro](https://auro.io/)
- [Betacloud](https://www.betacloud.io/)
- [CityCloud](https://www.citycloud.com/)
- [DreamHost](https://www.dreamhost.com/cloud/computing/)
- [ELASTX](https://elastx.se/)
- [EnterCloudSuite](https://www.entercloudsuite.com/)
- [FugaCloud](https://fuga.cloud/)
- [Open Telekom Cloud](https://cloud.telekom.de/)
- [OVH](https://www.ovh.com/)
- [Rackspace](https://www.rackspace.com/)
- [Safespring](https://www.safespring.com)
- [Ultimum](https://ultimum.io/)
- [VexxHost](https://vexxhost.com/)
- [Zetta](https://www.zetta.io/)

## Approach

The Terraform configuration inspects variables found in
[variables.tf](variables.tf) to create resources in your OpenStack cluster.
There is a [python script](../terraform.py) that reads the generated `.tfstate`
file to generate a dynamic inventory that is consumed by the main Ansible
playbook to actually install Kubernetes and stand up the cluster.

### Networking

The configuration includes creating a private subnet with a router to the
external net. It will allocate floating IPs from a pool and assign them to the
hosts where that makes sense. You have the option of creating bastion hosts
inside the private subnet to access the nodes there. Alternatively, a node with
a floating IP can be used as a jump host to nodes without one.

#### Using an existing router

It is possible to use an existing router instead of creating one. To use an
existing router, set the `router_id` variable to the UUID of the router you
wish to use.

For example:

```ShellSession
router_id = "00c542e7-6f46-4535-ae95-984c7f0391a3"
```

### Kubernetes Nodes

You can create many different Kubernetes topologies by setting the number of
different classes of hosts. For each class there are options for allocating
floating IP addresses or not.

- Master nodes with etcd
- Master nodes without etcd
- Standalone etcd hosts
- Kubernetes worker nodes

Note that the Ansible script will fail if you wind up with an even number of
etcd instances, since that is not a valid configuration. This restriction
includes standalone etcd nodes that are deployed in a cluster along with master
nodes with etcd replicas. As an example, if you have three master nodes with
etcd replicas and three standalone etcd nodes, the script will fail since there
are now six total etcd replicas.

### GlusterFS shared file system

The Terraform configuration supports provisioning of an optional GlusterFS
shared file system based on a separate set of VMs. To enable this, you need to
specify:

- The number of Gluster hosts (minimum 2)
- The size of the non-ephemeral volumes to be attached to store the GlusterFS bricks
- Other properties related to provisioning the hosts

Even if you are using Flatcar Container Linux by Kinvolk for your cluster, you will still
need the GlusterFS VMs to be based on either Debian or RedHat based images.
Flatcar Container Linux by Kinvolk cannot serve GlusterFS, but can connect to it through
binaries available on hyperkube v1.4.3_coreos.0 or higher.

## Requirements

- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html) 0.14 or later
- [Install Ansible](http://docs.ansible.com/ansible/latest/intro_installation.html)
- you already have a suitable OS image in Glance
- you already have a floating IP pool created
- you have security groups enabled
- you have a pair of keys generated that can be used to secure the new hosts

## Module Architecture

The configuration is divided into three modules:

- Network
- IPs
- Compute

The main reason for splitting the configuration up in this way is to easily
accommodate situations where floating IPs are limited by a quota or if you have
any external references to the floating IPs (e.g. DNS) that would otherwise have
to be updated.

You can force your existing IPs by modifying the compute variables in
`kubespray.tf` as follows:

```ini
k8s_master_fips = ["151.101.129.67"]
k8s_node_fips = ["151.101.129.68"]
```

## Terraform

Terraform will be used to provision all of the OpenStack resources with base software as appropriate.

### Configuration

#### Inventory files

Create an inventory directory for your cluster by copying the existing sample and linking the `hosts` script (used to build the inventory based on Terraform state):

```ShellSession
cp -LRp contrib/terraform/openstack/sample-inventory inventory/$CLUSTER
cd inventory/$CLUSTER
ln -s ../../contrib/terraform/openstack/hosts
ln -s ../../contrib
```

This will be the base for subsequent Terraform commands.
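
Here `$CLUSTER` is simply a shell variable holding the name you choose for the cluster's inventory directory; for example (the name itself is an illustrative assumption):

```ShellSession
export CLUSTER=mycluster
```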

#### OpenStack access and credentials

No provider variables are hardcoded inside `variables.tf` because Terraform
supports various authentication methods for OpenStack: the older script and
environment method (using `openrc`) as well as a newer declarative method, and
different OpenStack environments may support Identity API version 2 or 3.

These are examples and may vary depending on your OpenStack cloud provider;
for an exhaustive list of ways to authenticate to OpenStack with Terraform,
please read the [OpenStack provider documentation](https://www.terraform.io/docs/providers/openstack/).

##### Declarative method (recommended)

The recommended authentication method is to describe credentials in a YAML file `clouds.yaml` that can be stored in:

- the current directory
- `~/.config/openstack`
- `/etc/openstack`

`clouds.yaml`:

```yaml
clouds:
  mycloud:
    auth:
      auth_url: https://openstack:5000/v3
      username: "username"
      project_name: "projectname"
      project_id: projectid
      user_domain_name: "Default"
      password: "password"
    region_name: "RegionOne"
    interface: "public"
    identity_api_version: 3
```

If you have multiple clouds defined in your `clouds.yaml` file you can choose
the one you want to use with the environment variable `OS_CLOUD`:

```ShellSession
export OS_CLOUD=mycloud
```

##### Openrc method

When using classic environment variables, Terraform uses the default `OS_*`
environment variables. A script suitable for your environment may be available
from Horizon under *Project* -> *Compute* -> *Access & Security* -> *API Access*.

With identity v2:

```ShellSession
source openrc

env | grep OS

OS_AUTH_URL=https://openstack:5000/v2.0
OS_PROJECT_ID=projectid
OS_PROJECT_NAME=projectname
OS_USERNAME=username
OS_PASSWORD=password
OS_REGION_NAME=RegionOne
OS_INTERFACE=public
OS_IDENTITY_API_VERSION=2
```

With identity v3:

```ShellSession
source openrc

env | grep OS

OS_AUTH_URL=https://openstack:5000/v3
OS_PROJECT_ID=projectid
OS_PROJECT_NAME=projectname
OS_PROJECT_DOMAIN_ID=default
OS_USERNAME=username
OS_PASSWORD=password
OS_REGION_NAME=RegionOne
OS_INTERFACE=public
OS_IDENTITY_API_VERSION=3
OS_USER_DOMAIN_NAME=Default
```

Terraform does not support a mix of DomainName and DomainID; choose one or the other, otherwise you will see an error like:

- provider.openstack: You must provide exactly one of DomainID or DomainName to authenticate by Username

```ShellSession
unset OS_USER_DOMAIN_NAME
export OS_USER_DOMAIN_ID=default
```

or

```ShellSession
unset OS_PROJECT_DOMAIN_ID
export OS_PROJECT_DOMAIN_NAME=Default
```

#### Cluster variables

The construction of the cluster is driven by values found in
[variables.tf](variables.tf).

For your cluster, edit `inventory/$CLUSTER/cluster.tfvars`.

|Variable | Description |
|---------|-------------|
|`cluster_name` | All OpenStack resources will use the Terraform variable `cluster_name` (default `example`) in their name to make it easier to track. For example the first compute resource will be named `example-kubernetes-1`. |
|`az_list` | List of Availability Zones available in your OpenStack cluster. |
|`network_name` | The name to be given to the internal network that will be generated |
|`use_existing_network` | Use an existing network with the name of `network_name`. `false` by default |
|`network_dns_domain` | (Optional) The dns_domain for the internal network that will be generated |
|`dns_nameservers` | An array of DNS name server names to be used by hosts in the internal subnet. |
|`floatingip_pool` | Name of the pool from which floating IPs will be allocated |
|`k8s_master_fips` | A list of floating IPs that you have already pre-allocated; they will be attached to master nodes instead of creating new random floating IPs. |
|`bastion_fips` | A list of floating IPs that you have already pre-allocated; they will be attached to the bastion node instead of creating new random floating IPs. |
|`external_net` | UUID of the external network that will be routed to |
|`flavor_k8s_master`, `flavor_k8s_node`, `flavor_etcd`, `flavor_bastion`, `flavor_gfs_node` | Flavor depends on your OpenStack installation; you can get available flavor IDs through `openstack flavor list` |
|`image`, `image_gfs` | Name of the image to use in provisioning the compute resources. Should already be loaded into Glance. |
|`ssh_user`, `ssh_user_gfs` | The username to SSH into the image with. This usually depends on the image you have selected |
|`public_key_path` | Path on your local workstation to the public key file you wish to use in creating the key pairs |
|`number_of_k8s_masters`, `number_of_k8s_masters_no_floating_ip` | Number of nodes that serve as both master and etcd. These can be provisioned with or without floating IP addresses |
|`number_of_k8s_masters_no_etcd`, `number_of_k8s_masters_no_floating_ip_no_etcd` | Number of nodes that serve as just master with no etcd. These can be provisioned with or without floating IP addresses |
|`number_of_etcd` | Number of pure etcd nodes |
|`number_of_k8s_nodes`, `number_of_k8s_nodes_no_floating_ip` | Kubernetes worker nodes. These can be provisioned with or without floating IP addresses. |
|`number_of_bastions` | Number of bastion hosts to create. Scripts assume this is really just zero or one |
|`number_of_gfs_nodes_no_floating_ip` | Number of Gluster servers to provision. |
|`gfs_volume_size_in_gb` | Size of the non-ephemeral volumes to be attached to store the GlusterFS bricks |
|`supplementary_master_groups` | To add Ansible groups to the masters, such as `kube_node` for tainting them as nodes; empty by default. |
|`supplementary_node_groups` | To add Ansible groups to the nodes, such as `kube_ingress` for running ingress controller pods; empty by default. |
|`bastion_allowed_remote_ips` | List of CIDR blocks allowed to initiate an SSH connection, `["0.0.0.0/0"]` by default |
|`master_allowed_remote_ips` | List of CIDR blocks allowed to initiate an API connection, `["0.0.0.0/0"]` by default |
|`bastion_allowed_ports` | List of ports to open on the bastion node, `[]` by default |
|`k8s_allowed_remote_ips` | List of CIDR blocks allowed to initiate an SSH connection, empty by default |
|`worker_allowed_ports` | List of ports to open on worker nodes, `[{ "protocol" = "tcp", "port_range_min" = 30000, "port_range_max" = 32767, "remote_ip_prefix" = "0.0.0.0/0"}]` by default |
|`master_allowed_ports` | List of ports to open on master nodes, expected format is `[{ "protocol" = "tcp", "port_range_min" = 443, "port_range_max" = 443, "remote_ip_prefix" = "0.0.0.0/0"}]`, empty by default |
|`node_root_volume_size_in_gb` | Size of the root volume for nodes, 0 to use ephemeral storage |
|`master_root_volume_size_in_gb` | Size of the root volume for masters, 0 to use ephemeral storage |
|`master_volume_type` | Volume type of the root volume for the control plane, 'Default' by default |
|`node_volume_type` | Volume type of the root volume for nodes, 'Default' by default |
|`gfs_root_volume_size_in_gb` | Size of the root volume for Gluster, 0 to use ephemeral storage |
|`etcd_root_volume_size_in_gb` | Size of the root volume for etcd nodes, 0 to use ephemeral storage |
|`bastion_root_volume_size_in_gb` | Size of the root volume for bastions, 0 to use ephemeral storage |
|`master_server_group_policy` | Enable and use OpenStack Nova server groups for masters with the set policy, default: "" (disabled) |
|`node_server_group_policy` | Enable and use OpenStack Nova server groups for nodes with the set policy, default: "" (disabled) |
|`etcd_server_group_policy` | Enable and use OpenStack Nova server groups for etcd with the set policy, default: "" (disabled) |
|`additional_server_groups` | Extra server groups to create. Set "policy" to the policy for the group, expected format is `{"new-server-group" = {"policy" = "anti-affinity"}}`, default: {} (to not create any extra groups) |
|`use_access_ip` | If 1, nodes with floating IPs will transmit internal cluster traffic via floating IPs; if 0, private IPs will be used instead. Default value is 1. |
|`port_security_enabled` | Allow to disable port security by setting this to `false`. `true` by default |
|`force_null_port_security` | Set `null` instead of `true` or `false` for `port_security`. `false` by default |
|`k8s_nodes` | Map containing worker node definitions, see the explanation below |
|`k8s_masters` | Map containing master node definitions, see the explanation for `k8s_nodes` and `sample-inventory/cluster.tfvars` |
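
To make the table concrete, here is a minimal `cluster.tfvars` sketch; every value below is an illustrative assumption (IDs, names and counts must come from your own cloud), not a default:

```ShellSession
cat > inventory/$CLUSTER/cluster.tfvars <<'EOF'
# All values are illustrative assumptions -- replace with your own.
cluster_name          = "example"
external_net          = "<external-network-uuid>"
network_name          = "example-network"
floatingip_pool       = "public"
public_key_path       = "~/.ssh/id_rsa.pub"
image                 = "ubuntu-22.04"
ssh_user              = "ubuntu"
flavor_k8s_master     = "<flavor-id>"
flavor_k8s_node       = "<flavor-id>"
number_of_k8s_masters = 1
number_of_k8s_nodes   = 2
EOF
```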

##### k8s_nodes

Allows a custom definition of worker nodes giving the operator full control over individual node flavor and availability zone placement.
To enable the use of this mode, set the `number_of_k8s_nodes` and `number_of_k8s_nodes_no_floating_ip` variables to 0.
Then define your desired worker node configuration using the `k8s_nodes` variable.
The `az`, `flavor` and `floating_ip` parameters are mandatory.
The optional parameter `extra_groups` (a comma-delimited string) can be used to define extra inventory group memberships for specific nodes.

```yaml
k8s_nodes:
  node-name:
    az: string # Name of the AZ
    flavor: string # Flavor ID to use
    floating_ip: bool # If floating IPs should be created or not
    extra_groups: string # (optional) Additional groups to add for kubespray, defaults to no groups
    image_id: string # (optional) Image ID to use, defaults to var.image_id or var.image
    root_volume_size_in_gb: number # (optional) Size of the block storage to use as root disk, defaults to var.node_root_volume_size_in_gb or to use volume from flavor otherwise
    volume_type: string # (optional) Volume type to use, defaults to var.node_volume_type
    network_id: string # (optional) Use this network_id for the node, defaults to either var.network_id or ID of var.network_name
    server_group: string # (optional) Server group to add this node to. If set, this has to be one specified in additional_server_groups, defaults to use the server group specified in node_server_group_policy
    cloudinit: # (optional) Options for cloud-init
      extra_partitions: # List of extra partitions (other than the root partition) to set up during creation
        volume_path: string # Path to the volume to create the partition for (e.g. /dev/vda )
        partition_path: string # Path to the partition (e.g. /dev/vda2 )
        mount_path: string # Path where the partition should be mounted
        partition_start: string # Where the partition should start (e.g. 10GB ). Note: if you set partition_start to 0, there will be no space left for the root partition
        partition_end: string # Where the partition should end (e.g. 10GB or -1 for end of volume)
      netplan_critical_dhcp_interface: string # Name of the interface to set the dhcp flag critical = true on, to circumvent [this issue](https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1776013).
```

For example:

```ini
k8s_nodes = {
  "1" = {
    "az" = "sto1"
    "flavor" = "83d8b44a-26a0-4f02-a981-079446926445"
    "floating_ip" = true
  },
  "2" = {
    "az" = "sto2"
    "flavor" = "83d8b44a-26a0-4f02-a981-079446926445"
    "floating_ip" = true
  },
  "3" = {
    "az" = "sto3"
    "flavor" = "83d8b44a-26a0-4f02-a981-079446926445"
    "floating_ip" = true
    "extra_groups" = "calico_rr"
  }
}
```

Would result in the same configuration as:

```ini
number_of_k8s_nodes = 3
flavor_k8s_node = "83d8b44a-26a0-4f02-a981-079446926445"
az_list = ["sto1", "sto2", "sto3"]
```

And:

```ini
k8s_nodes = {
  "ing-1" = {
    "az" = "sto1"
    "flavor" = "83d8b44a-26a0-4f02-a981-079446926445"
    "floating_ip" = true
  },
  "ing-2" = {
    "az" = "sto2"
    "flavor" = "83d8b44a-26a0-4f02-a981-079446926445"
    "floating_ip" = true
  },
  "ing-3" = {
    "az" = "sto3"
    "flavor" = "83d8b44a-26a0-4f02-a981-079446926445"
    "floating_ip" = true
  },
  "big-1" = {
    "az" = "sto1"
    "flavor" = "3f73fc93-ec61-4808-88df-2580d94c1a9b"
    "floating_ip" = false
  },
  "big-2" = {
    "az" = "sto2"
    "flavor" = "3f73fc93-ec61-4808-88df-2580d94c1a9b"
    "floating_ip" = false
  },
  "big-3" = {
    "az" = "sto3"
    "flavor" = "3f73fc93-ec61-4808-88df-2580d94c1a9b"
    "floating_ip" = false
  },
  "small-1" = {
    "az" = "sto1"
    "flavor" = "7a6a998f-ac7f-4fb8-a534-2175b254f75e"
    "floating_ip" = false
  },
  "small-2" = {
    "az" = "sto2"
    "flavor" = "7a6a998f-ac7f-4fb8-a534-2175b254f75e"
    "floating_ip" = false
  },
  "small-3" = {
    "az" = "sto3"
    "flavor" = "7a6a998f-ac7f-4fb8-a534-2175b254f75e"
    "floating_ip" = false
  }
}
```

Would result in three nodes in each availability zone, each with its own
naming, flavor and floating IP configuration.

The "schema":

```ini
k8s_nodes = {
  "key | node name suffix, must be unique" = {
    "az" = string
    "flavor" = string
    "floating_ip" = bool
  },
}
```

All three values are required.

#### Terraform state files

In the cluster's inventory folder, the following files might be created (either by Terraform
or manually). To prevent you from pushing them accidentally, they are listed in a
`.gitignore` file in the `terraform/openstack` directory:

- `.terraform`
- `.tfvars`
- `.tfstate`
- `.tfstate.backup`

You can still add them manually if you want to.

### Initialization

Before Terraform can operate on your cluster you need to install the required
plugins. This is accomplished as follows:

```ShellSession
cd inventory/$CLUSTER
terraform -chdir="../../contrib/terraform/openstack" init
```

This should finish fairly quickly, telling you Terraform has successfully initialized and loaded the necessary modules.
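
As an optional sanity check (not part of the documented workflow), you can also ask Terraform to validate the configuration before applying it:

```ShellSession
terraform -chdir="../../contrib/terraform/openstack" validate
```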

### Customizing with cloud-init

You can apply cloud-init based customization to the OpenStack instances before provisioning your cluster.
One common template is used for all instances; adjust the file shown below:
`contrib/terraform/openstack/modules/compute/templates/cloudinit.yaml.tmpl`
For example, to enable OpenStack noVNC console access and `ansible_user=root` SSH access:

```yaml
#cloud-config
## in some cases novnc console access is required
## it requires ssh password to be set
ssh_pwauth: yes
chpasswd:
  list: |
    root:secret
  expire: False

## in some cases direct root ssh access via ssh key is required
disable_root: false
```

### Provisioning cluster

You can apply the Terraform configuration to your cluster with the following command
issued from your cluster's inventory directory (`inventory/$CLUSTER`):

```ShellSession
terraform -chdir="../../contrib/terraform/openstack" apply -var-file=cluster.tfvars
```

If you chose to create a bastion host, this script will create
`contrib/terraform/openstack/k8s_cluster.yml` with an SSH command for Ansible to
be able to access your machines, tunneling through the bastion's IP address. If
you want to manually handle the SSH tunneling to these machines, please delete
or move that file. If you want to use this, just leave it there, as Ansible will
pick it up automatically.

### Destroying cluster

You can destroy your new cluster with the following command issued from the cluster's inventory directory:

```ShellSession
terraform -chdir="../../contrib/terraform/openstack" destroy -var-file=cluster.tfvars
```

If you've started the Ansible run, it may also be a good idea to do some manual cleanup:

- remove SSH keys from the destroyed cluster from your `~/.ssh/known_hosts` file
- clean up any temporary cache files: `rm /tmp/$CLUSTER-*`

### Debugging

You can enable debugging output from Terraform by setting
`OS_DEBUG` to 1 and `TF_LOG` to `DEBUG` before running the Terraform command.
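
For example, to capture a verbose log of an apply run:

```ShellSession
export OS_DEBUG=1
export TF_LOG=DEBUG
terraform -chdir="../../contrib/terraform/openstack" apply -var-file=cluster.tfvars
```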

### Terraform output

Terraform can output values that are useful for configuring Neutron/Octavia LBaaS or Cinder persistent volume provisioning as part of your Kubernetes deployment:

- `private_subnet_id`: the subnet where your instances are running, used for `openstack_lbaas_subnet_id`
- `floating_network_id`: the network_id where the floating IPs are provisioned, used for `openstack_lbaas_floating_network_id`
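
These values can be read back at any time from the inventory directory with `terraform output`, for example:

```ShellSession
terraform -chdir="../../contrib/terraform/openstack" output private_subnet_id
```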

## Ansible

### Node access

#### SSH

Ensure your local ssh-agent is running and your SSH key has been added. This
step is required by the Terraform provisioner:

```ShellSession
eval $(ssh-agent -s)
ssh-add ~/.ssh/id_rsa
```

If you have deployed and destroyed a previous iteration of your cluster, you will need to clear out any stale keys from your SSH "known hosts" file (`~/.ssh/known_hosts`).
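
One way to clear a stale entry for a given host (the address is a placeholder):

```ShellSession
ssh-keygen -R <floating-ip-of-node>
```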

#### Metadata variables

The [python script](../terraform.py) that reads the generated `.tfstate` file
to generate a dynamic inventory recognizes some variables within a "metadata"
block, defined in a "resource" block (example):

```ini
resource "openstack_compute_instance_v2" "example" {
  ...
  metadata = {
    ssh_user = "ubuntu"
    prefer_ipv6 = true
    python_bin = "/usr/bin/python3"
  }
  ...
}
```

As the example shows, these let you define the SSH username for
Ansible, a Python binary which is needed by Ansible if
`/usr/bin/python` doesn't exist, and whether the IPv6 address of the
instance should be preferred over IPv4.

#### Bastion host

Bastion access will be determined by:

- The number of bastion hosts you choose (set by the `number_of_bastions` Terraform variable).
- The existence of nodes/masters with floating IPs (set by the `number_of_k8s_masters`, `number_of_k8s_nodes`, `number_of_k8s_masters_no_etcd` Terraform variables).

If you have a bastion host, your SSH traffic will be directly routed through it. This is regardless of whether you have masters/nodes with a floating IP assigned.
If you don't have a bastion host but at least one of your masters/nodes has a floating IP, then SSH traffic will be tunneled via one of these machines.

So either a bastion host or at least one master/node with a floating IP is required.
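
If you do want to reach a node manually through the bastion rather than letting Ansible tunnel for you, a plain SSH jump works; the usernames and addresses below are placeholders:

```ShellSession
ssh -J <ssh-user>@<bastion-fip> <ssh-user>@<node-private-ip>
```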

#### Test access

Make sure you can connect to the hosts. Note that Flatcar Container Linux by Kinvolk will have a state `FAILED` due to Python not being present. This is okay, because Python will be installed during bootstrapping, as long as the hosts are not `UNREACHABLE`.

```ShellSession
$ ansible -i inventory/$CLUSTER/hosts -m ping all
example-k8s_node-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
example-etcd-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
example-k8s-master-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```

If it fails, try to connect manually via SSH. It could be something as simple as a stale host key.
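
For example (user and address are placeholders; `-v` prints the handshake details):

```ShellSession
ssh -v <ssh-user>@<node-floating-ip>
```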

### Configure cluster variables

Edit `inventory/$CLUSTER/group_vars/all/all.yml`:

- **bin_dir**:

  ```yml
  # Directory where the binaries will be installed
  # Default:
  # bin_dir: /usr/local/bin
  # For Flatcar Container Linux by Kinvolk:
  bin_dir: /opt/bin
  ```

- and **cloud_provider**:

  ```yml
  cloud_provider: openstack
  ```

Edit `inventory/$CLUSTER/group_vars/k8s_cluster/k8s_cluster.yml`:

- Set variable **kube_network_plugin** to your desired networking plugin.
  - **flannel** works out-of-the-box
  - **calico** requires [configuring OpenStack Neutron ports](/docs/openstack.md) to allow service and pod subnets

  ```yml
  # Choose network plugin (calico, weave or flannel)
  # Can also be set to 'cloud', which lets the cloud provider set up appropriate routing
  kube_network_plugin: flannel
  ```

- Set variable **resolvconf_mode**

  ```yml
  # Can be docker_dns, host_resolvconf or none
  # Default:
  # resolvconf_mode: docker_dns
  # For Flatcar Container Linux by Kinvolk:
  resolvconf_mode: host_resolvconf
  ```

- Set the maximum number of attached Cinder volumes per host (default 256)

  ```yml
  node_volume_attach_limit: 26
  ```

### Deploy Kubernetes

```ShellSession
ansible-playbook --become -i inventory/$CLUSTER/hosts cluster.yml
```

This will take some time as there are many tasks to run.

## Kubernetes

### Set up kubectl

1. [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your workstation
2. Add a route to the internal IP of a master node (if needed):

   ```ShellSession
   sudo route add [master-internal-ip] gw [router-ip]
   ```

   or

   ```ShellSession
   sudo route add -net [internal-subnet]/24 gw [router-ip]
   ```
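
   On distributions that ship `iproute2` without the legacy `route` tool, the equivalent command (an assumption about your workstation, not part of the original steps) is:

   ```ShellSession
   sudo ip route add [internal-subnet]/24 via [router-ip]
   ```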

3. List Kubernetes certificates & keys:

   ```ShellSession
   ssh [os-user]@[master-ip] sudo ls /etc/kubernetes/ssl/
   ```

4. Get `admin`'s certificates and keys:

   ```ShellSession
   ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-kube-master-1-key.pem > admin-key.pem
   ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-kube-master-1.pem > admin.pem
   ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/ca.pem > ca.pem
   ```

5. Configure kubectl:

   ```ShellSession
   $ kubectl config set-cluster default-cluster --server=https://[master-internal-ip]:6443 \
       --certificate-authority=ca.pem

   $ kubectl config set-credentials default-admin \
       --certificate-authority=ca.pem \
       --client-key=admin-key.pem \
       --client-certificate=admin.pem

   $ kubectl config set-context default-system --cluster=default-cluster --user=default-admin
   $ kubectl config use-context default-system
   ```

6. Check it:

   ```ShellSession
   kubectl version
   ```

## GlusterFS

GlusterFS is not deployed by the standard `cluster.yml` playbook; see the
[GlusterFS playbook documentation](../../network-storage/glusterfs/README.md)
for instructions.

Basically you will install GlusterFS as follows:

```ShellSession
ansible-playbook --become -i inventory/$CLUSTER/hosts ./contrib/network-storage/glusterfs/glusterfs.yml
```

## What's next

Try out your new Kubernetes cluster with the [Hello Kubernetes service](https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/).

## Appendix

### Migration from `number_of_k8s_nodes*` to `k8s_nodes`

If you currently have a cluster defined using the `number_of_k8s_nodes*` variables and wish
to migrate to the `k8s_nodes` style, you can do it like so:

```ShellSession
$ terraform state list
module.compute.data.openstack_images_image_v2.gfs_image
module.compute.data.openstack_images_image_v2.vm_image
module.compute.openstack_compute_floatingip_associate_v2.k8s_master[0]
module.compute.openstack_compute_floatingip_associate_v2.k8s_node[0]
module.compute.openstack_compute_floatingip_associate_v2.k8s_node[1]
module.compute.openstack_compute_floatingip_associate_v2.k8s_node[2]
module.compute.openstack_compute_instance_v2.k8s_master[0]
module.compute.openstack_compute_instance_v2.k8s_node[0]
module.compute.openstack_compute_instance_v2.k8s_node[1]
module.compute.openstack_compute_instance_v2.k8s_node[2]
module.compute.openstack_compute_keypair_v2.k8s
module.compute.openstack_compute_servergroup_v2.k8s_etcd[0]
module.compute.openstack_compute_servergroup_v2.k8s_master[0]
module.compute.openstack_compute_servergroup_v2.k8s_node[0]
module.compute.openstack_networking_secgroup_rule_v2.bastion[0]
module.compute.openstack_networking_secgroup_rule_v2.egress[0]
module.compute.openstack_networking_secgroup_rule_v2.k8s
module.compute.openstack_networking_secgroup_rule_v2.k8s_allowed_remote_ips[0]
module.compute.openstack_networking_secgroup_rule_v2.k8s_allowed_remote_ips[1]
module.compute.openstack_networking_secgroup_rule_v2.k8s_allowed_remote_ips[2]
module.compute.openstack_networking_secgroup_rule_v2.k8s_master[0]
module.compute.openstack_networking_secgroup_rule_v2.worker[0]
module.compute.openstack_networking_secgroup_rule_v2.worker[1]
module.compute.openstack_networking_secgroup_rule_v2.worker[2]
module.compute.openstack_networking_secgroup_rule_v2.worker[3]
module.compute.openstack_networking_secgroup_rule_v2.worker[4]
module.compute.openstack_networking_secgroup_v2.bastion[0]
module.compute.openstack_networking_secgroup_v2.k8s
module.compute.openstack_networking_secgroup_v2.k8s_master
module.compute.openstack_networking_secgroup_v2.worker
module.ips.null_resource.dummy_dependency
module.ips.openstack_networking_floatingip_v2.k8s_master[0]
module.ips.openstack_networking_floatingip_v2.k8s_node[0]
module.ips.openstack_networking_floatingip_v2.k8s_node[1]
module.ips.openstack_networking_floatingip_v2.k8s_node[2]
module.network.openstack_networking_network_v2.k8s[0]
module.network.openstack_networking_router_interface_v2.k8s[0]
module.network.openstack_networking_router_v2.k8s[0]
module.network.openstack_networking_subnet_v2.k8s[0]
$ terraform state mv 'module.compute.openstack_compute_floatingip_associate_v2.k8s_node[0]' 'module.compute.openstack_compute_floatingip_associate_v2.k8s_nodes["1"]'
Move "module.compute.openstack_compute_floatingip_associate_v2.k8s_node[0]" to "module.compute.openstack_compute_floatingip_associate_v2.k8s_nodes[\"1\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.compute.openstack_compute_floatingip_associate_v2.k8s_node[1]' 'module.compute.openstack_compute_floatingip_associate_v2.k8s_nodes["2"]'
Move "module.compute.openstack_compute_floatingip_associate_v2.k8s_node[1]" to "module.compute.openstack_compute_floatingip_associate_v2.k8s_nodes[\"2\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.compute.openstack_compute_floatingip_associate_v2.k8s_node[2]' 'module.compute.openstack_compute_floatingip_associate_v2.k8s_nodes["3"]'
Move "module.compute.openstack_compute_floatingip_associate_v2.k8s_node[2]" to "module.compute.openstack_compute_floatingip_associate_v2.k8s_nodes[\"3\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.compute.openstack_compute_instance_v2.k8s_node[0]' 'module.compute.openstack_compute_instance_v2.k8s_node["1"]'
Move "module.compute.openstack_compute_instance_v2.k8s_node[0]" to "module.compute.openstack_compute_instance_v2.k8s_node[\"1\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.compute.openstack_compute_instance_v2.k8s_node[1]' 'module.compute.openstack_compute_instance_v2.k8s_node["2"]'
Move "module.compute.openstack_compute_instance_v2.k8s_node[1]" to "module.compute.openstack_compute_instance_v2.k8s_node[\"2\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.compute.openstack_compute_instance_v2.k8s_node[2]' 'module.compute.openstack_compute_instance_v2.k8s_node["3"]'
Move "module.compute.openstack_compute_instance_v2.k8s_node[2]" to "module.compute.openstack_compute_instance_v2.k8s_node[\"3\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.ips.openstack_networking_floatingip_v2.k8s_node[0]' 'module.ips.openstack_networking_floatingip_v2.k8s_node["1"]'
Move "module.ips.openstack_networking_floatingip_v2.k8s_node[0]" to "module.ips.openstack_networking_floatingip_v2.k8s_node[\"1\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.ips.openstack_networking_floatingip_v2.k8s_node[1]' 'module.ips.openstack_networking_floatingip_v2.k8s_node["2"]'
Move "module.ips.openstack_networking_floatingip_v2.k8s_node[1]" to "module.ips.openstack_networking_floatingip_v2.k8s_node[\"2\"]"
Successfully moved 1 object(s).
$ terraform state mv 'module.ips.openstack_networking_floatingip_v2.k8s_node[2]' 'module.ips.openstack_networking_floatingip_v2.k8s_node["3"]'
Move "module.ips.openstack_networking_floatingip_v2.k8s_node[2]" to "module.ips.openstack_networking_floatingip_v2.k8s_node[\"3\"]"
Successfully moved 1 object(s).
```

Of course, for nodes without floating IPs those steps can be omitted.
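
After the moves, it is worth confirming that Terraform no longer plans to recreate anything (run from the inventory directory, as before):

```ShellSession
terraform -chdir="../../contrib/terraform/openstack" plan -var-file=cluster.tfvars
```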

contrib/terraform/openstack/hosts (symbolic link, new file)
@@ -0,0 +1 @@
../terraform.py

contrib/terraform/openstack/kubespray.tf (new file)
@@ -0,0 +1,130 @@
module "network" {
  source = "./modules/network"

  external_net = var.external_net
  network_name = var.network_name
  subnet_cidr = var.subnet_cidr
  cluster_name = var.cluster_name
  dns_nameservers = var.dns_nameservers
  network_dns_domain = var.network_dns_domain
  use_neutron = var.use_neutron
  port_security_enabled = var.port_security_enabled
  router_id = var.router_id
}

module "ips" {
  source = "./modules/ips"

  number_of_k8s_masters = var.number_of_k8s_masters
  number_of_k8s_masters_no_etcd = var.number_of_k8s_masters_no_etcd
  number_of_k8s_nodes = var.number_of_k8s_nodes
  floatingip_pool = var.floatingip_pool
  number_of_bastions = var.number_of_bastions
  external_net = var.external_net
  network_name = var.network_name
  router_id = module.network.router_id
  k8s_nodes = var.k8s_nodes
  k8s_masters = var.k8s_masters
  k8s_master_fips = var.k8s_master_fips
  bastion_fips = var.bastion_fips
  router_internal_port_id = module.network.router_internal_port_id
}

module "compute" {
  source = "./modules/compute"

  cluster_name = var.cluster_name
  az_list = var.az_list
  az_list_node = var.az_list_node
  number_of_k8s_masters = var.number_of_k8s_masters
  number_of_k8s_masters_no_etcd = var.number_of_k8s_masters_no_etcd
  number_of_etcd = var.number_of_etcd
  number_of_k8s_masters_no_floating_ip = var.number_of_k8s_masters_no_floating_ip
  number_of_k8s_masters_no_floating_ip_no_etcd = var.number_of_k8s_masters_no_floating_ip_no_etcd
  number_of_k8s_nodes = var.number_of_k8s_nodes
  number_of_bastions = var.number_of_bastions
  number_of_k8s_nodes_no_floating_ip = var.number_of_k8s_nodes_no_floating_ip
  number_of_gfs_nodes_no_floating_ip = var.number_of_gfs_nodes_no_floating_ip
  k8s_masters = var.k8s_masters
  k8s_nodes = var.k8s_nodes
  bastion_root_volume_size_in_gb = var.bastion_root_volume_size_in_gb
  etcd_root_volume_size_in_gb = var.etcd_root_volume_size_in_gb
  master_root_volume_size_in_gb = var.master_root_volume_size_in_gb
  node_root_volume_size_in_gb = var.node_root_volume_size_in_gb
  gfs_root_volume_size_in_gb = var.gfs_root_volume_size_in_gb
  gfs_volume_size_in_gb = var.gfs_volume_size_in_gb
  master_volume_type = var.master_volume_type
  node_volume_type = var.node_volume_type
  public_key_path = var.public_key_path
  image = var.image
  image_uuid = var.image_uuid
  image_gfs = var.image_gfs
  image_master = var.image_master
  image_master_uuid = var.image_master_uuid
  image_gfs_uuid = var.image_gfs_uuid
  ssh_user = var.ssh_user
  ssh_user_gfs = var.ssh_user_gfs
  flavor_k8s_master = var.flavor_k8s_master
  flavor_k8s_node = var.flavor_k8s_node
  flavor_etcd = var.flavor_etcd
  flavor_gfs_node = var.flavor_gfs_node
  network_name = var.network_name
  flavor_bastion = var.flavor_bastion
  k8s_master_fips = module.ips.k8s_master_fips
  k8s_master_no_etcd_fips = module.ips.k8s_master_no_etcd_fips
  k8s_masters_fips = module.ips.k8s_masters_fips
  k8s_node_fips = module.ips.k8s_node_fips
  k8s_nodes_fips = module.ips.k8s_nodes_fips
  bastion_fips = module.ips.bastion_fips
  bastion_allowed_remote_ips = var.bastion_allowed_remote_ips
  master_allowed_remote_ips = var.master_allowed_remote_ips
  k8s_allowed_remote_ips = var.k8s_allowed_remote_ips
  k8s_allowed_egress_ips = var.k8s_allowed_egress_ips
  supplementary_master_groups = var.supplementary_master_groups
  supplementary_node_groups = var.supplementary_node_groups
  master_allowed_ports = var.master_allowed_ports
  worker_allowed_ports = var.worker_allowed_ports
  bastion_allowed_ports = var.bastion_allowed_ports
  use_access_ip = var.use_access_ip
  master_server_group_policy = var.master_server_group_policy
  node_server_group_policy = var.node_server_group_policy
  etcd_server_group_policy = var.etcd_server_group_policy
  extra_sec_groups = var.extra_sec_groups
  extra_sec_groups_name = var.extra_sec_groups_name
  group_vars_path = var.group_vars_path
  port_security_enabled = var.port_security_enabled
  force_null_port_security = var.force_null_port_security
  network_router_id = module.network.router_id
  network_id = module.network.network_id
  use_existing_network = var.use_existing_network
  private_subnet_id = module.network.subnet_id
  additional_server_groups = var.additional_server_groups

  depends_on = [
    module.network.subnet_id
  ]
}

output "private_subnet_id" {
  value = module.network.subnet_id
}

output "floating_network_id" {
  value = var.external_net
}

output "router_id" {
  value = module.network.router_id
}

output "k8s_master_fips" {
  value = var.number_of_k8s_masters + var.number_of_k8s_masters_no_etcd > 0 ? concat(module.ips.k8s_master_fips, module.ips.k8s_master_no_etcd_fips) : [for key, value in module.ips.k8s_masters_fips : value.address]
}

output "k8s_node_fips" {
  value = var.number_of_k8s_nodes > 0 ? module.ips.k8s_node_fips : [for key, value in module.ips.k8s_nodes_fips : value.address]
}

output "bastion_fips" {
  value = module.ips.bastion_fips
}

contrib/terraform/openstack/modules/compute/ansible_bastion_template.txt (new file)
@@ -0,0 +1 @@
ansible_ssh_common_args: "-o ProxyCommand='ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -W %h:%p -q USER@BASTION_ADDRESS {% if ansible_ssh_private_key_file is defined %}-i {{ ansible_ssh_private_key_file }}{% endif %}'"

contrib/terraform/openstack/modules/compute/main.tf (new file)
@@ -0,0 +1,971 @@
|
||||
data "openstack_images_image_v2" "vm_image" {
|
||||
count = var.image_uuid == "" ? 1 : 0
|
||||
most_recent = true
|
||||
name = var.image
|
||||
}
|
||||
|
||||
data "openstack_images_image_v2" "gfs_image" {
|
||||
count = var.image_gfs_uuid == "" ? var.image_uuid == "" ? 1 : 0 : 0
|
||||
most_recent = true
|
||||
name = var.image_gfs == "" ? var.image : var.image_gfs
|
||||
}
|
||||
|
||||
data "openstack_images_image_v2" "image_master" {
|
||||
count = var.image_master_uuid == "" ? var.image_uuid == "" ? 1 : 0 : 0
|
||||
name = var.image_master == "" ? var.image : var.image_master
|
||||
}
|
||||
|
||||
data "cloudinit_config" "cloudinit" {
|
||||
part {
|
||||
content_type = "text/cloud-config"
|
||||
content = templatefile("${path.module}/templates/cloudinit.yaml.tmpl", {
|
||||
extra_partitions = [],
|
||||
netplan_critical_dhcp_interface = ""
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
data "openstack_networking_network_v2" "k8s_network" {
|
||||
count = var.use_existing_network ? 1 : 0
|
||||
name = var.network_name
|
||||
}
|
||||
|
||||
resource "openstack_compute_keypair_v2" "k8s" {
|
||||
name = "kubernetes-${var.cluster_name}"
|
||||
public_key = chomp(file(var.public_key_path))
|
||||
}
|
||||
|
||||
resource "openstack_networking_secgroup_v2" "k8s_master" {
|
||||
name = "${var.cluster_name}-k8s-master"
|
||||
description = "${var.cluster_name} - Kubernetes Master"
|
||||
delete_default_rules = true
|
||||
}
|
||||
|
||||
resource "openstack_networking_secgroup_v2" "k8s_master_extra" {
|
||||
count = "%{if var.extra_sec_groups}1%{else}0%{endif}"
|
||||
name = "${var.cluster_name}-k8s-master-${var.extra_sec_groups_name}"
|
||||
description = "${var.cluster_name} - Kubernetes Master nodes - rules not managed by terraform"
|
||||
delete_default_rules = true
|
||||
}
|
||||
|
||||
resource "openstack_networking_secgroup_rule_v2" "k8s_master" {
|
||||
count = length(var.master_allowed_remote_ips)
|
||||
direction = "ingress"
|
||||
ethertype = "IPv4"
|
||||
protocol = "tcp"
|
||||
port_range_min = "6443"
|
||||
port_range_max = "6443"
|
||||
remote_ip_prefix = var.master_allowed_remote_ips[count.index]
|
||||
security_group_id = openstack_networking_secgroup_v2.k8s_master.id
|
||||
}
|
||||
|
||||
resource "openstack_networking_secgroup_rule_v2" "k8s_master_ports" {
|
||||
count = length(var.master_allowed_ports)
|
||||
direction = "ingress"
|
||||
ethertype = "IPv4"
|
||||
protocol = lookup(var.master_allowed_ports[count.index], "protocol", "tcp")
|
||||
port_range_min = lookup(var.master_allowed_ports[count.index], "port_range_min")
|
||||
port_range_max = lookup(var.master_allowed_ports[count.index], "port_range_max")
|
||||
remote_ip_prefix = lookup(var.master_allowed_ports[count.index], "remote_ip_prefix", "0.0.0.0/0")
|
||||
security_group_id = openstack_networking_secgroup_v2.k8s_master.id
|
||||
}
|
||||
|
||||
resource "openstack_networking_secgroup_v2" "bastion" {
|
||||
name = "${var.cluster_name}-bastion"
|
||||
count = var.number_of_bastions != "" ? 1 : 0
|
||||
description = "${var.cluster_name} - Bastion Server"
|
||||
delete_default_rules = true
|
||||
}
|
||||
|
||||
resource "openstack_networking_secgroup_rule_v2" "bastion" {
|
||||
count = var.number_of_bastions != "" ? length(var.bastion_allowed_remote_ips) : 0
|
||||
direction = "ingress"
|
||||
ethertype = "IPv4"
|
||||
protocol = "tcp"
|
||||
port_range_min = "22"
|
||||
port_range_max = "22"
|
||||
remote_ip_prefix = var.bastion_allowed_remote_ips[count.index]
|
||||
security_group_id = openstack_networking_secgroup_v2.bastion[0].id
|
||||
}
|
||||
|
||||
resource "openstack_networking_secgroup_rule_v2" "k8s_bastion_ports" {
|
||||
count = length(var.bastion_allowed_ports)
|
||||
direction = "ingress"
|
||||
ethertype = "IPv4"
|
||||
protocol = lookup(var.bastion_allowed_ports[count.index], "protocol", "tcp")
|
||||
port_range_min = lookup(var.bastion_allowed_ports[count.index], "port_range_min")
|
||||
port_range_max = lookup(var.bastion_allowed_ports[count.index], "port_range_max")
|
||||
remote_ip_prefix = lookup(var.bastion_allowed_ports[count.index], "remote_ip_prefix", "0.0.0.0/0")
|
||||
security_group_id = openstack_networking_secgroup_v2.bastion[0].id
|
||||
}
|
||||
|
||||
resource "openstack_networking_secgroup_v2" "k8s" {
|
||||
name = "${var.cluster_name}-k8s"
|
||||
description = "${var.cluster_name} - Kubernetes"
|
||||
delete_default_rules = true
|
||||
}
|
||||
|
||||
resource "openstack_networking_secgroup_rule_v2" "k8s" {
|
||||
direction = "ingress"
|
||||
ethertype = "IPv4"
|
||||
remote_group_id = openstack_networking_secgroup_v2.k8s.id
|
||||
security_group_id = openstack_networking_secgroup_v2.k8s.id
|
||||
}
|
||||
|
||||
resource "openstack_networking_secgroup_rule_v2" "k8s_allowed_remote_ips" {
|
||||
count = length(var.k8s_allowed_remote_ips)
|
||||
direction = "ingress"
|
||||
ethertype = "IPv4"
|
||||
protocol = "tcp"
|
||||
port_range_min = "22"
|
||||
port_range_max = "22"
|
||||
remote_ip_prefix = var.k8s_allowed_remote_ips[count.index]
|
||||
security_group_id = openstack_networking_secgroup_v2.k8s.id
|
||||
}
|
||||
|
||||
resource "openstack_networking_secgroup_rule_v2" "egress" {
|
||||
count = length(var.k8s_allowed_egress_ips)
|
||||
direction = "egress"
|
||||
ethertype = "IPv4"
|
||||
remote_ip_prefix = var.k8s_allowed_egress_ips[count.index]
|
||||
security_group_id = openstack_networking_secgroup_v2.k8s.id
|
||||
}
|
||||
|
||||
resource "openstack_networking_secgroup_v2" "worker" {
|
||||
name = "${var.cluster_name}-k8s-worker"
|
||||
description = "${var.cluster_name} - Kubernetes worker nodes"
|
||||
delete_default_rules = true
|
||||
}
|
||||
|
||||
resource "openstack_networking_secgroup_v2" "worker_extra" {
|
||||
count = "%{if var.extra_sec_groups}1%{else}0%{endif}"
|
||||
name = "${var.cluster_name}-k8s-worker-${var.extra_sec_groups_name}"
|
||||
description = "${var.cluster_name} - Kubernetes worker nodes - rules not managed by terraform"
|
||||
delete_default_rules = true
|
||||
}
|
||||
|
||||
resource "openstack_networking_secgroup_rule_v2" "worker" {
|
||||
count = length(var.worker_allowed_ports)
|
||||
direction = "ingress"
|
||||
ethertype = "IPv4"
|
||||
protocol = lookup(var.worker_allowed_ports[count.index], "protocol", "tcp")
|
||||
port_range_min = lookup(var.worker_allowed_ports[count.index], "port_range_min")
|
||||
port_range_max = lookup(var.worker_allowed_ports[count.index], "port_range_max")
|
||||
remote_ip_prefix = lookup(var.worker_allowed_ports[count.index], "remote_ip_prefix", "0.0.0.0/0")
|
||||
security_group_id = openstack_networking_secgroup_v2.worker.id
|
||||
}
|
||||
|
||||
resource "openstack_compute_servergroup_v2" "k8s_master" {
|
||||
count = var.master_server_group_policy != "" ? 1 : 0
|
||||
name = "k8s-master-srvgrp"
|
||||
policies = [var.master_server_group_policy]
|
||||
}
|
||||
|
||||
resource "openstack_compute_servergroup_v2" "k8s_node" {
|
||||
count = var.node_server_group_policy != "" ? 1 : 0
|
||||
name = "k8s-node-srvgrp"
|
||||
policies = [var.node_server_group_policy]
|
||||
}
|
||||
|
||||
resource "openstack_compute_servergroup_v2" "k8s_etcd" {
|
||||
count = var.etcd_server_group_policy != "" ? 1 : 0
|
||||
name = "k8s-etcd-srvgrp"
|
||||
policies = [var.etcd_server_group_policy]
|
||||
}
|
||||
|
||||
resource "openstack_compute_servergroup_v2" "k8s_node_additional" {
|
||||
for_each = var.additional_server_groups
|
||||
name = "k8s-${each.key}-srvgrp"
|
||||
policies = [each.value.policy]
|
||||
}
|
||||
|
||||
locals {
|
||||
# master groups
|
||||
master_sec_groups = compact([
|
||||
openstack_networking_secgroup_v2.k8s_master.id,
|
||||
openstack_networking_secgroup_v2.k8s.id,
|
||||
var.extra_sec_groups ?openstack_networking_secgroup_v2.k8s_master_extra[0].id : "",
|
||||
])
|
||||
# worker groups
|
||||
worker_sec_groups = compact([
|
||||
openstack_networking_secgroup_v2.k8s.id,
|
||||
openstack_networking_secgroup_v2.worker.id,
|
||||
var.extra_sec_groups ? openstack_networking_secgroup_v2.worker_extra[0].id : "",
|
||||
])
|
||||
# bastion groups
|
||||
bastion_sec_groups = compact(concat([
|
||||
openstack_networking_secgroup_v2.k8s.id,
|
||||
openstack_networking_secgroup_v2.bastion[0].id,
|
||||
]))
|
||||
# etcd groups
|
||||
etcd_sec_groups = compact([openstack_networking_secgroup_v2.k8s.id])
|
||||
# glusterfs groups
|
||||
gfs_sec_groups = compact([openstack_networking_secgroup_v2.k8s.id])
|
||||
|
||||
# Image uuid
|
||||
image_to_use_node = var.image_uuid != "" ? var.image_uuid : data.openstack_images_image_v2.vm_image[0].id
|
||||
# Image_gfs uuid
|
||||
image_to_use_gfs = var.image_gfs_uuid != "" ? var.image_gfs_uuid : var.image_uuid != "" ? var.image_uuid : data.openstack_images_image_v2.gfs_image[0].id
|
||||
# image_master uuidimage_gfs_uuid
|
||||
image_to_use_master = var.image_master_uuid != "" ? var.image_master_uuid : var.image_uuid != "" ? var.image_uuid : data.openstack_images_image_v2.image_master[0].id
|
||||
|
||||
k8s_nodes_settings = {
|
||||
for name, node in var.k8s_nodes :
|
||||
name => {
|
||||
"use_local_disk" = (node.root_volume_size_in_gb != null ? node.root_volume_size_in_gb : var.node_root_volume_size_in_gb) == 0,
|
||||
"image_id" = node.image_id != null ? node.image_id : local.image_to_use_node,
|
||||
"volume_size" = node.root_volume_size_in_gb != null ? node.root_volume_size_in_gb : var.node_root_volume_size_in_gb,
|
||||
"volume_type" = node.volume_type != null ? node.volume_type : var.node_volume_type,
|
||||
"network_id" = node.network_id != null ? node.network_id : (var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id)
|
||||
"server_group" = node.server_group != null ? [openstack_compute_servergroup_v2.k8s_node_additional[node.server_group].id] : (var.node_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_node[0].id] : [])
|
||||
}
|
||||
}
|
||||
|
||||
k8s_masters_settings = {
|
||||
for name, node in var.k8s_masters :
|
||||
name => {
|
||||
"use_local_disk" = (node.root_volume_size_in_gb != null ? node.root_volume_size_in_gb : var.master_root_volume_size_in_gb) == 0,
|
||||
"image_id" = node.image_id != null ? node.image_id : local.image_to_use_master,
|
||||
"volume_size" = node.root_volume_size_in_gb != null ? node.root_volume_size_in_gb : var.master_root_volume_size_in_gb,
|
||||
"volume_type" = node.volume_type != null ? node.volume_type : var.master_volume_type,
|
||||
"network_id" = node.network_id != null ? node.network_id : (var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
resource "openstack_networking_port_v2" "bastion_port" {
|
||||
count = var.number_of_bastions
|
||||
name = "${var.cluster_name}-bastion-${count.index + 1}"
|
||||
network_id = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
|
||||
admin_state_up = "true"
|
||||
port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
|
||||
security_group_ids = var.port_security_enabled ? local.bastion_sec_groups : null
|
||||
no_security_groups = var.port_security_enabled ? null : false
|
||||
dynamic "fixed_ip" {
|
||||
for_each = var.private_subnet_id == "" ? [] : [true]
|
||||
content {
|
||||
subnet_id = var.private_subnet_id
|
||||
}
|
||||
}
|
||||
|
||||
depends_on = [
|
||||
var.network_router_id
|
||||
]
|
||||
}
|
||||
|
||||
resource "openstack_compute_instance_v2" "bastion" {
|
||||
name = "${var.cluster_name}-bastion-${count.index + 1}"
|
||||
count = var.number_of_bastions
|
||||
image_id = var.bastion_root_volume_size_in_gb == 0 ? local.image_to_use_node : null
|
||||
flavor_id = var.flavor_bastion
|
||||
key_pair = openstack_compute_keypair_v2.k8s.name
|
||||
user_data = data.cloudinit_config.cloudinit.rendered
|
||||
|
||||
dynamic "block_device" {
|
||||
for_each = var.bastion_root_volume_size_in_gb > 0 ? [local.image_to_use_node] : []
|
||||
content {
|
||||
uuid = local.image_to_use_node
|
||||
source_type = "image"
|
||||
volume_size = var.bastion_root_volume_size_in_gb
|
||||
boot_index = 0
|
||||
destination_type = "volume"
|
||||
delete_on_termination = true
|
||||
}
|
||||
}
|
||||
|
||||
network {
|
||||
port = element(openstack_networking_port_v2.bastion_port.*.id, count.index)
|
||||
}
|
||||
|
||||
metadata = {
|
||||
ssh_user = var.ssh_user
|
||||
kubespray_groups = "bastion"
|
||||
depends_on = var.network_router_id
|
||||
use_access_ip = var.use_access_ip
|
||||
}
|
||||
|
||||
provisioner "local-exec" {
|
||||
command = "sed -e s/USER/${var.ssh_user}/ -e s/BASTION_ADDRESS/${var.bastion_fips[0]}/ ${path.module}/ansible_bastion_template.txt > ${var.group_vars_path}/no_floating.yml"
|
||||
}
|
||||
}
|
||||
|
resource "openstack_networking_port_v2" "k8s_master_port" {
  count                 = var.number_of_k8s_masters
  name                  = "${var.cluster_name}-k8s-master-${count.index + 1}"
  network_id            = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
  admin_state_up        = "true"
  port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
  security_group_ids    = var.port_security_enabled ? local.master_sec_groups : null
  no_security_groups    = var.port_security_enabled ? null : false
  dynamic "fixed_ip" {
    for_each = var.private_subnet_id == "" ? [] : [true]
    content {
      subnet_id = var.private_subnet_id
    }
  }

  depends_on = [
    var.network_router_id
  ]
}

resource "openstack_compute_instance_v2" "k8s_master" {
  name              = "${var.cluster_name}-k8s-master-${count.index + 1}"
  count             = var.number_of_k8s_masters
  availability_zone = element(var.az_list, count.index)
  image_id          = var.master_root_volume_size_in_gb == 0 ? local.image_to_use_master : null
  flavor_id         = var.flavor_k8s_master
  key_pair          = openstack_compute_keypair_v2.k8s.name
  user_data         = data.cloudinit_config.cloudinit.rendered

  dynamic "block_device" {
    for_each = var.master_root_volume_size_in_gb > 0 ? [local.image_to_use_master] : []
    content {
      uuid                  = local.image_to_use_master
      source_type           = "image"
      volume_size           = var.master_root_volume_size_in_gb
      volume_type           = var.master_volume_type
      boot_index            = 0
      destination_type      = "volume"
      delete_on_termination = true
    }
  }

  network {
    port = element(openstack_networking_port_v2.k8s_master_port.*.id, count.index)
  }

  dynamic "scheduler_hints" {
    for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
    content {
      group = openstack_compute_servergroup_v2.k8s_master[0].id
    }
  }

  metadata = {
    ssh_user         = var.ssh_user
    kubespray_groups = "etcd,kube_control_plane,${var.supplementary_master_groups},k8s_cluster"
    depends_on       = var.network_router_id
    use_access_ip    = var.use_access_ip
  }

  provisioner "local-exec" {
    command = "sed -e s/USER/${var.ssh_user}/ -e s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_master_fips), 0)}/ ${path.module}/ansible_bastion_template.txt > ${var.group_vars_path}/no_floating.yml"
  }
}

# Map-driven control plane definition via var.k8s_masters; only active when all
# of the count-based master variables are zero.
resource "openstack_networking_port_v2" "k8s_masters_port" {
  for_each              = var.number_of_k8s_masters == 0 && var.number_of_k8s_masters_no_etcd == 0 && var.number_of_k8s_masters_no_floating_ip == 0 && var.number_of_k8s_masters_no_floating_ip_no_etcd == 0 ? var.k8s_masters : {}
  name                  = "${var.cluster_name}-k8s-${each.key}"
  network_id            = local.k8s_masters_settings[each.key].network_id
  admin_state_up        = "true"
  port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
  security_group_ids    = var.port_security_enabled ? local.master_sec_groups : null
  no_security_groups    = var.port_security_enabled ? null : false
  dynamic "fixed_ip" {
    for_each = var.private_subnet_id == "" ? [] : [true]
    content {
      subnet_id = var.private_subnet_id
    }
  }

  depends_on = [
    var.network_router_id
  ]
}

resource "openstack_compute_instance_v2" "k8s_masters" {
  for_each          = var.number_of_k8s_masters == 0 && var.number_of_k8s_masters_no_etcd == 0 && var.number_of_k8s_masters_no_floating_ip == 0 && var.number_of_k8s_masters_no_floating_ip_no_etcd == 0 ? var.k8s_masters : {}
  name              = "${var.cluster_name}-k8s-${each.key}"
  availability_zone = each.value.az
  image_id          = local.k8s_masters_settings[each.key].use_local_disk ? local.k8s_masters_settings[each.key].image_id : null
  flavor_id         = each.value.flavor
  key_pair          = openstack_compute_keypair_v2.k8s.name

  dynamic "block_device" {
    for_each = !local.k8s_masters_settings[each.key].use_local_disk ? [local.k8s_masters_settings[each.key].image_id] : []
    content {
      uuid                  = block_device.value
      source_type           = "image"
      volume_size           = local.k8s_masters_settings[each.key].volume_size
      volume_type           = local.k8s_masters_settings[each.key].volume_type
      boot_index            = 0
      destination_type      = "volume"
      delete_on_termination = true
    }
  }

  network {
    port = openstack_networking_port_v2.k8s_masters_port[each.key].id
  }

  dynamic "scheduler_hints" {
    for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
    content {
      group = openstack_compute_servergroup_v2.k8s_master[0].id
    }
  }

  metadata = {
    ssh_user         = var.ssh_user
    kubespray_groups = "%{if each.value.etcd == true}etcd,%{endif}kube_control_plane,${var.supplementary_master_groups},k8s_cluster%{if each.value.floating_ip == false},no_floating%{endif}"
    depends_on       = var.network_router_id
    use_access_ip    = var.use_access_ip
  }

  provisioner "local-exec" {
    command = "%{if each.value.floating_ip}sed s/USER/${var.ssh_user}/ ${path.module}/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(concat(var.bastion_fips, [for key, value in var.k8s_masters_fips : value.address]), 0)}/ > ${var.group_vars_path}/no_floating.yml%{else}true%{endif}"
  }
}

resource "openstack_networking_port_v2" "k8s_master_no_etcd_port" {
  count                 = var.number_of_k8s_masters_no_etcd
  name                  = "${var.cluster_name}-k8s-master-ne-${count.index + 1}"
  network_id            = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
  admin_state_up        = "true"
  port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
  security_group_ids    = var.port_security_enabled ? local.master_sec_groups : null
  no_security_groups    = var.port_security_enabled ? null : false
  dynamic "fixed_ip" {
    for_each = var.private_subnet_id == "" ? [] : [true]
    content {
      subnet_id = var.private_subnet_id
    }
  }

  depends_on = [
    var.network_router_id
  ]
}

resource "openstack_compute_instance_v2" "k8s_master_no_etcd" {
  name              = "${var.cluster_name}-k8s-master-ne-${count.index + 1}"
  count             = var.number_of_k8s_masters_no_etcd
  availability_zone = element(var.az_list, count.index)
  image_id          = var.master_root_volume_size_in_gb == 0 ? local.image_to_use_master : null
  flavor_id         = var.flavor_k8s_master
  key_pair          = openstack_compute_keypair_v2.k8s.name
  user_data         = data.cloudinit_config.cloudinit.rendered

  dynamic "block_device" {
    for_each = var.master_root_volume_size_in_gb > 0 ? [local.image_to_use_master] : []
    content {
      uuid                  = local.image_to_use_master
      source_type           = "image"
      volume_size           = var.master_root_volume_size_in_gb
      volume_type           = var.master_volume_type
      boot_index            = 0
      destination_type      = "volume"
      delete_on_termination = true
    }
  }

  network {
    port = element(openstack_networking_port_v2.k8s_master_no_etcd_port.*.id, count.index)
  }

  dynamic "scheduler_hints" {
    for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
    content {
      group = openstack_compute_servergroup_v2.k8s_master[0].id
    }
  }

  metadata = {
    ssh_user         = var.ssh_user
    kubespray_groups = "kube_control_plane,${var.supplementary_master_groups},k8s_cluster"
    depends_on       = var.network_router_id
    use_access_ip    = var.use_access_ip
  }

  provisioner "local-exec" {
    command = "sed -e s/USER/${var.ssh_user}/ -e s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_master_fips), 0)}/ ${path.module}/ansible_bastion_template.txt > ${var.group_vars_path}/no_floating.yml"
  }
}

resource "openstack_networking_port_v2" "etcd_port" {
  count                 = var.number_of_etcd
  name                  = "${var.cluster_name}-etcd-${count.index + 1}"
  network_id            = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
  admin_state_up        = "true"
  port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
  security_group_ids    = var.port_security_enabled ? local.etcd_sec_groups : null
  no_security_groups    = var.port_security_enabled ? null : false
  dynamic "fixed_ip" {
    for_each = var.private_subnet_id == "" ? [] : [true]
    content {
      subnet_id = var.private_subnet_id
    }
  }

  depends_on = [
    var.network_router_id
  ]
}

resource "openstack_compute_instance_v2" "etcd" {
  name              = "${var.cluster_name}-etcd-${count.index + 1}"
  count             = var.number_of_etcd
  availability_zone = element(var.az_list, count.index)
  image_id          = var.etcd_root_volume_size_in_gb == 0 ? local.image_to_use_master : null
  flavor_id         = var.flavor_etcd
  key_pair          = openstack_compute_keypair_v2.k8s.name
  user_data         = data.cloudinit_config.cloudinit.rendered

  dynamic "block_device" {
    for_each = var.etcd_root_volume_size_in_gb > 0 ? [local.image_to_use_master] : []
    content {
      uuid                  = local.image_to_use_master
      source_type           = "image"
      volume_size           = var.etcd_root_volume_size_in_gb
      boot_index            = 0
      destination_type      = "volume"
      delete_on_termination = true
    }
  }

  network {
    port = element(openstack_networking_port_v2.etcd_port.*.id, count.index)
  }

  dynamic "scheduler_hints" {
    for_each = var.etcd_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_etcd[0]] : []
    content {
      group = openstack_compute_servergroup_v2.k8s_etcd[0].id
    }
  }

  metadata = {
    ssh_user         = var.ssh_user
    kubespray_groups = "etcd,no_floating"
    depends_on       = var.network_router_id
    use_access_ip    = var.use_access_ip
  }
}

resource "openstack_networking_port_v2" "k8s_master_no_floating_ip_port" {
  count                 = var.number_of_k8s_masters_no_floating_ip
  name                  = "${var.cluster_name}-k8s-master-nf-${count.index + 1}"
  network_id            = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
  admin_state_up        = "true"
  port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
  security_group_ids    = var.port_security_enabled ? local.master_sec_groups : null
  no_security_groups    = var.port_security_enabled ? null : false
  dynamic "fixed_ip" {
    for_each = var.private_subnet_id == "" ? [] : [true]
    content {
      subnet_id = var.private_subnet_id
    }
  }

  depends_on = [
    var.network_router_id
  ]
}

resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip" {
  name              = "${var.cluster_name}-k8s-master-nf-${count.index + 1}"
  count             = var.number_of_k8s_masters_no_floating_ip
  availability_zone = element(var.az_list, count.index)
  image_id          = var.master_root_volume_size_in_gb == 0 ? local.image_to_use_master : null
  flavor_id         = var.flavor_k8s_master
  key_pair          = openstack_compute_keypair_v2.k8s.name

  dynamic "block_device" {
    for_each = var.master_root_volume_size_in_gb > 0 ? [local.image_to_use_master] : []
    content {
      uuid                  = local.image_to_use_master
      source_type           = "image"
      volume_size           = var.master_root_volume_size_in_gb
      volume_type           = var.master_volume_type
      boot_index            = 0
      destination_type      = "volume"
      delete_on_termination = true
    }
  }

  network {
    port = element(openstack_networking_port_v2.k8s_master_no_floating_ip_port.*.id, count.index)
  }

  dynamic "scheduler_hints" {
    for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
    content {
      group = openstack_compute_servergroup_v2.k8s_master[0].id
    }
  }

  metadata = {
    ssh_user         = var.ssh_user
    kubespray_groups = "etcd,kube_control_plane,${var.supplementary_master_groups},k8s_cluster,no_floating"
    depends_on       = var.network_router_id
    use_access_ip    = var.use_access_ip
  }
}

resource "openstack_networking_port_v2" "k8s_master_no_floating_ip_no_etcd_port" {
  count                 = var.number_of_k8s_masters_no_floating_ip_no_etcd
  name                  = "${var.cluster_name}-k8s-master-ne-nf-${count.index + 1}"
  network_id            = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
  admin_state_up        = "true"
  port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
  security_group_ids    = var.port_security_enabled ? local.master_sec_groups : null
  no_security_groups    = var.port_security_enabled ? null : false
  dynamic "fixed_ip" {
    for_each = var.private_subnet_id == "" ? [] : [true]
    content {
      subnet_id = var.private_subnet_id
    }
  }

  depends_on = [
    var.network_router_id
  ]
}

resource "openstack_compute_instance_v2" "k8s_master_no_floating_ip_no_etcd" {
  name              = "${var.cluster_name}-k8s-master-ne-nf-${count.index + 1}"
  count             = var.number_of_k8s_masters_no_floating_ip_no_etcd
  availability_zone = element(var.az_list, count.index)
  image_id          = var.master_root_volume_size_in_gb == 0 ? local.image_to_use_master : null
  flavor_id         = var.flavor_k8s_master
  key_pair          = openstack_compute_keypair_v2.k8s.name
  user_data         = data.cloudinit_config.cloudinit.rendered

  dynamic "block_device" {
    for_each = var.master_root_volume_size_in_gb > 0 ? [local.image_to_use_master] : []
    content {
      uuid                  = local.image_to_use_master
      source_type           = "image"
      volume_size           = var.master_root_volume_size_in_gb
      volume_type           = var.master_volume_type
      boot_index            = 0
      destination_type      = "volume"
      delete_on_termination = true
    }
  }

  network {
    port = element(openstack_networking_port_v2.k8s_master_no_floating_ip_no_etcd_port.*.id, count.index)
  }

  dynamic "scheduler_hints" {
    for_each = var.master_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_master[0]] : []
    content {
      group = openstack_compute_servergroup_v2.k8s_master[0].id
    }
  }

  metadata = {
    ssh_user         = var.ssh_user
    kubespray_groups = "kube_control_plane,${var.supplementary_master_groups},k8s_cluster,no_floating"
    depends_on       = var.network_router_id
    use_access_ip    = var.use_access_ip
  }
}

resource "openstack_networking_port_v2" "k8s_node_port" {
  count                 = var.number_of_k8s_nodes
  name                  = "${var.cluster_name}-k8s-node-${count.index + 1}"
  network_id            = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
  admin_state_up        = "true"
  port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
  security_group_ids    = var.port_security_enabled ? local.worker_sec_groups : null
  no_security_groups    = var.port_security_enabled ? null : false
  dynamic "fixed_ip" {
    for_each = var.private_subnet_id == "" ? [] : [true]
    content {
      subnet_id = var.private_subnet_id
    }
  }

  depends_on = [
    var.network_router_id
  ]
}

resource "openstack_compute_instance_v2" "k8s_node" {
  name              = "${var.cluster_name}-k8s-node-${count.index + 1}"
  count             = var.number_of_k8s_nodes
  availability_zone = element(var.az_list_node, count.index)
  image_id          = var.node_root_volume_size_in_gb == 0 ? local.image_to_use_node : null
  flavor_id         = var.flavor_k8s_node
  key_pair          = openstack_compute_keypair_v2.k8s.name
  user_data         = data.cloudinit_config.cloudinit.rendered

  dynamic "block_device" {
    for_each = var.node_root_volume_size_in_gb > 0 ? [local.image_to_use_node] : []
    content {
      uuid                  = local.image_to_use_node
      source_type           = "image"
      volume_size           = var.node_root_volume_size_in_gb
      volume_type           = var.node_volume_type
      boot_index            = 0
      destination_type      = "volume"
      delete_on_termination = true
    }
  }

  network {
    port = element(openstack_networking_port_v2.k8s_node_port.*.id, count.index)
  }

  dynamic "scheduler_hints" {
    for_each = var.node_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
    content {
      group = openstack_compute_servergroup_v2.k8s_node[0].id
    }
  }

  metadata = {
    ssh_user         = var.ssh_user
    kubespray_groups = "kube_node,k8s_cluster,${var.supplementary_node_groups}"
    depends_on       = var.network_router_id
    use_access_ip    = var.use_access_ip
  }

  provisioner "local-exec" {
    command = "sed -e s/USER/${var.ssh_user}/ -e s/BASTION_ADDRESS/${element(concat(var.bastion_fips, var.k8s_node_fips), 0)}/ ${path.module}/ansible_bastion_template.txt > ${var.group_vars_path}/no_floating.yml"
  }
}

resource "openstack_networking_port_v2" "k8s_node_no_floating_ip_port" {
  count                 = var.number_of_k8s_nodes_no_floating_ip
  name                  = "${var.cluster_name}-k8s-node-nf-${count.index + 1}"
  network_id            = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
  admin_state_up        = "true"
  port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
  security_group_ids    = var.port_security_enabled ? local.worker_sec_groups : null
  no_security_groups    = var.port_security_enabled ? null : false
  dynamic "fixed_ip" {
    for_each = var.private_subnet_id == "" ? [] : [true]
    content {
      subnet_id = var.private_subnet_id
    }
  }

  depends_on = [
    var.network_router_id
  ]
}

resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
  name              = "${var.cluster_name}-k8s-node-nf-${count.index + 1}"
  count             = var.number_of_k8s_nodes_no_floating_ip
  availability_zone = element(var.az_list_node, count.index)
  image_id          = var.node_root_volume_size_in_gb == 0 ? local.image_to_use_node : null
  flavor_id         = var.flavor_k8s_node
  key_pair          = openstack_compute_keypair_v2.k8s.name
  user_data         = data.cloudinit_config.cloudinit.rendered

  dynamic "block_device" {
    for_each = var.node_root_volume_size_in_gb > 0 ? [local.image_to_use_node] : []
    content {
      uuid                  = local.image_to_use_node
      source_type           = "image"
      volume_size           = var.node_root_volume_size_in_gb
      volume_type           = var.node_volume_type
      boot_index            = 0
      destination_type      = "volume"
      delete_on_termination = true
    }
  }

  network {
    port = element(openstack_networking_port_v2.k8s_node_no_floating_ip_port.*.id, count.index)
  }

  dynamic "scheduler_hints" {
    for_each = var.node_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_node[0].id] : []
    content {
      group = scheduler_hints.value
    }
  }

  metadata = {
    ssh_user         = var.ssh_user
    kubespray_groups = "kube_node,k8s_cluster,no_floating,${var.supplementary_node_groups}"
    depends_on       = var.network_router_id
    use_access_ip    = var.use_access_ip
  }
}

# Map-driven worker definition via var.k8s_nodes; only active when both
# count-based node variables are zero.
resource "openstack_networking_port_v2" "k8s_nodes_port" {
  for_each              = var.number_of_k8s_nodes == 0 && var.number_of_k8s_nodes_no_floating_ip == 0 ? var.k8s_nodes : {}
  name                  = "${var.cluster_name}-k8s-node-${each.key}"
  network_id            = local.k8s_nodes_settings[each.key].network_id
  admin_state_up        = "true"
  port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
  security_group_ids    = var.port_security_enabled ? local.worker_sec_groups : null
  no_security_groups    = var.port_security_enabled ? null : false
  dynamic "fixed_ip" {
    for_each = var.private_subnet_id == "" ? [] : [true]
    content {
      subnet_id = var.private_subnet_id
    }
  }

  depends_on = [
    var.network_router_id
  ]
}

resource "openstack_compute_instance_v2" "k8s_nodes" {
  for_each          = var.number_of_k8s_nodes == 0 && var.number_of_k8s_nodes_no_floating_ip == 0 ? var.k8s_nodes : {}
  name              = "${var.cluster_name}-k8s-node-${each.key}"
  availability_zone = each.value.az
  image_id          = local.k8s_nodes_settings[each.key].use_local_disk ? local.k8s_nodes_settings[each.key].image_id : null
  flavor_id         = each.value.flavor
  key_pair          = openstack_compute_keypair_v2.k8s.name
  user_data = each.value.cloudinit != null ? templatefile("${path.module}/templates/cloudinit.yaml.tmpl", {
    extra_partitions                = each.value.cloudinit.extra_partitions,
    netplan_critical_dhcp_interface = each.value.cloudinit.netplan_critical_dhcp_interface,
  }) : data.cloudinit_config.cloudinit.rendered

  dynamic "block_device" {
    for_each = !local.k8s_nodes_settings[each.key].use_local_disk ? [local.k8s_nodes_settings[each.key].image_id] : []
    content {
      uuid                  = block_device.value
      source_type           = "image"
      volume_size           = local.k8s_nodes_settings[each.key].volume_size
      volume_type           = local.k8s_nodes_settings[each.key].volume_type
      boot_index            = 0
      destination_type      = "volume"
      delete_on_termination = true
    }
  }

  network {
    port = openstack_networking_port_v2.k8s_nodes_port[each.key].id
  }

  dynamic "scheduler_hints" {
    for_each = local.k8s_nodes_settings[each.key].server_group
    content {
      group = scheduler_hints.value
    }
  }

  metadata = {
    ssh_user         = var.ssh_user
    kubespray_groups = "kube_node,k8s_cluster,%{if each.value.floating_ip == false}no_floating,%{endif}${var.supplementary_node_groups}${each.value.extra_groups != null ? ",${each.value.extra_groups}" : ""}"
    depends_on       = var.network_router_id
    use_access_ip    = var.use_access_ip
  }

  provisioner "local-exec" {
    command = "%{if each.value.floating_ip}sed -e s/USER/${var.ssh_user}/ -e s/BASTION_ADDRESS/${element(concat(var.bastion_fips, [for key, value in var.k8s_nodes_fips : value.address]), 0)}/ ${path.module}/ansible_bastion_template.txt > ${var.group_vars_path}/no_floating.yml%{else}true%{endif}"
  }
}

resource "openstack_networking_port_v2" "glusterfs_node_no_floating_ip_port" {
  count                 = var.number_of_gfs_nodes_no_floating_ip
  name                  = "${var.cluster_name}-gfs-node-nf-${count.index + 1}"
  network_id            = var.use_existing_network ? data.openstack_networking_network_v2.k8s_network[0].id : var.network_id
  admin_state_up        = "true"
  port_security_enabled = var.force_null_port_security ? null : var.port_security_enabled
  security_group_ids    = var.port_security_enabled ? local.gfs_sec_groups : null
  no_security_groups    = var.port_security_enabled ? null : false
  dynamic "fixed_ip" {
    for_each = var.private_subnet_id == "" ? [] : [true]
    content {
      subnet_id = var.private_subnet_id
    }
  }

  depends_on = [
    var.network_router_id
  ]
}

resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
  name              = "${var.cluster_name}-gfs-node-nf-${count.index + 1}"
  count             = var.number_of_gfs_nodes_no_floating_ip
  availability_zone = element(var.az_list, count.index)
  image_name        = var.gfs_root_volume_size_in_gb == 0 ? local.image_to_use_gfs : null
  flavor_id         = var.flavor_gfs_node
  key_pair          = openstack_compute_keypair_v2.k8s.name

  dynamic "block_device" {
    for_each = var.gfs_root_volume_size_in_gb > 0 ? [local.image_to_use_gfs] : []
    content {
      uuid                  = local.image_to_use_gfs
      source_type           = "image"
      volume_size           = var.gfs_root_volume_size_in_gb
      boot_index            = 0
      destination_type      = "volume"
      delete_on_termination = true
    }
  }

  network {
    port = element(openstack_networking_port_v2.glusterfs_node_no_floating_ip_port.*.id, count.index)
  }

  dynamic "scheduler_hints" {
    for_each = var.node_server_group_policy != "" ? [openstack_compute_servergroup_v2.k8s_node[0]] : []
    content {
      group = openstack_compute_servergroup_v2.k8s_node[0].id
    }
  }

  metadata = {
    ssh_user         = var.ssh_user_gfs
    kubespray_groups = "gfs-cluster,network-storage,no_floating"
    depends_on       = var.network_router_id
    use_access_ip    = var.use_access_ip
  }
}

resource "openstack_networking_floatingip_associate_v2" "bastion" {
  count       = var.number_of_bastions
  floating_ip = var.bastion_fips[count.index]
  port_id     = element(openstack_networking_port_v2.bastion_port.*.id, count.index)
}

resource "openstack_networking_floatingip_associate_v2" "k8s_master" {
  count       = var.number_of_k8s_masters
  floating_ip = var.k8s_master_fips[count.index]
  port_id     = element(openstack_networking_port_v2.k8s_master_port.*.id, count.index)
}

resource "openstack_networking_floatingip_associate_v2" "k8s_masters" {
  for_each    = var.number_of_k8s_masters == 0 && var.number_of_k8s_masters_no_etcd == 0 && var.number_of_k8s_masters_no_floating_ip == 0 && var.number_of_k8s_masters_no_floating_ip_no_etcd == 0 ? { for key, value in var.k8s_masters : key => value if value.floating_ip } : {}
  floating_ip = var.k8s_masters_fips[each.key].address
  port_id     = openstack_networking_port_v2.k8s_masters_port[each.key].id
}

resource "openstack_networking_floatingip_associate_v2" "k8s_master_no_etcd" {
  count       = var.master_root_volume_size_in_gb == 0 ? var.number_of_k8s_masters_no_etcd : 0
  floating_ip = var.k8s_master_no_etcd_fips[count.index]
  port_id     = element(openstack_networking_port_v2.k8s_master_no_etcd_port.*.id, count.index)
}

resource "openstack_networking_floatingip_associate_v2" "k8s_node" {
  count       = var.node_root_volume_size_in_gb == 0 ? var.number_of_k8s_nodes : 0
  floating_ip = var.k8s_node_fips[count.index]
  port_id     = element(openstack_networking_port_v2.k8s_node_port.*.id, count.index)
}

resource "openstack_networking_floatingip_associate_v2" "k8s_nodes" {
  for_each    = var.number_of_k8s_nodes == 0 && var.number_of_k8s_nodes_no_floating_ip == 0 ? { for key, value in var.k8s_nodes : key => value if value.floating_ip } : {}
  floating_ip = var.k8s_nodes_fips[each.key].address
  port_id     = openstack_networking_port_v2.k8s_nodes_port[each.key].id
}

resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {
  name        = "${var.cluster_name}-glusterfs_volume-${count.index + 1}"
  count       = var.gfs_root_volume_size_in_gb == 0 ? var.number_of_gfs_nodes_no_floating_ip : 0
  description = "Non-ephemeral volume for GlusterFS"
  size        = var.gfs_volume_size_in_gb
}

resource "openstack_compute_volume_attach_v2" "glusterfs_volume" {
  count       = var.gfs_root_volume_size_in_gb == 0 ? var.number_of_gfs_nodes_no_floating_ip : 0
  instance_id = element(openstack_compute_instance_v2.glusterfs_node_no_floating_ip.*.id, count.index)
  volume_id   = element(openstack_blockstorage_volume_v2.glusterfs_volume.*.id, count.index)
}

54
contrib/terraform/openstack/modules/compute/templates/cloudinit.yaml.tmpl
Normal file
@@ -0,0 +1,54 @@
%{~ if length(extra_partitions) > 0 || netplan_critical_dhcp_interface != "" }
#cloud-config
bootcmd:
%{~ for idx, partition in extra_partitions }
  - [ cloud-init-per, once, move-second-header, sgdisk, --move-second-header, ${partition.volume_path} ]
  - [ cloud-init-per, once, create-part-${idx}, parted, --script, ${partition.volume_path}, 'mkpart extended ext4 ${partition.partition_start} ${partition.partition_end}' ]
  - [ cloud-init-per, once, create-fs-part-${idx}, mkfs.ext4, ${partition.partition_path} ]
%{~ endfor }

runcmd:
%{~ if netplan_critical_dhcp_interface != "" }
  - netplan apply
%{~ endif }
%{~ for idx, partition in extra_partitions }
  - mkdir -p ${partition.mount_path}
  - chown nobody:nogroup ${partition.mount_path}
  - mount ${partition.partition_path} ${partition.mount_path}
%{~ endfor ~}

%{~ if netplan_critical_dhcp_interface != "" }
write_files:
  - path: /etc/netplan/90-critical-dhcp.yaml
    content: |
      network:
        version: 2
        ethernets:
          ${ netplan_critical_dhcp_interface }:
            dhcp4: true
            critical: true
%{~ endif }

mounts:
%{~ for idx, partition in extra_partitions }
  - [ ${partition.partition_path}, ${partition.mount_path} ]
%{~ endfor }
%{~ else ~}
# yamllint disable rule:comments
#cloud-config
## in some cases novnc console access is required
## it requires ssh password to be set
#ssh_pwauth: yes
#chpasswd:
#  list: |
#    root:secret
#  expire: False

## in some cases direct root ssh access via ssh key is required
#disable_root: false

## in some cases additional CA certs are required
#ca-certs:
#  trusted: |
#    -----BEGIN CERTIFICATE-----
%{~ endif }
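
The template above is rendered only for `k8s_nodes` entries that define a `cloudinit` object. A minimal sketch of such an entry in `cluster.tfvars` (the disk paths, partition bounds, and mount point are illustrative, not defaults):

```hcl
k8s_nodes = {
  "node-1" = {
    "az"          = "nova"
    "flavor"      = "<UUID>"
    "floating_ip" = true
    "cloudinit" = {
      # hypothetical second disk, partitioned, formatted and mounted at first boot
      "extra_partitions" = [{
        "volume_path"     = "/dev/vdb"
        "partition_path"  = "/dev/vdb1"
        "partition_start" = "0%"
        "partition_end"   = "100%"
        "mount_path"      = "/var/lib/extra"
      }]
      # leave empty unless one interface must be marked critical in netplan
      "netplan_critical_dhcp_interface" = ""
    }
  }
}
```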

235
contrib/terraform/openstack/modules/compute/variables.tf
Normal file
@@ -0,0 +1,235 @@
variable "cluster_name" {}

variable "az_list" {
  type = list(string)
}

variable "az_list_node" {
  type = list(string)
}

variable "number_of_k8s_masters" {}

variable "number_of_k8s_masters_no_etcd" {}

variable "number_of_etcd" {}

variable "number_of_k8s_masters_no_floating_ip" {}

variable "number_of_k8s_masters_no_floating_ip_no_etcd" {}

variable "number_of_k8s_nodes" {}

variable "number_of_k8s_nodes_no_floating_ip" {}

variable "number_of_bastions" {}

variable "number_of_gfs_nodes_no_floating_ip" {}

variable "bastion_root_volume_size_in_gb" {}

variable "etcd_root_volume_size_in_gb" {}

variable "master_root_volume_size_in_gb" {}

variable "node_root_volume_size_in_gb" {}

variable "gfs_root_volume_size_in_gb" {}

variable "gfs_volume_size_in_gb" {}

variable "master_volume_type" {}

variable "node_volume_type" {}

variable "public_key_path" {}

variable "image" {}

variable "image_gfs" {}

variable "ssh_user" {}

variable "ssh_user_gfs" {}

variable "flavor_k8s_master" {}

variable "flavor_k8s_node" {}

variable "flavor_etcd" {}

variable "flavor_gfs_node" {}

variable "network_name" {}

variable "flavor_bastion" {}

variable "network_id" {
  default = ""
}

variable "use_existing_network" {
  type = bool
}

variable "network_router_id" {
  default = ""
}

variable "k8s_master_fips" {
  type = list
}

variable "k8s_master_no_etcd_fips" {
  type = list
}

variable "k8s_node_fips" {
  type = list
}

variable "k8s_masters_fips" {
  type = map
}

variable "k8s_nodes_fips" {
  type = map
}

variable "bastion_fips" {
  type = list
}

variable "bastion_allowed_remote_ips" {
  type = list
}

variable "master_allowed_remote_ips" {
  type = list
}

variable "k8s_allowed_remote_ips" {
  type = list
}

variable "k8s_allowed_egress_ips" {
  type = list
}

variable "k8s_masters" {
  type = map(object({
    az                     = string
    flavor                 = string
    floating_ip            = bool
    etcd                   = bool
    image_id               = optional(string)
    root_volume_size_in_gb = optional(number)
    volume_type            = optional(string)
    network_id             = optional(string)
  }))
}

variable "k8s_nodes" {
  type = map(object({
    az                       = string
    flavor                   = string
    floating_ip              = bool
    extra_groups             = optional(string)
    image_id                 = optional(string)
    root_volume_size_in_gb   = optional(number)
    volume_type              = optional(string)
    network_id               = optional(string)
    additional_server_groups = optional(list(string))
    server_group             = optional(string)
    cloudinit = optional(object({
      extra_partitions = optional(list(object({
        volume_path     = string
        partition_path  = string
        partition_start = string
        partition_end   = string
        mount_path      = string
      })), [])
      netplan_critical_dhcp_interface = optional(string, "")
    }))
  }))
}

variable "additional_server_groups" {
  type = map(object({
    policy = string
  }))
}

variable "supplementary_master_groups" {
  default = ""
}

variable "supplementary_node_groups" {
  default = ""
}

variable "master_allowed_ports" {
  type = list
}

variable "worker_allowed_ports" {
  type = list
}

variable "bastion_allowed_ports" {
  type = list
}

variable "use_access_ip" {}

variable "master_server_group_policy" {
  type = string
}

variable "node_server_group_policy" {
  type = string
}

variable "etcd_server_group_policy" {
  type = string
}

variable "extra_sec_groups" {
  type = bool
}

variable "extra_sec_groups_name" {
  type = string
}

variable "image_uuid" {
  type = string
}

variable "image_gfs_uuid" {
  type = string
}

variable "image_master" {
  type = string
}

variable "image_master_uuid" {
  type = string
}

variable "group_vars_path" {
  type = string
}

variable "port_security_enabled" {
  type = bool
}

variable "force_null_port_security" {
  type = bool
}

variable "private_subnet_id" {
  type = string
}
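
As a usage sketch for the two server-group mechanisms declared above (group name and node key are illustrative): `additional_server_groups` creates extra named server groups, and a node opts into one via its `server_group` attribute; nodes without it fall back to the cluster-wide `node_server_group_policy` group.

```hcl
additional_server_groups = {
  "anti-affinity-workers" = {
    "policy" = "anti-affinity"
  }
}

k8s_nodes = {
  "node-1" = {
    "az"           = "nova"
    "flavor"       = "<UUID>"
    "floating_ip"  = true
    "server_group" = "anti-affinity-workers"
  }
}
```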

8
contrib/terraform/openstack/modules/compute/versions.tf
Normal file
@@ -0,0 +1,8 @@
terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
  required_version = ">= 1.3.0"
}

46
contrib/terraform/openstack/modules/ips/main.tf
Normal file
@@ -0,0 +1,46 @@
# Dummy resource whose only purpose is to carry a dependency: floating IPs are
# created only after the router interface exists, and re-created if the router changes.
resource "null_resource" "dummy_dependency" {
  triggers = {
    dependency_id = var.router_id
  }
  depends_on = [
    var.router_internal_port_id
  ]
}

# If user specifies pre-existing IPs to use in k8s_master_fips, do not create new ones.
resource "openstack_networking_floatingip_v2" "k8s_master" {
  count      = length(var.k8s_master_fips) > 0 ? 0 : var.number_of_k8s_masters
  pool       = var.floatingip_pool
  depends_on = [null_resource.dummy_dependency]
}

resource "openstack_networking_floatingip_v2" "k8s_masters" {
  for_each   = var.number_of_k8s_masters == 0 && var.number_of_k8s_masters_no_etcd == 0 ? { for key, value in var.k8s_masters : key => value if value.floating_ip } : {}
  pool       = var.floatingip_pool
  depends_on = [null_resource.dummy_dependency]
}

# If user specifies pre-existing IPs to use in k8s_master_fips, do not create new ones.
resource "openstack_networking_floatingip_v2" "k8s_master_no_etcd" {
  count      = length(var.k8s_master_fips) > 0 ? 0 : var.number_of_k8s_masters_no_etcd
  pool       = var.floatingip_pool
  depends_on = [null_resource.dummy_dependency]
}

resource "openstack_networking_floatingip_v2" "k8s_node" {
  count      = var.number_of_k8s_nodes
  pool       = var.floatingip_pool
  depends_on = [null_resource.dummy_dependency]
}

resource "openstack_networking_floatingip_v2" "bastion" {
  count      = length(var.bastion_fips) > 0 ? 0 : var.number_of_bastions
  pool       = var.floatingip_pool
  depends_on = [null_resource.dummy_dependency]
}

resource "openstack_networking_floatingip_v2" "k8s_nodes" {
  for_each   = var.number_of_k8s_nodes == 0 ? { for key, value in var.k8s_nodes : key => value if value.floating_ip } : {}
  pool       = var.floatingip_pool
  depends_on = [null_resource.dummy_dependency]
}
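
For orientation, a module invocation consistent with the inputs above might look like the following in the top-level configuration (the wiring shown is a sketch, not the verbatim root module):

```hcl
module "ips" {
  source = "./modules/ips"

  number_of_k8s_masters         = var.number_of_k8s_masters
  number_of_k8s_masters_no_etcd = var.number_of_k8s_masters_no_etcd
  number_of_k8s_nodes           = var.number_of_k8s_nodes
  number_of_bastions            = var.number_of_bastions
  floatingip_pool               = var.floatingip_pool
  external_net                  = var.external_net
  network_name                  = var.network_name
  router_id                     = module.network.router_id
  k8s_masters                   = var.k8s_masters
  k8s_nodes                     = var.k8s_nodes
  k8s_master_fips               = var.k8s_master_fips
  bastion_fips                  = var.bastion_fips
  router_internal_port_id       = module.network.router_internal_port_id
}
```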

25
contrib/terraform/openstack/modules/ips/outputs.tf
Normal file
@@ -0,0 +1,25 @@
# If k8s_master_fips is already defined as input, keep the same value since new FIPs have not been created.
output "k8s_master_fips" {
  value = length(var.k8s_master_fips) > 0 ? var.k8s_master_fips : openstack_networking_floatingip_v2.k8s_master[*].address
}

output "k8s_masters_fips" {
  value = openstack_networking_floatingip_v2.k8s_masters
}

# If k8s_master_fips is already defined as input, keep the same value since new FIPs have not been created.
output "k8s_master_no_etcd_fips" {
  value = length(var.k8s_master_fips) > 0 ? var.k8s_master_fips : openstack_networking_floatingip_v2.k8s_master_no_etcd[*].address
}

output "k8s_node_fips" {
  value = openstack_networking_floatingip_v2.k8s_node[*].address
}

output "k8s_nodes_fips" {
  value = openstack_networking_floatingip_v2.k8s_nodes
}

output "bastion_fips" {
  value = length(var.bastion_fips) > 0 ? var.bastion_fips : openstack_networking_floatingip_v2.bastion[*].address
}

27
contrib/terraform/openstack/modules/ips/variables.tf
Normal file
@@ -0,0 +1,27 @@
variable "number_of_k8s_masters" {}

variable "number_of_k8s_masters_no_etcd" {}

variable "number_of_k8s_nodes" {}

variable "floatingip_pool" {}

variable "number_of_bastions" {}

variable "external_net" {}

variable "network_name" {}

variable "router_id" {
  default = ""
}

variable "k8s_masters" {}

variable "k8s_nodes" {}

variable "k8s_master_fips" {}

variable "bastion_fips" {}

variable "router_internal_port_id" {}

11
contrib/terraform/openstack/modules/ips/versions.tf
Normal file
@@ -0,0 +1,11 @@
terraform {
  required_providers {
    null = {
      source = "hashicorp/null"
    }
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
  required_version = ">= 0.12.26"
}

34
contrib/terraform/openstack/modules/network/main.tf
Normal file
@@ -0,0 +1,34 @@
resource "openstack_networking_router_v2" "k8s" {
  name                = "${var.cluster_name}-router"
  count               = var.use_neutron == 1 && var.router_id == null ? 1 : 0
  admin_state_up      = "true"
  external_network_id = var.external_net
}

data "openstack_networking_router_v2" "k8s" {
  router_id = var.router_id
  count     = var.use_neutron == 1 && var.router_id != null ? 1 : 0
}

resource "openstack_networking_network_v2" "k8s" {
  name                  = var.network_name
  count                 = var.use_neutron
  dns_domain            = var.network_dns_domain != null ? var.network_dns_domain : null
  admin_state_up        = "true"
  port_security_enabled = var.port_security_enabled
}

resource "openstack_networking_subnet_v2" "k8s" {
  name            = "${var.cluster_name}-internal-network"
  count           = var.use_neutron
  network_id      = openstack_networking_network_v2.k8s[count.index].id
  cidr            = var.subnet_cidr
  ip_version      = 4
  dns_nameservers = var.dns_nameservers
}

resource "openstack_networking_router_interface_v2" "k8s" {
  count     = var.use_neutron
  router_id = "%{if openstack_networking_router_v2.k8s != []}${openstack_networking_router_v2.k8s[count.index].id}%{else}${var.router_id}%{endif}"
  subnet_id = openstack_networking_subnet_v2.k8s[count.index].id
}

15
contrib/terraform/openstack/modules/network/outputs.tf
Normal file
@@ -0,0 +1,15 @@
output "router_id" {
  value = "%{if var.use_neutron == 1} ${var.router_id == null ? element(concat(openstack_networking_router_v2.k8s.*.id, [""]), 0) : var.router_id} %{else} %{endif}"
}

output "network_id" {
  value = element(concat(openstack_networking_network_v2.k8s.*.id, [""]), 0)
}

output "router_internal_port_id" {
  value = element(concat(openstack_networking_router_interface_v2.k8s.*.id, [""]), 0)
}

output "subnet_id" {
  value = element(concat(openstack_networking_subnet_v2.k8s.*.id, [""]), 0)
}

21
contrib/terraform/openstack/modules/network/variables.tf
Normal file
@@ -0,0 +1,21 @@
variable "external_net" {}

variable "network_name" {}

variable "network_dns_domain" {}

variable "cluster_name" {}

variable "dns_nameservers" {
  type = list
}

variable "port_security_enabled" {
  type = bool
}

variable "subnet_cidr" {}

variable "use_neutron" {}

variable "router_id" {}

8
contrib/terraform/openstack/modules/network/versions.tf
Normal file
@@ -0,0 +1,8 @@
terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
  required_version = ">= 0.12.26"
}

89
contrib/terraform/openstack/sample-inventory/cluster.tfvars
Normal file
@@ -0,0 +1,89 @@
# your Kubernetes cluster name here
cluster_name = "i-didnt-read-the-docs"

# list of availability zones available in your OpenStack cluster
#az_list = ["nova"]

# SSH key to use for access to nodes
public_key_path = "~/.ssh/id_rsa.pub"

# image to use for bastion, masters, standalone etcd instances, and nodes
image = "<image name>"

# user on the node (ex. core on Container Linux, ubuntu on Ubuntu, etc.)
ssh_user = "<cloud-provisioned user>"

# 0|1 bastion nodes
number_of_bastions = 0

#flavor_bastion = "<UUID>"

# standalone etcds
number_of_etcd = 0

# masters
number_of_k8s_masters = 1

number_of_k8s_masters_no_etcd = 0

number_of_k8s_masters_no_floating_ip = 0

number_of_k8s_masters_no_floating_ip_no_etcd = 0

flavor_k8s_master = "<UUID>"

k8s_masters = {
  # "master-1" = {
  #   "az"          = "nova"
  #   "flavor"      = "<UUID>"
  #   "floating_ip" = true
  #   "etcd"        = true
  # },
  # "master-2" = {
  #   "az"          = "nova"
  #   "flavor"      = "<UUID>"
  #   "floating_ip" = false
  #   "etcd"        = true
  # },
  # "master-3" = {
  #   "az"          = "nova"
  #   "flavor"      = "<UUID>"
  #   "floating_ip" = true
  #   "etcd"        = true
  # },
}


# nodes
number_of_k8s_nodes = 2

number_of_k8s_nodes_no_floating_ip = 4

#flavor_k8s_node = "<UUID>"

# GlusterFS
# either 0 or more than one
#number_of_gfs_nodes_no_floating_ip = 0
#gfs_volume_size_in_gb = 150
# Container Linux does not support GlusterFS
#image_gfs = "<image name>"
# May be different from other nodes
#ssh_user_gfs = "ubuntu"
#flavor_gfs_node = "<UUID>"

# networking
network_name = "<network>"

# Use an existing network with the name of network_name. Set to false to create a network with the name of network_name.
# use_existing_network = true

external_net = "<UUID>"

subnet_cidr = "<cidr>"

floatingip_pool = "<pool>"

bastion_allowed_remote_ips = ["0.0.0.0/0"]

# Force port security to be null. Some cloud providers do not allow port security to be set.
# force_null_port_security = false
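
The sample above declares worker nodes only through the count variables. A map-based alternative (only honored when both node counts are 0; values illustrative) would mirror the `k8s_masters` block:

```hcl
#number_of_k8s_nodes                = 0
#number_of_k8s_nodes_no_floating_ip = 0
#k8s_nodes = {
#  "node-1" = {
#    "az"          = "nova"
#    "flavor"      = "<UUID>"
#    "floating_ip" = true
#  },
#}
```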

1
contrib/terraform/openstack/sample-inventory/group_vars
Symbolic link
@@ -0,0 +1 @@
../../../../inventory/sample/group_vars

342
contrib/terraform/openstack/variables.tf
Normal file
@@ -0,0 +1,342 @@
variable "cluster_name" {
  default = "example"
}

variable "az_list" {
  description = "List of Availability Zones to use for masters in your OpenStack cluster"
  type        = list(string)
  default     = ["nova"]
}

variable "az_list_node" {
  description = "List of Availability Zones to use for nodes in your OpenStack cluster"
  type        = list(string)
  default     = ["nova"]
}

variable "number_of_bastions" {
  default = 1
}

variable "number_of_k8s_masters" {
  default = 2
}

variable "number_of_k8s_masters_no_etcd" {
  default = 2
}

variable "number_of_etcd" {
  default = 2
}

variable "number_of_k8s_masters_no_floating_ip" {
  default = 2
}

variable "number_of_k8s_masters_no_floating_ip_no_etcd" {
  default = 2
}

variable "number_of_k8s_nodes" {
  default = 1
}

variable "number_of_k8s_nodes_no_floating_ip" {
  default = 1
}

variable "number_of_gfs_nodes_no_floating_ip" {
  default = 0
}

variable "bastion_root_volume_size_in_gb" {
  default = 0
}

variable "etcd_root_volume_size_in_gb" {
  default = 0
}

variable "master_root_volume_size_in_gb" {
  default = 0
}

variable "node_root_volume_size_in_gb" {
  default = 0
}

variable "gfs_root_volume_size_in_gb" {
  default = 0
}

variable "gfs_volume_size_in_gb" {
  default = 75
}

variable "master_volume_type" {
  default = "Default"
}

variable "node_volume_type" {
  default = "Default"
}

variable "public_key_path" {
  description = "The path of the ssh pub key"
  default     = "~/.ssh/id_rsa.pub"
}

variable "image" {
  description = "the image to use"
  default     = ""
}

variable "image_gfs" {
  description = "Glance image to use for GlusterFS"
  default     = ""
}

variable "ssh_user" {
  description = "used to fill out tags for ansible inventory"
  default     = "ubuntu"
}

variable "ssh_user_gfs" {
  description = "used to fill out tags for ansible inventory"
  default     = "ubuntu"
}

variable "flavor_bastion" {
  description = "Use 'openstack flavor list' command to see what your OpenStack instance uses for IDs"
  default     = 3
}

variable "flavor_k8s_master" {
  description = "Use 'openstack flavor list' command to see what your OpenStack instance uses for IDs"
  default     = 3
}

variable "flavor_k8s_node" {
  description = "Use 'openstack flavor list' command to see what your OpenStack instance uses for IDs"
  default     = 3
}

variable "flavor_etcd" {
  description = "Use 'openstack flavor list' command to see what your OpenStack instance uses for IDs"
  default     = 3
}

variable "flavor_gfs_node" {
  description = "Use 'openstack flavor list' command to see what your OpenStack instance uses for IDs"
  default     = 3
}

variable "network_name" {
  description = "name of the internal network to use"
  default     = "internal"
}

variable "use_existing_network" {
  description = "Use an existing network"
  type        = bool
  default     = "false"
}

variable "network_dns_domain" {
  description = "dns_domain for the internal network"
  type        = string
  default     = null
}

variable "use_neutron" {
  description = "Use neutron"
  default     = 1
}

variable "port_security_enabled" {
  description = "Enable port security on the internal network"
  type        = bool
  default     = "true"
}

variable "force_null_port_security" {
  description = "Force port security to be null. Some providers do not allow setting port security"
  type        = bool
  default     = "false"
}

variable "subnet_cidr" {
  description = "Subnet CIDR block."
  type        = string
  default     = "10.0.0.0/24"
}

variable "dns_nameservers" {
  description = "An array of DNS name server names used by hosts in this subnet."
  type        = list(string)
  default     = []
}

variable "k8s_master_fips" {
  description = "specific pre-existing floating IPs to use for master nodes"
  type        = list(string)
  default     = []
}

variable "bastion_fips" {
  description = "specific pre-existing floating IPs to use for bastion node"
  type        = list(string)
  default     = []
}

variable "floatingip_pool" {
  description = "name of the floating ip pool to use"
  default     = "external"
}

variable "wait_for_floatingip" {
  description = "Terraform will poll the instance until the floating IP has been associated."
  default     = "false"
}

variable "external_net" {
  description = "uuid of the external/public network"
}

variable "supplementary_master_groups" {
  description = "supplementary kubespray ansible groups for masters, such as kube_node"
  default     = ""
}

variable "supplementary_node_groups" {
  description = "supplementary kubespray ansible groups for worker nodes, such as kube_ingress"
  default     = ""
}

variable "bastion_allowed_remote_ips" {
  description = "An array of CIDRs allowed to SSH to hosts"
  type        = list(string)
  default     = ["0.0.0.0/0"]
}

variable "master_allowed_remote_ips" {
  description = "An array of CIDRs allowed to access API of masters"
  type        = list(string)
  default     = ["0.0.0.0/0"]
}

variable "k8s_allowed_remote_ips" {
  description = "An array of CIDRs allowed to SSH to hosts"
  type        = list(string)
  default     = []
}

variable "k8s_allowed_egress_ips" {
  description = "An array of CIDRs allowed for egress traffic"
  type        = list(string)
  default     = ["0.0.0.0/0"]
}

variable "master_allowed_ports" {
  type = list(any)

  default = []
}

variable "worker_allowed_ports" {
  type = list(any)

  default = [
    {
      "protocol"         = "tcp"
      "port_range_min"   = 30000
      "port_range_max"   = 32767
      "remote_ip_prefix" = "0.0.0.0/0"
    },
  ]
}

variable "bastion_allowed_ports" {
  type = list(any)

  default = []
}

variable "use_access_ip" {
  default = 1
}

variable "master_server_group_policy" {
  description = "desired server group policy, e.g. anti-affinity"
  default     = ""
}

variable "node_server_group_policy" {
  description = "desired server group policy, e.g. anti-affinity"
  default     = ""
}

variable "etcd_server_group_policy" {
  description = "desired server group policy, e.g. anti-affinity"
  default     = ""
}

variable "router_id" {
  description = "uuid of an externally defined router to use"
  default     = null
}

variable "router_internal_port_id" {
  description = "uuid of the port connecting our router to our network"
  default     = null
}

variable "k8s_masters" {
  default = {}
}

variable "k8s_nodes" {
  default = {}
}

variable "additional_server_groups" {
  default = {}
  type = map(object({
    policy = string
  }))
}

variable "extra_sec_groups" {
  default = false
}

variable "extra_sec_groups_name" {
  default = "custom"
}

variable "image_uuid" {
  description = "uuid of image inside openstack to use"
  default     = ""
}

variable "image_gfs_uuid" {
  description = "uuid of image to be used on gluster fs nodes. If empty defaults to image_uuid"
  default     = ""
}

variable "image_master" {
  description = "uuid of image inside openstack to use"
  default     = ""
}

variable "image_master_uuid" {
  description = "uuid of image to be used on master nodes. If empty defaults to image_uuid"
  default     = ""
}

variable "group_vars_path" {
  description = "path to the inventory group vars directory"
  type        = string
  default     = "./group_vars"
}
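
`master_allowed_ports` and `bastion_allowed_ports` accept entries of the same shape as the `worker_allowed_ports` default above; for example (illustrative, opening the Kubernetes API port to everyone):

```hcl
master_allowed_ports = [
  {
    "protocol"         = "tcp"
    "port_range_min"   = 6443
    "port_range_max"   = 6443
    "remote_ip_prefix" = "0.0.0.0/0"
  },
]
```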

9
contrib/terraform/openstack/versions.tf
Normal file
@@ -0,0 +1,9 @@
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 1.17"
    }
  }
  required_version = ">= 1.3.0"
}