This commit is contained in:
havelight-ee
2023-05-30 14:44:26 +09:00
parent 9a3174deef
commit 4c32a7239d
2598 changed files with 164595 additions and 487 deletions


@@ -0,0 +1,92 @@
# Deploying a Kubespray Kubernetes Cluster with GlusterFS
You can either deploy using Ansible on its own, by supplying your own inventory file, or use Terraform to create the VMs and then provide a dynamic inventory to Ansible. The following two sections are self-contained; you don't need to go through one to use the other. If you want to provision with Terraform, you can skip the **Using an Ansible inventory** section, and if you want to provision with a pre-built Ansible inventory, you can skip the **Using Terraform and Ansible** section.
## Using an Ansible inventory
In the same directory as this README you should find a file named `inventory.example`, which contains an example setup. Please note that, in addition to the Kubernetes nodes/masters, we define a set of machines for GlusterFS and add them to the group `[gfs-cluster]`, which in turn is added to the larger `[network-storage]` group as a child group, as in the sketch below.
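A minimal sketch of that group layout (hostnames and IP addresses here are placeholders, not part of the example file; `inventory.example` remains the authoritative reference):
```ini
# Hypothetical GlusterFS hosts -- adapt names and addresses to your environment
[gfs-cluster]
gfs-node-1 ansible_ssh_host=192.168.0.150 ip=192.168.0.150
gfs-node-2 ansible_ssh_host=192.168.0.151 ip=192.168.0.151

[network-storage:children]
gfs-cluster
```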
Change `inventory.example` to reflect your local setup (adding or removing machines and setting the appropriate IP addresses), and save it as `inventory/sample/k8s_gfs_inventory`. Make sure that the settings in `inventory/sample/group_vars/all.yml` make sense for your deployment. Then change to the kubespray root folder and execute (assuming the machines are all running Ubuntu):
```shell
ansible-playbook -b --become-user=root -i inventory/sample/k8s_gfs_inventory --user=ubuntu ./cluster.yml
```
This will provision your Kubernetes cluster. Then, to provision and configure the GlusterFS cluster, from the same directory execute:
```shell
ansible-playbook -b --become-user=root -i inventory/sample/k8s_gfs_inventory --user=ubuntu ./contrib/network-storage/glusterfs/glusterfs.yml
```
If your machines are not running Ubuntu, change `--user=ubuntu` to the correct user. Alternatively, if your Kubernetes machines use one OS and your GlusterFS machines another, you can instead specify the `ansible_ssh_user=<correct-user>` variable in the inventory file that you just created, for each machine/VM:
```shell
k8s-master-1 ansible_ssh_host=192.168.0.147 ip=192.168.0.147 ansible_ssh_user=core
k8s-master-node-1 ansible_ssh_host=192.168.0.148 ip=192.168.0.148 ansible_ssh_user=core
k8s-master-node-2 ansible_ssh_host=192.168.0.146 ip=192.168.0.146 ansible_ssh_user=core
```
## Using Terraform and Ansible
The first step is to fill in a `my-kubespray-gluster-cluster.tfvars` file with the desired specification for your cluster. An example with all required variables would look like:
```ini
cluster_name = "cluster1"
number_of_k8s_masters = "1"
number_of_k8s_masters_no_floating_ip = "2"
number_of_k8s_nodes_no_floating_ip = "0"
number_of_k8s_nodes = "0"
public_key_path = "~/.ssh/my-desired-key.pub"
image = "Ubuntu 16.04"
ssh_user = "ubuntu"
flavor_k8s_node = "node-flavor-id-in-your-openstack"
flavor_k8s_master = "master-flavor-id-in-your-openstack"
network_name = "k8s-network"
floatingip_pool = "net_external"
# GlusterFS variables
flavor_gfs_node = "gluster-flavor-id-in-your-openstack"
image_gfs = "Ubuntu 16.04"
number_of_gfs_nodes_no_floating_ip = "3"
gfs_volume_size_in_gb = "50"
ssh_user_gfs = "ubuntu"
```
As explained in the general Terraform/OpenStack guide, you need to source your OpenStack credentials file, add your SSH key to the ssh-agent, and set up environment variables for Terraform:
```shell
$ source ~/.stackrc
$ eval $(ssh-agent -s)
$ ssh-add ~/.ssh/my-desired-key
$ echo Setting up Terraform creds && \
export TF_VAR_username=${OS_USERNAME} && \
export TF_VAR_password=${OS_PASSWORD} && \
export TF_VAR_tenant=${OS_TENANT_NAME} && \
export TF_VAR_auth_url=${OS_AUTH_URL}
```
Then, from the kubespray directory (the root of the Git checkout), issue the following Terraform command to create the VMs for the cluster:
```shell
terraform apply -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kubespray-gluster-cluster.tfvars contrib/terraform/openstack
```
This will create both your Kubernetes and Gluster VMs. Make sure that the Ansible file `contrib/terraform/openstack/group_vars/all.yml` includes any Ansible variables that you want to set (for instance, the type of machine used for bootstrapping).
Then, provision your Kubernetes (kubespray) cluster with the following ansible call:
```shell
ansible-playbook -b --become-user=root -i contrib/terraform/openstack/hosts ./cluster.yml
```
Finally, provision the glusterfs nodes and add the Persistent Volume setup for GlusterFS in Kubernetes through the following ansible call:
```shell
ansible-playbook -b --become-user=root -i contrib/terraform/openstack/hosts ./contrib/network-storage/glusterfs/glusterfs.yml
```
If you need to destroy the cluster, you can run:
```shell
terraform destroy -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kubespray-gluster-cluster.tfvars contrib/terraform/openstack
```


@@ -0,0 +1,24 @@
---
- hosts: gfs-cluster
  gather_facts: false
  vars:
    ansible_ssh_pipelining: false
  roles:
    - { role: bootstrap-os, tags: bootstrap-os }

- hosts: all
  gather_facts: true

- hosts: gfs-cluster
  vars:
    ansible_ssh_pipelining: true
  roles:
    - { role: glusterfs/server }

- hosts: k8s_cluster
  roles:
    - { role: glusterfs/client }

- hosts: kube_control_plane[0]
  roles:
    - { role: kubernetes-pv }


@@ -0,0 +1,140 @@
---
## Directory where the binaries will be installed
bin_dir: /usr/local/bin
## The access_ip variable is used to define how other nodes should access
## the node. This is used in flannel to allow other flannel nodes to see
## this node, for example. The access_ip is really useful in AWS and Google
## environments, where the nodes are accessed remotely by the "public" IP
## but don't know about that address themselves.
# access_ip: 1.1.1.1
## External LB example config
## apiserver_loadbalancer_domain_name: "elb.some.domain"
# loadbalancer_apiserver:
# address: 1.2.3.4
# port: 1234
## Internal loadbalancers for apiservers
# loadbalancer_apiserver_localhost: true
# valid options are "nginx" or "haproxy"
# loadbalancer_apiserver_type: nginx # valid values "nginx" or "haproxy"
## If cilium is going to be used in strict mode, we can use the
## localhost connection and not use the external LB. If this parameter is
## not specified, the first node to connect to kubeapi will be used.
# use_localhost_as_kubeapi_loadbalancer: true
## The local loadbalancer should use this port,
## and it must be set to port 6443
loadbalancer_apiserver_port: 6443
## If loadbalancer_apiserver_healthcheck_port variable defined, enables proxy liveness check for nginx.
loadbalancer_apiserver_healthcheck_port: 8081
### OTHER OPTIONAL VARIABLES
## By default, Kubespray collects nameservers on the host and adds them to nameserverentries.
## If true, Kubespray does not include host nameservers in nameserverentries in the dns_late stage. However, it still uses the host nameservers to make sure the cluster installs safely in the dns_early stage.
## Use this option with caution; you may need to define your DNS servers, otherwise outbound queries such as www.google.com may fail.
# disable_host_nameservers: false
## Upstream dns servers
# upstream_dns_servers:
# - 8.8.8.8
# - 8.8.4.4
## There are some changes specific to the cloud providers
## for instance we need to encapsulate packets with some network plugins
## If set the possible values are either 'gce', 'aws', 'azure', 'openstack', 'vsphere', 'oci', or 'external'
## When openstack is used make sure to source in the openstack credentials
## like you would do when using openstack-client before starting the playbook.
# cloud_provider:
## When cloud_provider is set to 'external', you can set the cloud controller to deploy
## Supported cloud controllers are: 'openstack', 'vsphere' and 'hcloud'
## When openstack or vsphere are used make sure to source in the required fields
# external_cloud_provider:
## Set these proxy values in order to update package manager and docker daemon to use proxies
# http_proxy: ""
# https_proxy: ""
## Refer to roles/kubespray-defaults/defaults/main.yml before modifying no_proxy
# no_proxy: ""
## Some problems may occur when downloading files over https proxy due to ansible bug
## https://github.com/ansible/ansible/issues/32750. Set this variable to False to disable
## SSL validation of get_url module. Note that kubespray will still be performing checksum validation.
# download_validate_certs: False
## If you need to exclude other resources from the proxy in addition to the cluster nodes, add them here.
# additional_no_proxy: ""
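## For example (hypothetical values, not defaults), you could exclude an internal
## domain and an extra CIDR in addition to the cluster nodes:
# additional_no_proxy: ".internal.example.com,10.0.0.0/8"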
## If you need to disable proxying of os package repositories but are still behind an http_proxy set
## skip_http_proxy_on_os_packages to true
## This will cause kubespray not to set proxy environment in /etc/yum.conf for centos and in /etc/apt/apt.conf for debian/ubuntu
## Special information for debian/ubuntu - you have to set the no_proxy variable, then apt packages will be installed from the source of your choice
# skip_http_proxy_on_os_packages: false
## Since workers are included in the no_proxy variable by default, docker engine will be restarted on all nodes (all
## pods will restart) when adding or removing workers. To override this behaviour and only include master nodes in the
## no_proxy variable, set the variable below to true:
no_proxy_exclude_workers: false
## Certificate Management
## This setting determines whether certs are generated via scripts.
## Choose 'none' if you provide your own certificates.
## Options are "script" and "none"
# cert_management: script
## Set to true to allow pre-checks to fail and continue deployment
# ignore_assert_errors: false
## The read-only port for the Kubelet to serve on with no authentication/authorization. Uncomment to enable.
# kube_read_only_port: 10255
## Set to true to download and cache container images
# download_container: true
## Deploy container engine
# Set false if you want to deploy container engine manually.
# deploy_container_engine: true
## Red Hat Enterprise Linux subscription registration
## Add either RHEL subscription Username/Password or Organization ID/Activation Key combination
## Update RHEL subscription purpose usage, role and SLA if necessary
# rh_subscription_username: ""
# rh_subscription_password: ""
# rh_subscription_org_id: ""
# rh_subscription_activation_key: ""
# rh_subscription_usage: "Development"
# rh_subscription_role: "Red Hat Enterprise Server"
# rh_subscription_sla: "Self-Support"
## Check if access_ip responds to ping. Set false if your firewall blocks ICMP.
# ping_access_ip: true
# sysctl_file_path to add sysctl conf to
# sysctl_file_path: "/etc/sysctl.d/99-sysctl.conf"
## Variables for webhook token auth https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication
kube_webhook_token_auth: false
kube_webhook_token_auth_url_skip_tls_verify: false
# kube_webhook_token_auth_url: https://...
## base64-encoded string of the webhook's CA certificate
# kube_webhook_token_auth_ca_data: "LS0t..."
## NTP Settings
# Start the ntpd or chrony service and enable it at system boot.
ntp_enabled: false
ntp_manage_config: false
ntp_servers:
- "0.pool.ntp.org iburst"
- "1.pool.ntp.org iburst"
- "2.pool.ntp.org iburst"
- "3.pool.ntp.org iburst"
## Used to control no_log attribute
unsafe_show_logs: false


@@ -0,0 +1,9 @@
## To use AWS EBS CSI Driver to provision volumes, uncomment the first value
## and configure the parameters below
# aws_ebs_csi_enabled: true
# aws_ebs_csi_enable_volume_scheduling: true
# aws_ebs_csi_enable_volume_snapshot: false
# aws_ebs_csi_enable_volume_resizing: false
# aws_ebs_csi_controller_replicas: 1
# aws_ebs_csi_plugin_image_tag: latest
# aws_ebs_csi_extra_volume_tags: "Owner=owner,Team=team,Environment=environment"


@@ -0,0 +1,40 @@
## When azure is used, you need to also set the following variables.
## see docs/azure.md for details on how to get these values
# azure_cloud:
# azure_tenant_id:
# azure_subscription_id:
# azure_aad_client_id:
# azure_aad_client_secret:
# azure_resource_group:
# azure_location:
# azure_subnet_name:
# azure_security_group_name:
# azure_security_group_resource_group:
# azure_vnet_name:
# azure_vnet_resource_group:
# azure_route_table_name:
# azure_route_table_resource_group:
# supported values are 'standard' or 'vmss'
# azure_vmtype: standard
## Azure Disk CSI credentials and parameters
## see docs/azure-csi.md for details on how to get these values
# azure_csi_tenant_id:
# azure_csi_subscription_id:
# azure_csi_aad_client_id:
# azure_csi_aad_client_secret:
# azure_csi_location:
# azure_csi_resource_group:
# azure_csi_vnet_name:
# azure_csi_vnet_resource_group:
# azure_csi_subnet_name:
# azure_csi_security_group_name:
# azure_csi_use_instance_metadata:
# azure_csi_tags: "Owner=owner,Team=team,Environment=environment"
## To enable Azure Disk CSI, uncomment below
# azure_csi_enabled: true
# azure_csi_controller_replicas: 1
# azure_csi_plugin_image_tag: latest


@@ -0,0 +1,50 @@
---
# Please see roles/container-engine/containerd/defaults/main.yml for more configuration options
# containerd_storage_dir: "/var/lib/containerd"
# containerd_state_dir: "/run/containerd"
# containerd_oom_score: 0
# containerd_default_runtime: "runc"
# containerd_snapshotter: "native"
# containerd_runc_runtime:
# name: runc
# type: "io.containerd.runc.v2"
# engine: ""
# root: ""
# containerd_additional_runtimes:
# Example for Kata Containers as additional runtime:
# - name: kata
# type: "io.containerd.kata.v2"
# engine: ""
# root: ""
# containerd_grpc_max_recv_message_size: 16777216
# containerd_grpc_max_send_message_size: 16777216
# containerd_debug_level: "info"
# containerd_metrics_address: ""
# containerd_metrics_grpc_histogram: false
## An obvious use case is allowing insecure-registry access to self hosted registries.
## Can be an IP address or a domain name.
## For example, define mirror.registry.io or 172.19.16.11:5000
## Set "name": "url". An insecure url must start with http://
## The port number is also needed if the default HTTPS port is not used.
# containerd_insecure_registries:
# "localhost": "http://127.0.0.1"
# "172.19.16.11:5000": "http://172.19.16.11:5000"
# containerd_registries:
# "docker.io": "https://registry-1.docker.io"
# containerd_max_container_log_line_size: -1
# containerd_registry_auth:
# - registry: 10.0.0.2:5000
# username: user
# password: pass


@@ -0,0 +1,2 @@
## Whether CoreOS should auto-upgrade; default is true
# coreos_auto_upgrade: true


@@ -0,0 +1,6 @@
# crio_insecure_registries:
# - 10.0.0.2:5000
# crio_registry_auth:
# - registry: 10.0.0.2:5000
# username: user
# password: pass


@@ -0,0 +1,59 @@
---
## Uncomment this if you want to force overlay/overlay2 as docker storage driver
## Please note that overlay2 is only supported on newer kernels
# docker_storage_options: -s overlay2
## Enable docker_container_storage_setup; it will configure the devicemapper driver on CentOS 7 or RedHat 7.
docker_container_storage_setup: false
## A disk path must be defined for docker_container_storage_setup_devs;
## otherwise docker-storage-setup will be executed incorrectly.
# docker_container_storage_setup_devs: /dev/vdb
## Uncomment this if you want to change the Docker Cgroup driver (native.cgroupdriver)
## Valid options are systemd or cgroupfs, default is systemd
# docker_cgroup_driver: systemd
## Only set this if you have more than 3 nameservers:
## If true, Kubespray will only use the first 3; otherwise it will fail
docker_dns_servers_strict: false
# Path used to store Docker data
docker_daemon_graph: "/var/lib/docker"
## Used to set docker daemon iptables options to true
docker_iptables_enabled: "false"
# Docker log options
# Rotate container stderr/stdout logs at 50m and keep last 5
docker_log_opts: "--log-opt max-size=50m --log-opt max-file=5"
# define docker bin_dir
docker_bin_dir: "/usr/bin"
# keep docker packages after installation; speeds up repeated ansible provisioning runs when '1'
# kubespray deletes the docker package on each run, so caching the package makes sense
docker_rpm_keepcache: 1
## An obvious use case is allowing insecure-registry access to self hosted registries.
## Can be an IP address or a domain name.
## For example, define 172.19.16.11 or mirror.registry.io
# docker_insecure_registries:
# - mirror.registry.io
# - 172.19.16.11
## Add other registries, for example a China registry mirror.
# docker_registry_mirrors:
# - https://registry.docker-cn.com
# - https://mirror.aliyuncs.com
## If non-empty will override default system MountFlags value.
## This option takes a mount propagation flag: shared, slave
## or private, which control whether mounts in the file system
## namespace set up for docker will receive or propagate mounts
## and unmounts. Leave empty for system default
# docker_mount_flags:
## A string of extra options to pass to the docker daemon.
## This string should be exactly as you wish it to appear.
# docker_options: ""


@@ -0,0 +1,16 @@
---
## Directory where etcd data is stored
etcd_data_dir: /var/lib/etcd
## Container runtime
## docker for docker, crio for cri-o and containerd for containerd.
## Additionally you can set this to kubeadm if you want to install etcd using kubeadm
## Kubeadm etcd deployment is experimental and only available for new deployments
## If this is not set, container manager will be inherited from the Kubespray defaults
## and not from k8s_cluster/k8s-cluster.yml, which might not be what you want.
## This also makes it possible to use a different container manager for the etcd nodes.
# container_manager: containerd
## Settings for etcd deployment type
# Set this to docker if you are using container_manager: docker
etcd_deployment_type: host


@@ -0,0 +1,10 @@
## GCP compute Persistent Disk CSI Driver credentials and parameters
## See docs/gcp-pd-csi.md for information about the implementation
## Specify the path to the file containing the service account credentials
# gcp_pd_csi_sa_cred_file: "/my/safe/credentials/directory/cloud-sa.json"
## To enable GCP Persistent Disk CSI driver, uncomment below
# gcp_pd_csi_enabled: true
# gcp_pd_csi_controller_replicas: 1
# gcp_pd_csi_driver_image_tag: "v0.7.0-gke.0"


@@ -0,0 +1,14 @@
## Values for the external Hcloud Cloud Controller
# external_hcloud_cloud:
# hcloud_api_token: ""
# token_secret_name: hcloud
# with_networks: false # Use the hcloud controller-manager with networks support https://github.com/hetznercloud/hcloud-cloud-controller-manager#networks-support
# service_account_name: cloud-controller-manager
#
# controller_image_tag: "latest"
# ## A dictionary of extra arguments to add to the hcloud cloud controller manager daemonset
# ## Format:
# ## external_hcloud_cloud.controller_extra_args:
# ## arg1: "value1"
# ## arg2: "value2"
# controller_extra_args: {}


@@ -0,0 +1,28 @@
## When Oracle Cloud Infrastructure is used, set these variables
# oci_private_key:
# oci_region_id:
# oci_tenancy_id:
# oci_user_id:
# oci_user_fingerprint:
# oci_compartment_id:
# oci_vnc_id:
# oci_subnet1_id:
# oci_subnet2_id:
## Override these default/optional behaviors if you wish
# oci_security_list_management: All
## If you would like the controller to manage specific lists per subnet, provide a mapping of subnet OCIDs to security list OCIDs. Below are examples.
# oci_security_lists:
# ocid1.subnet.oc1.phx.aaaaaaaasa53hlkzk6nzksqfccegk2qnkxmphkblst3riclzs4rhwg7rg57q: ocid1.securitylist.oc1.iad.aaaaaaaaqti5jsfvyw6ejahh7r4okb2xbtuiuguswhs746mtahn72r7adt7q
# ocid1.subnet.oc1.phx.aaaaaaaahuxrgvs65iwdz7ekwgg3l5gyah7ww5klkwjcso74u3e4i64hvtvq: ocid1.securitylist.oc1.iad.aaaaaaaaqti5jsfvyw6ejahh7r4okb2xbtuiuguswhs746mtahn72r7adt7q
## If oci_use_instance_principals is true, you do not need to set the region, tenancy, user, key, passphrase, or fingerprint
# oci_use_instance_principals: false
# oci_cloud_controller_version: 0.6.0
## If you would like to control OCI query rate limits for the controller
# oci_rate_limit:
# rate_limit_qps_read:
# rate_limit_qps_write:
# rate_limit_bucket_read:
# rate_limit_bucket_write:
## Other optional variables
# oci_cloud_controller_pull_source: (default iad.ocir.io/oracle/cloud-provider-oci)
# oci_cloud_controller_pull_secret: (name of pull secret to use if you define your own mirror above)


@@ -0,0 +1,103 @@
---
## Global Offline settings
### Private Container Image Registry
# registry_host: "myprivateregistry.com"
# files_repo: "http://myprivatehttpd"
### If using CentOS, RedHat, AlmaLinux or Fedora
# yum_repo: "http://myinternalyumrepo"
### If using Debian
# debian_repo: "http://myinternaldebianrepo"
### If using Ubuntu
# ubuntu_repo: "http://myinternalubunturepo"
## Container Registry overrides
# kube_image_repo: "{{ registry_host }}"
# gcr_image_repo: "{{ registry_host }}"
# github_image_repo: "{{ registry_host }}"
# docker_image_repo: "{{ registry_host }}"
# quay_image_repo: "{{ registry_host }}"
## Kubernetes components
# kubeadm_download_url: "{{ files_repo }}/storage.googleapis.com/kubernetes-release/release/{{ kubeadm_version }}/bin/linux/{{ image_arch }}/kubeadm"
# kubectl_download_url: "{{ files_repo }}/storage.googleapis.com/kubernetes-release/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubectl"
# kubelet_download_url: "{{ files_repo }}/storage.googleapis.com/kubernetes-release/release/{{ kube_version }}/bin/linux/{{ image_arch }}/kubelet"
## CNI Plugins
# cni_download_url: "{{ files_repo }}/github.com/containernetworking/plugins/releases/download/{{ cni_version }}/cni-plugins-linux-{{ image_arch }}-{{ cni_version }}.tgz"
## cri-tools
# crictl_download_url: "{{ files_repo }}/github.com/kubernetes-sigs/cri-tools/releases/download/{{ crictl_version }}/crictl-{{ crictl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
## [Optional] etcd: only if you **DON'T** use etcd_deployment=host
# etcd_download_url: "{{ files_repo }}/github.com/etcd-io/etcd/releases/download/{{ etcd_version }}/etcd-{{ etcd_version }}-linux-{{ image_arch }}.tar.gz"
# [Optional] Calico: If using Calico network plugin
# calicoctl_download_url: "{{ files_repo }}/github.com/projectcalico/calico/releases/download/{{ calico_ctl_version }}/calicoctl-linux-{{ image_arch }}"
# calicoctl_alternate_download_url: "{{ files_repo }}/github.com/projectcalico/calicoctl/releases/download/{{ calico_ctl_version }}/calicoctl-linux-{{ image_arch }}"
# [Optional] Calico with kdd: If using Calico network plugin with kdd datastore
# calico_crds_download_url: "{{ files_repo }}/github.com/projectcalico/calico/archive/{{ calico_version }}.tar.gz"
# [Optional] Cilium: If using Cilium network plugin
# ciliumcli_download_url: "{{ files_repo }}/github.com/cilium/cilium-cli/releases/download/{{ cilium_cli_version }}/cilium-linux-{{ image_arch }}.tar.gz"
# [Optional] Flannel: If using Flannel network plugin
# flannel_cni_download_url: "{{ files_repo }}/kubernetes/flannel/{{ flannel_cni_version }}/flannel-{{ image_arch }}"
# [Optional] helm: only if you set helm_enabled: true
# helm_download_url: "{{ files_repo }}/get.helm.sh/helm-{{ helm_version }}-linux-{{ image_arch }}.tar.gz"
# [Optional] crun: only if you set crun_enabled: true
# crun_download_url: "{{ files_repo }}/github.com/containers/crun/releases/download/{{ crun_version }}/crun-{{ crun_version }}-linux-{{ image_arch }}"
# [Optional] kata: only if you set kata_containers_enabled: true
# kata_containers_download_url: "{{ files_repo }}/github.com/kata-containers/kata-containers/releases/download/{{ kata_containers_version }}/kata-static-{{ kata_containers_version }}-{{ ansible_architecture }}.tar.xz"
# [Optional] cri-dockerd: only if you set container_manager: docker
# cri_dockerd_download_url: "{{ files_repo }}/github.com/Mirantis/cri-dockerd/releases/download/v{{ cri_dockerd_version }}/cri-dockerd-{{ cri_dockerd_version }}.{{ image_arch }}.tgz"
# [Optional] cri-o: only if you set container_manager: crio
# crio_download_base: "download.opensuse.org/repositories/devel:kubic:libcontainers:stable"
# crio_download_crio: "http://{{ crio_download_base }}:/cri-o:/"
# [Optional] runc,containerd: only if you set container_runtime: containerd
# runc_download_url: "{{ files_repo }}/github.com/opencontainers/runc/releases/download/{{ runc_version }}/runc.{{ image_arch }}"
# containerd_download_url: "{{ files_repo }}/github.com/containerd/containerd/releases/download/v{{ containerd_version }}/containerd-{{ containerd_version }}-linux-{{ image_arch }}.tar.gz"
# nerdctl_download_url: "{{ files_repo }}/github.com/containerd/nerdctl/releases/download/v{{ nerdctl_version }}/nerdctl-{{ nerdctl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
# [Optional] runsc,containerd-shim-runsc: only if you set gvisor_enabled: true
# gvisor_runsc_download_url: "{{ files_repo }}/storage.googleapis.com/gvisor/releases/release/{{ gvisor_version }}/{{ ansible_architecture }}/runsc"
# gvisor_containerd_shim_runsc_download_url: "{{ files_repo }}/storage.googleapis.com/gvisor/releases/release/{{ gvisor_version }}/{{ ansible_architecture }}/containerd-shim-runsc-v1"
## CentOS/Redhat/AlmaLinux
### For EL7, base and extras repo must be available, for EL8, baseos and appstream
### By default we enable those repo automatically
# rhel_enable_repos: false
### Docker / Containerd
# docker_rh_repo_base_url: "{{ yum_repo }}/docker-ce/$releasever/$basearch"
# docker_rh_repo_gpgkey: "{{ yum_repo }}/docker-ce/gpg"
## Fedora
### Docker
# docker_fedora_repo_base_url: "{{ yum_repo }}/docker-ce/{{ ansible_distribution_major_version }}/{{ ansible_architecture }}"
# docker_fedora_repo_gpgkey: "{{ yum_repo }}/docker-ce/gpg"
### Containerd
# containerd_fedora_repo_base_url: "{{ yum_repo }}/containerd"
# containerd_fedora_repo_gpgkey: "{{ yum_repo }}/docker-ce/gpg"
## Debian
### Docker
# docker_debian_repo_base_url: "{{ debian_repo }}/docker-ce"
# docker_debian_repo_gpgkey: "{{ debian_repo }}/docker-ce/gpg"
### Containerd
# containerd_debian_repo_base_url: "{{ debian_repo }}/containerd"
# containerd_debian_repo_gpgkey: "{{ debian_repo }}/containerd/gpg"
# containerd_debian_repo_repokey: 'YOURREPOKEY'
## Ubuntu
### Docker
# docker_ubuntu_repo_base_url: "{{ ubuntu_repo }}/docker-ce"
# docker_ubuntu_repo_gpgkey: "{{ ubuntu_repo }}/docker-ce/gpg"
### Containerd
# containerd_ubuntu_repo_base_url: "{{ ubuntu_repo }}/containerd"
# containerd_ubuntu_repo_gpgkey: "{{ ubuntu_repo }}/containerd/gpg"
# containerd_ubuntu_repo_repokey: 'YOURREPOKEY'


@@ -0,0 +1,49 @@
## When OpenStack is used, Cinder version can be explicitly specified if autodetection fails (Fixed in 1.9: https://github.com/kubernetes/kubernetes/issues/50461)
# openstack_blockstorage_version: "v1/v2/auto (default)"
# openstack_blockstorage_ignore_volume_az: yes
## When OpenStack is used, if LBaaSv2 is available you can enable it with the following 2 variables.
# openstack_lbaas_enabled: True
# openstack_lbaas_subnet_id: "Neutron subnet ID (not network ID) to create LBaaS VIP"
## To enable automatic floating ip provisioning, specify a subnet.
# openstack_lbaas_floating_network_id: "Neutron network ID (not subnet ID) to get floating IP from, disabled by default"
## Override default LBaaS behavior
# openstack_lbaas_use_octavia: False
# openstack_lbaas_method: "ROUND_ROBIN"
# openstack_lbaas_provider: "haproxy"
# openstack_lbaas_create_monitor: "yes"
# openstack_lbaas_monitor_delay: "1m"
# openstack_lbaas_monitor_timeout: "30s"
# openstack_lbaas_monitor_max_retries: "3"
## Values for the external OpenStack Cloud Controller
# external_openstack_lbaas_network_id: "Neutron network ID to create LBaaS VIP"
# external_openstack_lbaas_subnet_id: "Neutron subnet ID to create LBaaS VIP"
# external_openstack_lbaas_floating_network_id: "Neutron network ID to get floating IP from"
# external_openstack_lbaas_floating_subnet_id: "Neutron subnet ID to get floating IP from"
# external_openstack_lbaas_method: "ROUND_ROBIN"
# external_openstack_lbaas_provider: "octavia"
# external_openstack_lbaas_create_monitor: false
# external_openstack_lbaas_monitor_delay: "1m"
# external_openstack_lbaas_monitor_timeout: "30s"
# external_openstack_lbaas_monitor_max_retries: "3"
# external_openstack_lbaas_manage_security_groups: false
# external_openstack_lbaas_internal_lb: false
# external_openstack_network_ipv6_disabled: false
# external_openstack_network_internal_networks: []
# external_openstack_network_public_networks: []
# external_openstack_metadata_search_order: "configDrive,metadataService"
## Application credentials to authenticate against Keystone API
## Those settings will take precedence over username and password that might be set in your environment
## All of them are required
# external_openstack_application_credential_name:
# external_openstack_application_credential_id:
# external_openstack_application_credential_secret:
## The tag of the external OpenStack Cloud Controller image
# external_openstack_cloud_controller_image_tag: "latest"
## To use Cinder CSI plugin to provision volumes set this value to true
## Make sure to source in the openstack credentials
# cinder_csi_enabled: true
# cinder_csi_controller_replicas: 1


@@ -0,0 +1,24 @@
## Repo for UpCloud's csi-driver: https://github.com/UpCloudLtd/upcloud-csi
## To use UpCloud's CSI plugin to provision volumes, set this value to true
## Remember to set UPCLOUD_USERNAME and UPCLOUD_PASSWORD
# upcloud_csi_enabled: true
# upcloud_csi_controller_replicas: 1
## Override used image tags
# upcloud_csi_provisioner_image_tag: "v3.1.0"
# upcloud_csi_attacher_image_tag: "v3.4.0"
# upcloud_csi_resizer_image_tag: "v1.4.0"
# upcloud_csi_plugin_image_tag: "v0.3.3"
# upcloud_csi_node_image_tag: "v2.5.0"
# upcloud_tolerations: []
## Storage class options
# storage_classes:
# - name: standard
# is_default: true
# expand_persistent_volumes: true
# parameters:
# tier: maxiops
# - name: hdd
# is_default: false
# expand_persistent_volumes: true
# parameters:
# tier: hdd


@@ -0,0 +1,32 @@
## Values for the external vSphere Cloud Provider
# external_vsphere_vcenter_ip: "myvcenter.domain.com"
# external_vsphere_vcenter_port: "443"
# external_vsphere_insecure: "true"
# external_vsphere_user: "administrator@vsphere.local" # Can also be set via the `VSPHERE_USER` environment variable
# external_vsphere_password: "K8s_admin" # Can also be set via the `VSPHERE_PASSWORD` environment variable
# external_vsphere_datacenter: "DATACENTER_name"
# external_vsphere_kubernetes_cluster_id: "kubernetes-cluster-id"
## vSphere version where the VMs are located
# external_vsphere_version: "6.7u3"
## Tags for the external vSphere Cloud Provider images
## gcr.io/cloud-provider-vsphere/cpi/release/manager
# external_vsphere_cloud_controller_image_tag: "latest"
## gcr.io/cloud-provider-vsphere/csi/release/syncer
# vsphere_syncer_image_tag: "v2.5.1"
## registry.k8s.io/sig-storage/csi-attacher
# vsphere_csi_attacher_image_tag: "v3.4.0"
## gcr.io/cloud-provider-vsphere/csi/release/driver
# vsphere_csi_controller: "v2.5.1"
## registry.k8s.io/sig-storage/livenessprobe
# vsphere_csi_liveness_probe_image_tag: "v2.6.0"
## registry.k8s.io/sig-storage/csi-provisioner
# vsphere_csi_provisioner_image_tag: "v3.1.0"
## registry.k8s.io/sig-storage/csi-resizer
## makes sense only for vSphere version >=7.0
# vsphere_csi_resizer_tag: "v1.3.0"
## To use vSphere CSI plugin to provision volumes set this value to true
# vsphere_csi_enabled: true
# vsphere_csi_controller_replicas: 1


@@ -0,0 +1,26 @@
---
## Etcd auto compaction retention for mvcc key value store, in hours
# etcd_compaction_retention: 0
## Set level of detail for etcd exported metrics, specify 'extensive' to include histogram metrics.
# etcd_metrics: basic
## Etcd is restricted by default to 512M on systems under 4GB RAM; 512MB is not enough for much more than testing.
## Set this if your etcd nodes have less than 4GB but you want more RAM for etcd. Set to 0 for unrestricted RAM.
## This value is only relevant when deploying etcd with `etcd_deployment_type: docker`
# etcd_memory_limit: "512M"
## Etcd has a default of 2G for its space quota. If you put a value in etcd_memory_limit which is less than
## etcd_quota_backend_bytes, you may encounter out of memory terminations of the etcd cluster. Please check
## etcd documentation for more information.
# 8G is a suggested maximum size for normal environments and etcd warns at startup if the configured value exceeds it.
# etcd_quota_backend_bytes: "2147483648"
# Maximum client request size in bytes the server will accept.
# etcd is designed to handle small key value pairs typical for metadata.
# Larger requests will work, but may increase the latency of other requests
# etcd_max_request_bytes: "1572864"
### ETCD: disable peer client cert authentication.
# This affects ETCD_PEER_CLIENT_CERT_AUTH variable
# etcd_peer_client_auth: true


@@ -0,0 +1,228 @@
---
# Kubernetes dashboard
# RBAC required. see docs/getting-started.md for access details.
# dashboard_enabled: false
# Helm deployment
helm_enabled: false
# Registry deployment
registry_enabled: false
# registry_namespace: kube-system
# registry_storage_class: ""
# registry_disk_size: "10Gi"
# Metrics Server deployment
metrics_server_enabled: false
# metrics_server_container_port: 4443
# metrics_server_kubelet_insecure_tls: true
# metrics_server_metric_resolution: 15s
# metrics_server_kubelet_preferred_address_types: "InternalIP,ExternalIP,Hostname"
# metrics_server_host_network: false
# metrics_server_replicas: 1
# Rancher Local Path Provisioner
local_path_provisioner_enabled: false
# local_path_provisioner_namespace: "local-path-storage"
# local_path_provisioner_storage_class: "local-path"
# local_path_provisioner_reclaim_policy: Delete
# local_path_provisioner_claim_root: /opt/local-path-provisioner/
# local_path_provisioner_debug: false
# local_path_provisioner_image_repo: "rancher/local-path-provisioner"
# local_path_provisioner_image_tag: "v0.0.22"
# local_path_provisioner_helper_image_repo: "busybox"
# local_path_provisioner_helper_image_tag: "latest"
# Local volume provisioner deployment
local_volume_provisioner_enabled: false
# local_volume_provisioner_namespace: kube-system
# local_volume_provisioner_nodelabels:
# - kubernetes.io/hostname
# - topology.kubernetes.io/region
# - topology.kubernetes.io/zone
# local_volume_provisioner_storage_classes:
# local-storage:
# host_dir: /mnt/disks
# mount_dir: /mnt/disks
# volume_mode: Filesystem
# fs_type: ext4
# fast-disks:
# host_dir: /mnt/fast-disks
# mount_dir: /mnt/fast-disks
# block_cleaner_command:
# - "/scripts/shred.sh"
# - "2"
# volume_mode: Filesystem
# fs_type: ext4
# local_volume_provisioner_tolerations:
# - effect: NoSchedule
# operator: Exists
# CSI Volume Snapshot Controller deployment, set this to true if your CSI is able to manage snapshots
# currently, setting cinder_csi_enabled=true would automatically enable the snapshot controller
# Longhorn is an external CSI that would also require setting this to true but it is not included in kubespray
# csi_snapshot_controller_enabled: false
# csi snapshot namespace
# snapshot_controller_namespace: kube-system
# CephFS provisioner deployment
cephfs_provisioner_enabled: false
# cephfs_provisioner_namespace: "cephfs-provisioner"
# cephfs_provisioner_cluster: ceph
# cephfs_provisioner_monitors: "172.24.0.1:6789,172.24.0.2:6789,172.24.0.3:6789"
# cephfs_provisioner_admin_id: admin
# cephfs_provisioner_secret: secret
# cephfs_provisioner_storage_class: cephfs
# cephfs_provisioner_reclaim_policy: Delete
# cephfs_provisioner_claim_root: /volumes
# cephfs_provisioner_deterministic_names: true
# RBD provisioner deployment
rbd_provisioner_enabled: false
# rbd_provisioner_namespace: rbd-provisioner
# rbd_provisioner_replicas: 2
# rbd_provisioner_monitors: "172.24.0.1:6789,172.24.0.2:6789,172.24.0.3:6789"
# rbd_provisioner_pool: kube
# rbd_provisioner_admin_id: admin
# rbd_provisioner_secret_name: ceph-secret-admin
# rbd_provisioner_secret: ceph-key-admin
# rbd_provisioner_user_id: kube
# rbd_provisioner_user_secret_name: ceph-secret-user
# rbd_provisioner_user_secret: ceph-key-user
# rbd_provisioner_user_secret_namespace: rbd-provisioner
# rbd_provisioner_fs_type: ext4
# rbd_provisioner_image_format: "2"
# rbd_provisioner_image_features: layering
# rbd_provisioner_storage_class: rbd
# rbd_provisioner_reclaim_policy: Delete
# Nginx ingress controller deployment
ingress_nginx_enabled: false
# ingress_nginx_host_network: false
ingress_publish_status_address: ""
# ingress_nginx_nodeselector:
# kubernetes.io/os: "linux"
# ingress_nginx_tolerations:
# - key: "node-role.kubernetes.io/master"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# - key: "node-role.kubernetes.io/control-plane"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# ingress_nginx_namespace: "ingress-nginx"
# ingress_nginx_insecure_port: 80
# ingress_nginx_secure_port: 443
# ingress_nginx_configmap:
# map-hash-bucket-size: "128"
# ssl-protocols: "TLSv1.2 TLSv1.3"
# ingress_nginx_configmap_tcp_services:
# 9000: "default/example-go:8080"
# ingress_nginx_configmap_udp_services:
# 53: "kube-system/coredns:53"
# ingress_nginx_extra_args:
# - --default-ssl-certificate=default/foo-tls
# ingress_nginx_termination_grace_period_seconds: 300
# ingress_nginx_class: nginx
# ALB ingress controller deployment
ingress_alb_enabled: false
# alb_ingress_aws_region: "us-east-1"
# alb_ingress_restrict_scheme: "false"
# Enables logging on all outbound requests sent to the AWS API.
# If logging is desired, set to true.
# alb_ingress_aws_debug: "false"
# Cert manager deployment
cert_manager_enabled: false
# cert_manager_namespace: "cert-manager"
# cert_manager_tolerations:
# - key: node-role.kubernetes.io/master
# effect: NoSchedule
# - key: node-role.kubernetes.io/control-plane
# effect: NoSchedule
# cert_manager_affinity:
# nodeAffinity:
# preferredDuringSchedulingIgnoredDuringExecution:
# - weight: 100
# preference:
# matchExpressions:
# - key: node-role.kubernetes.io/control-plane
# operator: In
# values:
# - ""
# cert_manager_nodeselector:
# kubernetes.io/os: "linux"
# cert_manager_trusted_internal_ca: |
# -----BEGIN CERTIFICATE-----
# [REPLACE with your CA certificate]
# -----END CERTIFICATE-----
# cert_manager_leader_election_namespace: kube-system
# MetalLB deployment
metallb_enabled: false
metallb_speaker_enabled: "{{ metallb_enabled }}"
# metallb_ip_range:
# - "10.5.0.50-10.5.0.99"
# metallb_pool_name: "loadbalanced"
# metallb_auto_assign: true
# metallb_avoid_buggy_ips: false
# metallb_speaker_nodeselector:
# kubernetes.io/os: "linux"
# metallb_controller_nodeselector:
# kubernetes.io/os: "linux"
# metallb_speaker_tolerations:
# - key: "node-role.kubernetes.io/master"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# - key: "node-role.kubernetes.io/control-plane"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# metallb_controller_tolerations:
# - key: "node-role.kubernetes.io/master"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# - key: "node-role.kubernetes.io/control-plane"
# operator: "Equal"
# value: ""
# effect: "NoSchedule"
# metallb_version: v0.12.1
# metallb_protocol: "layer2"
# metallb_port: "7472"
# metallb_memberlist_port: "7946"
# metallb_additional_address_pools:
# kube_service_pool:
# ip_range:
# - "10.5.1.50-10.5.1.99"
# protocol: "layer2"
# auto_assign: false
# avoid_buggy_ips: false
# metallb_protocol: "bgp"
# metallb_peers:
# - peer_address: 192.0.2.1
# peer_asn: 64512
# my_asn: 4200000000
# - peer_address: 192.0.2.2
# peer_asn: 64513
# my_asn: 4200000000
argocd_enabled: false
# argocd_version: v2.5.5
# argocd_namespace: argocd
# Default password:
# - https://argo-cd.readthedocs.io/en/stable/getting_started/#4-login-using-the-cli
# ---
# The initial password is autogenerated to be the pod name of the Argo CD API server. This can be retrieved with the command:
# kubectl get pods -n argocd -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2
# ---
# Use the following var to set admin password
# argocd_admin_password: "password"
# The plugin manager for kubectl
krew_enabled: false
krew_root_dir: "/usr/local/krew"


@@ -0,0 +1,350 @@
---
# Kubernetes configuration dirs and system namespace.
# Those are where all the additional config stuff goes
# that Kubernetes normally puts in /srv/kubernetes.
# This puts them in a sane location and namespace.
# Editing those values will almost surely break something.
kube_config_dir: /etc/kubernetes
kube_script_dir: "{{ bin_dir }}/kubernetes-scripts"
kube_manifest_dir: "{{ kube_config_dir }}/manifests"
# This is where all the cert scripts and certs will be located
kube_cert_dir: "{{ kube_config_dir }}/ssl"
# This is where all of the bearer tokens will be stored
kube_token_dir: "{{ kube_config_dir }}/tokens"
kube_api_anonymous_auth: true
## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.25.5
# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)
local_release_dir: "/tmp/releases"
# Random shifts for retrying failed ops like pushing/downloading
retry_stagger: 5
# This is the user that owns the cluster installation.
kube_owner: kube
# This is the group that the cert creation scripts chgrp the
# cert files to. Not really changeable...
kube_cert_group: kube-cert
# Cluster Loglevel configuration
kube_log_level: 2
# Directory where credentials will be stored
credentials_dir: "{{ inventory_dir }}/credentials"
## It is possible to activate / deactivate selected authentication methods (oidc, static token auth)
# kube_oidc_auth: false
# kube_token_auth: false
## Variables for OpenID Connect Configuration https://kubernetes.io/docs/admin/authentication/
## To use OpenID you additionally have to deploy an OpenID Provider (e.g. Dex, Keycloak, ...)
# kube_oidc_url: https:// ...
# kube_oidc_client_id: kubernetes
## Optional settings for OIDC
# kube_oidc_ca_file: "{{ kube_cert_dir }}/ca.pem"
# kube_oidc_username_claim: sub
# kube_oidc_username_prefix: 'oidc:'
# kube_oidc_groups_claim: groups
# kube_oidc_groups_prefix: 'oidc:'
## Variables to control webhook authn/authz
# kube_webhook_token_auth: false
# kube_webhook_token_auth_url: https://...
# kube_webhook_token_auth_url_skip_tls_verify: false
## For webhook authorization, authorization_modes must include Webhook
# kube_webhook_authorization: false
# kube_webhook_authorization_url: https://...
# kube_webhook_authorization_url_skip_tls_verify: false
# Choose network plugin (cilium, calico, kube-ovn, weave or flannel. Use cni for generic cni plugin)
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
kube_network_plugin: calico
# Setting multi_networking to true will install Multus: https://github.com/intel/multus-cni
kube_network_plugin_multus: false
# Kubernetes internal network for services, unused block of space.
kube_service_addresses: 10.233.0.0/18
# internal network. When used, it will assign IP
# addresses from this range to individual pods.
# This network must be unused in your network infrastructure!
kube_pods_subnet: 10.233.64.0/18
# internal network node size allocation (optional). This is the size allocated
# to each node for pod IP address allocation. Note that the number of pods per node is
# also limited by the kubelet_max_pods variable which defaults to 110.
#
# Example:
# Up to 64 nodes and up to 254 or kubelet_max_pods (the lowest of the two) pods per node:
# - kube_pods_subnet: 10.233.64.0/18
# - kube_network_node_prefix: 24
# - kubelet_max_pods: 110
#
# Example:
# Up to 128 nodes and up to 126 or kubelet_max_pods (the lowest of the two) pods per node:
# - kube_pods_subnet: 10.233.64.0/18
# - kube_network_node_prefix: 25
# - kubelet_max_pods: 110
kube_network_node_prefix: 24
# Configure Dual Stack networking (i.e. both IPv4 and IPv6)
enable_dual_stack_networks: false
# Kubernetes internal network for IPv6 services, unused block of space.
# This is only used if enable_dual_stack_networks is set to true
# This provides 4096 IPv6 IPs
kube_service_addresses_ipv6: fd85:ee78:d8a6:8607::1000/116
# Internal network. When used, it will assign IPv6 addresses from this range to individual pods.
# This network must not already be in your network infrastructure!
# This is only used if enable_dual_stack_networks is set to true.
# This provides room for 256 nodes with 254 pods per node.
kube_pods_subnet_ipv6: fd85:ee78:d8a6:8607::1:0000/112
# IPv6 subnet size allocated to each node for pods.
# This is only used if enable_dual_stack_networks is set to true
# This provides room for 254 pods per node.
kube_network_node_prefix_ipv6: 120
# The port the API Server will be listening on.
kube_apiserver_ip: "{{ kube_service_addresses|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
kube_apiserver_port: 6443 # (https)
# Kube-proxy proxyMode configuration.
# Can be ipvs, iptables
kube_proxy_mode: ipvs
# configure arp_ignore and arp_announce to avoid answering ARP queries from kube-ipvs0 interface
# must be set to true for MetalLB, kube-vip(ARP enabled) to work
kube_proxy_strict_arp: false
# A string slice of values which specify the addresses to use for NodePorts.
# Values may be valid IP blocks (e.g. 1.2.3.0/24, 1.2.3.4/32).
# The default empty string slice ([]) means to use all local addresses.
# kube_proxy_nodeport_addresses_cidr is retained for legacy config
kube_proxy_nodeport_addresses: >-
  {%- if kube_proxy_nodeport_addresses_cidr is defined -%}
  [{{ kube_proxy_nodeport_addresses_cidr }}]
  {%- else -%}
  []
  {%- endif -%}
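# For example (hypothetical CIDR), the legacy variable could be set to restrict
# NodePorts to a single address range:
# kube_proxy_nodeport_addresses_cidr: 10.0.1.0/24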
# If non-empty, will use this string as identification instead of the actual hostname
# kube_override_hostname: >-
# {%- if cloud_provider is defined and cloud_provider in [ 'aws' ] -%}
# {%- else -%}
# {{ inventory_hostname }}
# {%- endif -%}
## Encrypting Secret Data at Rest
kube_encrypt_secret_data: false
# Graceful Node Shutdown (Kubernetes >= 1.21.0), see https://kubernetes.io/blog/2021/04/21/graceful-node-shutdown-beta/
# kubelet_shutdown_grace_period has to be greater than kubelet_shutdown_grace_period_critical_pods to allow
# non-critical pods to also terminate gracefully
# kubelet_shutdown_grace_period: 60s
# kubelet_shutdown_grace_period_critical_pods: 20s
# DNS configuration.
# Kubernetes cluster name, also will be used as DNS domain
cluster_name: cluster.local
# Subdomains of DNS domain to be resolved via /etc/resolv.conf for hostnet pods
ndots: 2
# dns_timeout: 2
# dns_attempts: 2
# Custom search domains to be added in addition to the default cluster search domains
# searchdomains:
# - svc.{{ cluster_name }}
# - default.svc.{{ cluster_name }}
# Remove default cluster search domains (``default.svc.{{ dns_domain }}, svc.{{ dns_domain }}``).
# remove_default_searchdomains: false
# Can be coredns, coredns_dual, manual or none
dns_mode: coredns
# Set manual server if using a custom cluster DNS server
# manual_dns_server: 10.x.x.x
# Enable nodelocal dns cache
enable_nodelocaldns: true
enable_nodelocaldns_secondary: false
nodelocaldns_ip: 169.254.25.10
nodelocaldns_health_port: 9254
nodelocaldns_second_health_port: 9256
nodelocaldns_bind_metrics_host_ip: false
nodelocaldns_secondary_skew_seconds: 5
# nodelocaldns_external_zones:
# - zones:
# - example.com
# - example.io:1053
# nameservers:
# - 1.1.1.1
# - 2.2.2.2
# cache: 5
# - zones:
# - https://mycompany.local:4453
# nameservers:
# - 192.168.0.53
# cache: 0
# - zones:
# - mydomain.tld
# nameservers:
# - 10.233.0.3
# cache: 5
# rewrite:
# - name website.tld website.namespace.svc.cluster.local
# Enable k8s_external plugin for CoreDNS
enable_coredns_k8s_external: false
coredns_k8s_external_zone: k8s_external.local
# Enable endpoint_pod_names option for kubernetes plugin
enable_coredns_k8s_endpoint_pod_names: false
# Set forward options for upstream DNS servers in coredns (and nodelocaldns) config
# dns_upstream_forward_extra_opts:
# policy: sequential
# Can be docker_dns, host_resolvconf or none
resolvconf_mode: host_resolvconf
# Deploy netchecker app to verify DNS resolve as an HTTP service
deploy_netchecker: false
# Ip address of the kubernetes skydns service
skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
skydns_server_secondary: "{{ kube_service_addresses|ipaddr('net')|ipaddr(4)|ipaddr('address') }}"
dns_domain: "{{ cluster_name }}"
## Container runtime
## docker for docker, crio for cri-o and containerd for containerd.
## Default: containerd
container_manager: containerd
# Additional container runtimes
kata_containers_enabled: false
kubeadm_certificate_key: "{{ lookup('password', credentials_dir + '/kubeadm_certificate_key.creds length=64 chars=hexdigits') | lower }}"
# K8s image pull policy (imagePullPolicy)
k8s_image_pull_policy: IfNotPresent
# audit log for kubernetes
kubernetes_audit: false
# define kubelet config dir for dynamic kubelet
# kubelet_config_dir:
default_kubelet_config_dir: "{{ kube_config_dir }}/dynamic_kubelet_dir"
# pod security policy (RBAC must be enabled either by having 'RBAC' in authorization_modes or kubeadm enabled)
podsecuritypolicy_enabled: false
# Custom PodSecurityPolicySpec for restricted policy
# podsecuritypolicy_restricted_spec: {}
# Custom PodSecurityPolicySpec for privileged policy
# podsecuritypolicy_privileged_spec: {}
# Make a copy of kubeconfig on the host that runs Ansible in {{ inventory_dir }}/artifacts
# kubeconfig_localhost: false
# Use ansible_host as external api ip when copying over kubeconfig.
# kubeconfig_localhost_ansible_host: false
# Download kubectl onto the host that runs Ansible in {{ bin_dir }}
# kubectl_localhost: false
# A comma separated list of levels of node allocatable enforcement to be enforced by kubelet.
# Acceptable options are 'pods', 'system-reserved', 'kube-reserved' and ''. Default is "".
# kubelet_enforce_node_allocatable: pods
## Optionally reserve resources for OS system daemons.
# system_reserved: true
## Uncomment to override default values
# system_memory_reserved: 512Mi
# system_cpu_reserved: 500m
# system_ephemeral_storage_reserved: 2Gi
## Reservation for master hosts
# system_master_memory_reserved: 256Mi
# system_master_cpu_reserved: 250m
# system_master_ephemeral_storage_reserved: 2Gi
## Eviction Thresholds to avoid system OOMs
# https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#eviction-thresholds
# eviction_hard: {}
# eviction_hard_control_plane: {}
# An alternative flexvolume plugin directory
# kubelet_flexvolumes_plugins_dir: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
## Supplementary addresses that can be added in kubernetes ssl keys.
## That can be useful for example to setup a keepalived virtual IP
# supplementary_addresses_in_ssl_keys: [10.0.0.1, 10.0.0.2, 10.0.0.3]
## Running on top of openstack vms with cinder enabled may lead to unschedulable pods due to NoVolumeZoneConflict restriction in kube-scheduler.
## See https://github.com/kubernetes-sigs/kubespray/issues/2141
## Set this variable to true to get rid of this issue
volume_cross_zone_attachment: false
## Add Persistent Volumes Storage Class for corresponding cloud provider (supported: in-tree OpenStack, Cinder CSI,
## AWS EBS CSI, Azure Disk CSI, GCP Persistent Disk CSI)
persistent_volumes_enabled: false
## Container Engine Acceleration
## Enable container acceleration feature, for example use gpu acceleration in containers
# nvidia_accelerator_enabled: true
## Nvidia GPU driver install. Install will be done by an (init) pod running as a daemonset.
## Important: if you use Ubuntu then you should set in all.yml 'docker_storage_options: -s overlay2'
## Array with nvidia_gpu_nodes; leave empty or comment out if you don't want to install drivers.
## Labels and taints won't be set to nodes if they are not in the array.
# nvidia_gpu_nodes:
# - kube-gpu-001
# nvidia_driver_version: "384.111"
## flavor can be tesla or gtx
# nvidia_gpu_flavor: gtx
## NVIDIA driver installer images. Change them if you have trouble accessing gcr.io.
# nvidia_driver_install_centos_container: atzedevries/nvidia-centos-driver-installer:2
# nvidia_driver_install_ubuntu_container: gcr.io/google-containers/ubuntu-nvidia-driver-installer@sha256:7df76a0f0a17294e86f691c81de6bbb7c04a1b4b3d4ea4e7e2cccdc42e1f6d63
## NVIDIA GPU device plugin image.
# nvidia_gpu_device_plugin_container: "registry.k8s.io/nvidia-gpu-device-plugin@sha256:0842734032018be107fa2490c98156992911e3e1f2a21e059ff0105b07dd8e9e"
## Support tls min version, Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13.
# tls_min_version: ""
## Support tls cipher suites.
# tls_cipher_suites: {}
# - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
# - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
# - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
# - TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
# - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
# - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
# - TLS_ECDHE_ECDSA_WITH_RC4_128_SHA
# - TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
# - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
# - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
# - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
# - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
# - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
# - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
# - TLS_ECDHE_RSA_WITH_RC4_128_SHA
# - TLS_RSA_WITH_3DES_EDE_CBC_SHA
# - TLS_RSA_WITH_AES_128_CBC_SHA
# - TLS_RSA_WITH_AES_128_CBC_SHA256
# - TLS_RSA_WITH_AES_128_GCM_SHA256
# - TLS_RSA_WITH_AES_256_CBC_SHA
# - TLS_RSA_WITH_AES_256_GCM_SHA384
# - TLS_RSA_WITH_RC4_128_SHA
## Amount of time to retain events. (default 1h0m0s)
event_ttl_duration: "1h0m0s"
## Automatically renew K8S control plane certificates on first Monday of each month
auto_renew_certificates: false
# First Monday of each month
# auto_renew_certificates_systemd_calendar: "Mon *-*-1,2,3,4,5,6,7 03:{{ groups['kube_control_plane'].index(inventory_hostname) }}0:00"
# kubeadm patches path
kubeadm_patches:
  enabled: false
  source_dir: "{{ inventory_dir }}/patches"
  dest_dir: "{{ kube_config_dir }}/patches"
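# Illustration only (hypothetical file name): with patches enabled, a partial manifest
# placed in source_dir, e.g. "{{ inventory_dir }}/patches/kube-apiserver0+strategic.yaml",
# would be picked up by kubeadm and merged into the generated kube-apiserver static pod manifest.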


@@ -0,0 +1,131 @@
---
# see roles/network_plugin/calico/defaults/main.yml
# the default value of name
calico_cni_name: k8s-pod-network
## With calico it is possible to distribute routes with border routers of the datacenter.
## Warning: enabling router peering will disable calico's default behavior ('node mesh').
## The subnets of each node will be distributed by the datacenter router
# peer_with_router: false
# Enables Internet connectivity from containers
# nat_outgoing: true
# Enables Calico CNI "host-local" IPAM plugin
# calico_ipam_host_local: true
# add default ippool name
# calico_pool_name: "default-pool"
# add default ippool blockSize (defaults kube_network_node_prefix)
calico_pool_blocksize: 26
# add default ippool CIDR (must be inside kube_pods_subnet, defaults to kube_pods_subnet otherwise)
# calico_pool_cidr: 1.2.3.4/5
# add default ippool CIDR to CNI config
# calico_cni_pool: true
# Add default IPV6 IPPool CIDR. Must be inside kube_pods_subnet_ipv6. Defaults to kube_pods_subnet_ipv6 if not set.
# calico_pool_cidr_ipv6: fd85:ee78:d8a6:8607::1:0000/112
# Add default IPV6 IPPool CIDR to CNI config
# calico_cni_pool_ipv6: true
# Global as_num (/calico/bgp/v1/global/as_num)
# global_as_num: "64512"
# If doing peering with node-assigned ASNs where the global ASN does not match your nodes, you want this
# to be true. All other cases, false.
# calico_no_global_as_num: false
# You can set MTU value here. If left undefined or empty, it will
# not be specified in calico CNI config, so Calico will use built-in
# defaults. The value should be a number, not a string.
# calico_mtu: 1500
# Configure the MTU to use for workload interfaces and tunnels.
# - If Wireguard is enabled, subtract 60 from your network MTU (i.e 1500-60=1440)
# - Otherwise, if VXLAN or BPF mode is enabled, subtract 50 from your network MTU (i.e. 1500-50=1450)
# - Otherwise, if IPIP is enabled, subtract 20 from your network MTU (i.e. 1500-20=1480)
# - Otherwise, if not using any encapsulation, set to your network MTU (i.e. 1500)
# calico_veth_mtu: 1440
# Advertise Cluster IPs
# calico_advertise_cluster_ips: true
# Advertise Service External IPs
# calico_advertise_service_external_ips:
# - x.x.x.x/24
# - y.y.y.y/32
# Advertise Service LoadBalancer IPs
# calico_advertise_service_loadbalancer_ips:
# - x.x.x.x/24
# - y.y.y.y/16
# Choose data store type for calico: "etcd" or "kdd" (kubernetes datastore)
# calico_datastore: "kdd"
# Choose Calico iptables backend: "Legacy", "Auto" or "NFT"
# calico_iptables_backend: "Auto"
# Use typha (only with kdd)
# typha_enabled: false
# Generate TLS certs for secure typha<->calico-node communication
# typha_secure: false
# Scaling typha: 1 replica per 100 nodes is adequate
# Number of typha replicas
# typha_replicas: 1
# Set max typha connections
# typha_max_connections_lower_limit: 300
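# Illustrative example (not a default): for a cluster of roughly 250 nodes, the
# 1-replica-per-100-nodes guideline above would suggest:
# typha_replicas: 3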
# Set calico network backend: "bird", "vxlan" or "none"
# bird enables BGP routing, which is required for IPIP and unencapsulated modes
# calico_network_backend: vxlan
# IP in IP and VXLAN are mutually exclusive modes.
# set IP in IP encapsulation mode: "Always", "CrossSubnet", "Never"
# calico_ipip_mode: 'Never'
# set VXLAN encapsulation mode: "Always", "CrossSubnet", "Never"
# calico_vxlan_mode: 'Always'
# set VXLAN port and VNI
# calico_vxlan_vni: 4096
# calico_vxlan_port: 4789
# Enable eBPF mode
# calico_bpf_enabled: false
# If you want to use non default IP_AUTODETECTION_METHOD, IP6_AUTODETECTION_METHOD for calico node set this option to one of:
# * can-reach=DESTINATION
# * interface=INTERFACE-REGEX
# see https://docs.projectcalico.org/reference/node/configuration
# calico_ip_auto_method: "interface=eth.*"
# calico_ip6_auto_method: "interface=eth.*"
# Set FELIX_MTUIFACEPATTERN, the pattern used to discover the host's interface for MTU auto-detection.
# see https://projectcalico.docs.tigera.io/reference/felix/configuration
# calico_felix_mtu_iface_pattern: "^((en|wl|ww|sl|ib)[opsx].*|(eth|wlan|wwan).*)"
# Choose the iptables insert mode for Calico: "Insert" or "Append".
# calico_felix_chaininsertmode: Insert
# If you want to use the default route interface when you use multiple interfaces with dynamic routes (iproute2)
# see https://docs.projectcalico.org/reference/node/configuration : FELIX_DEVICEROUTESOURCEADDRESS
# calico_use_default_route_src_ipaddr: false
# Enable calico traffic encryption with wireguard
# calico_wireguard_enabled: false
# In certain situations the liveness and readiness probes may need tuning
# calico_node_livenessprobe_timeout: 10
# calico_node_readinessprobe_timeout: 10
# Calico apiserver (only with kdd)
# calico_apiserver_enabled: false

View File

@@ -0,0 +1,10 @@
# see roles/network_plugin/canal/defaults/main.yml
# The interface used by canal for host <-> host communication.
# If left blank, then the interface is chosen using the node's
# default route.
# canal_iface: ""
# Whether or not to masquerade traffic to destinations not within
# the pod network.
# canal_masquerade: "true"

View File

@@ -0,0 +1,245 @@
---
# cilium_version: "v1.12.1"
# Log-level
# cilium_debug: false
# cilium_mtu: ""
# cilium_enable_ipv4: true
# cilium_enable_ipv6: false
# Cilium agent health port
# cilium_agent_health_port: "9879"
# Identity allocation mode selects how identities are shared between cilium
# nodes by setting how they are stored. The options are "crd" or "kvstore".
# - "crd" stores identities in kubernetes as CRDs (custom resource definition).
# These can be queried with:
# `kubectl get ciliumid`
# - "kvstore" stores identities in an etcd kvstore.
# - In order to support External Workloads, "crd" is required
# - Ref: https://docs.cilium.io/en/stable/gettingstarted/external-workloads/#setting-up-support-for-external-workloads-beta
# - KVStore operations are only required when cilium-operator is running with any of the below options:
# - --synchronize-k8s-services
# - --synchronize-k8s-nodes
# - --identity-allocation-mode=kvstore
# - Ref: https://docs.cilium.io/en/stable/internals/cilium_operator/#kvstore-operations
# cilium_identity_allocation_mode: kvstore
# Etcd SSL dirs
# cilium_cert_dir: /etc/cilium/certs
# kube_etcd_cacert_file: ca.pem
# kube_etcd_cert_file: cert.pem
# kube_etcd_key_file: cert-key.pem
# Limits for apps
# cilium_memory_limit: 500M
# cilium_cpu_limit: 500m
# cilium_memory_requests: 64M
# cilium_cpu_requests: 100m
# Overlay Network Mode
# cilium_tunnel_mode: vxlan
# Optional features
# cilium_enable_prometheus: false
# Enable if you want to make use of hostPort mappings
# cilium_enable_portmap: false
# Monitor aggregation level (none/low/medium/maximum)
# cilium_monitor_aggregation: medium
# The monitor aggregation flags determine which TCP flags, upon their
# first observation, cause monitor notifications to be generated.
#
# Only effective when monitor aggregation is set to "medium" or higher.
# cilium_monitor_aggregation_flags: "all"
# Kube Proxy Replacement mode (strict/probe/partial)
# cilium_kube_proxy_replacement: probe
# If upgrading from Cilium < 1.5, you may want to override some of these options
# to prevent service disruptions. See also:
# http://docs.cilium.io/en/stable/install/upgrade/#changes-that-may-require-action
# cilium_preallocate_bpf_maps: false
# `cilium_tofqdns_enable_poller` is deprecated in 1.8, removed in 1.9
# cilium_tofqdns_enable_poller: false
# `cilium_enable_legacy_services` is deprecated in 1.6, removed in 1.9
# cilium_enable_legacy_services: false
# Unique ID of the cluster. Must be unique across all connected clusters and
# in the range of 1 to 255. Only relevant when building a mesh of clusters.
# This value is not defined by default
# cilium_cluster_id:
# Deploy cilium even if kube_network_plugin is not cilium.
# This makes it possible to deploy cilium alongside another CNI to replace kube-proxy.
# cilium_deploy_additionally: false
# Auto direct node routes can be used to advertise pod routes in your cluster
# without any tunnelling (with `cilium_tunnel_mode` set to `disabled`).
# This works only if you have L2 connectivity between all your nodes.
# You will also have to specify the variable `cilium_native_routing_cidr` to
# make this work. Please refer to the cilium documentation for more
# information about this kind of setup.
# cilium_auto_direct_node_routes: false
# Allows you to explicitly specify the IPv4 CIDR for native routing.
# When specified, Cilium assumes networking for this CIDR is preconfigured and
# hands traffic destined for that range to the Linux network stack without
# applying any SNAT.
# Generally speaking, specifying a native routing CIDR implies that Cilium can
# depend on the underlying networking stack to route packets to their
# destination. To offer a concrete example, if Cilium is configured to use
# direct routing and the Kubernetes CIDR is included in the native routing CIDR,
# the user must configure the routes to reach pods, either manually or by
# setting the auto-direct-node-routes flag.
# cilium_native_routing_cidr: ""
# Allows you to explicitly specify the IPv6 CIDR for native routing.
# cilium_native_routing_cidr_ipv6: ""
# Enable transparent network encryption.
# cilium_encryption_enabled: false
# Encryption method. Can be either ipsec or wireguard.
# Only effective when `cilium_encryption_enabled` is set to true.
# cilium_encryption_type: "ipsec"
# Enable encryption for pure node to node traffic.
# This option is only effective when `cilium_encryption_type` is set to `ipsec`.
# cilium_ipsec_node_encryption: false
# If your kernel or distribution does not support WireGuard, Cilium agent can be configured to fall back on the user-space implementation.
# When this flag is enabled and Cilium detects that the kernel has no native support for WireGuard,
# it will fall back to the wireguard-go user-space implementation of WireGuard.
# This option is only effective when `cilium_encryption_type` is set to `wireguard`.
# cilium_wireguard_userspace_fallback: false
# IP Masquerade Agent
# https://docs.cilium.io/en/stable/concepts/networking/masquerading/
# By default, all packets from a pod destined to an IP address outside of the cilium_native_routing_cidr range are masqueraded
# cilium_ip_masq_agent_enable: false
### A packet sent from a pod to a destination which belongs to any CIDR from the nonMasqueradeCIDRs is not going to be masqueraded
# cilium_non_masquerade_cidrs:
# - 10.0.0.0/8
# - 172.16.0.0/12
# - 192.168.0.0/16
# - 100.64.0.0/10
# - 192.0.0.0/24
# - 192.0.2.0/24
# - 192.88.99.0/24
# - 198.18.0.0/15
# - 198.51.100.0/24
# - 203.0.113.0/24
# - 240.0.0.0/4
### Indicates whether to masquerade traffic to the link local prefix.
### If the masqLinkLocal is not set or set to false, then 169.254.0.0/16 is appended to the non-masquerade CIDRs list.
# cilium_masq_link_local: false
### A time interval at which the agent attempts to reload config from disk
# cilium_ip_masq_resync_interval: 60s
# Hubble
### Enable Hubble without install
# cilium_enable_hubble: false
### Enable Hubble Metrics
# cilium_enable_hubble_metrics: false
### if cilium_enable_hubble_metrics: true
# cilium_hubble_metrics: {}
# - dns
# - drop
# - tcp
# - flow
# - icmp
# - http
### Enable Hubble install
# cilium_hubble_install: false
### Enable automatic cert generation if cilium_hubble_install: true
# cilium_hubble_tls_generate: false
# IP address management mode for v1.9+.
# https://docs.cilium.io/en/v1.9/concepts/networking/ipam/
# cilium_ipam_mode: kubernetes
# Extra arguments for the Cilium agent
# cilium_agent_custom_args: []
# For adding and mounting extra volumes to the cilium agent
# cilium_agent_extra_volumes: []
# cilium_agent_extra_volume_mounts: []
# cilium_agent_extra_env_vars: []
# cilium_operator_replicas: 2
# The address at which the cilium operator binds its health check API
# cilium_operator_api_serve_addr: "127.0.0.1:9234"
## A dictionary of extra config variables to add to cilium-config, formatted like:
## cilium_config_extra_vars:
## var1: "value1"
## var2: "value2"
# cilium_config_extra_vars: {}
# For adding and mounting extra volumes to the cilium operator
# cilium_operator_extra_volumes: []
# cilium_operator_extra_volume_mounts: []
# Extra arguments for the Cilium Operator
# cilium_operator_custom_args: []
# Name of the cluster. Only relevant when building a mesh of clusters.
# cilium_cluster_name: default
# Make Cilium take ownership over the `/etc/cni/net.d` directory on the node, renaming all non-Cilium CNI configurations to `*.cilium_bak`.
# This ensures no Pods can be scheduled using other CNI plugins during Cilium agent downtime.
# Available for Cilium v1.10 and up.
# cilium_cni_exclusive: true
# Configure the log file for CNI logging with retention policy of 7 days.
# Disable CNI file logging by setting this field to empty explicitly.
# Available for Cilium v1.12 and up.
# cilium_cni_log_file: "/var/run/cilium/cilium-cni.log"
# -- Configure cgroup related configuration
# -- Enable auto mount of cgroup2 filesystem.
# When `cilium_cgroup_auto_mount` is enabled, cgroup2 filesystem is mounted at
# `cilium_cgroup_host_root` path on the underlying host and inside the cilium agent pod.
# If users disable `cilium_cgroup_auto_mount`, it is expected that they have mounted the
# cgroup2 filesystem at the specified `cilium_cgroup_host_root` path, and then that
# volume will be mounted inside the cilium agent pod at the same path.
# Available for Cilium v1.11 and up
# cilium_cgroup_auto_mount: true
# -- Configure cgroup root where cgroup2 filesystem is mounted on the host
# cilium_cgroup_host_root: "/run/cilium/cgroupv2"
# Specifies the ratio (0.0-1.0) of total system memory to use for dynamic
# sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
# cilium_bpf_map_dynamic_size_ratio: "0.0"
# -- Enables masquerading of IPv4 traffic leaving the node from endpoints.
# Available for Cilium v1.10 and up
# cilium_enable_ipv4_masquerade: true
# -- Enables masquerading of IPv6 traffic leaving the node from endpoints.
# Available for Cilium v1.10 and up
# cilium_enable_ipv6_masquerade: true
# -- Enable native IP masquerade support in eBPF
# cilium_enable_bpf_masquerade: false
# -- Configure whether direct routing mode should route traffic via
# host stack (true) or directly and more efficiently out of BPF (false) if
# the kernel supports it. The latter has the implication that it will also
# bypass netfilter in the host namespace.
# cilium_enable_host_legacy_routing: true
# -- Enable use of the remote node identity.
# ref: https://docs.cilium.io/en/v1.7/install/upgrade/#configmap-remote-node-identity
# cilium_enable_remote_node_identity: true
# -- Enable the use of well-known identities.
# cilium_enable_well_known_identities: false
# cilium_enable_bpf_clock_probe: true
# -- Whether to enable CNP status updates.
# cilium_disable_cnp_status_updates: true

View File

@@ -0,0 +1,18 @@
# see roles/network_plugin/flannel/defaults/main.yml
## interface that should be used for flannel operations
## This is actually an inventory cluster-level item
# flannel_interface:
## Select interface that should be used for flannel operations by regexp on Name or IP
## This is actually an inventory cluster-level item
## example: select interface with ip from net 10.0.0.0/23
## single quote and escape backslashes
# flannel_interface_regexp: '10\\.0\\.[0-2]\\.\\d{1,3}'
# You can choose what type of flannel backend to use: 'vxlan', 'host-gw' or 'wireguard'
# please refer to flannel's docs : https://github.com/coreos/flannel/blob/master/README.md
# flannel_backend_type: "vxlan"
# flannel_vxlan_vni: 1
# flannel_vxlan_port: 8472
# flannel_vxlan_direct_routing: false

View File

@@ -0,0 +1,57 @@
---
# geneve or vlan
kube_ovn_network_type: geneve
# geneve, vxlan or stt. ATTENTION: some network policies cannot take effect when using vxlan, and stt needs a custom-compiled OVS kernel module
kube_ovn_tunnel_type: geneve
## The NIC used for the container network can be a NIC name or a comma-separated group of regexes, e.g. 'enp6s0f0,eth.*'; if empty, the NIC used by the default route is chosen.
# kube_ovn_iface: eth1
## The MTU used by pod iface in overlay networks (default iface MTU - 100)
# kube_ovn_mtu: 1333
## Enable hw-offload, disable traffic mirror and set the iface to the physical port. Make sure that there is an IP address bound to the physical port.
kube_ovn_hw_offload: false
# traffic mirror
kube_ovn_traffic_mirror: false
# kube_ovn_pool_cidr_ipv6: fd85:ee78:d8a6:8607::1:0000/112
# kube_ovn_default_interface_name: eth0
kube_ovn_external_address: 8.8.8.8
kube_ovn_external_address_ipv6: 2400:3200::1
kube_ovn_external_dns: alauda.cn
# kube_ovn_default_gateway: 10.233.64.1,fd85:ee78:d8a6:8607::1:0
kube_ovn_default_gateway_check: true
kube_ovn_default_logical_gateway: false
# kube_ovn_default_exclude_ips: 10.16.0.1
kube_ovn_node_switch_cidr: 100.64.0.0/16
kube_ovn_node_switch_cidr_ipv6: fd00:100:64::/64
## vlan config, set default interface name and vlan id
# kube_ovn_default_interface_name: eth0
kube_ovn_default_vlan_id: 100
kube_ovn_vlan_name: product
## pod nic type, support: veth-pair or internal-port
kube_ovn_pod_nic_type: veth_pair
## Enable load balancer
kube_ovn_enable_lb: true
## Enable network policy support
kube_ovn_enable_np: true
## Enable external vpc support
kube_ovn_enable_external_vpc: true
## Enable checksum
kube_ovn_encap_checksum: true
## enable ssl
kube_ovn_enable_ssl: false
## dpdk
kube_ovn_dpdk_enabled: false

View File

@@ -0,0 +1,64 @@
# See roles/network_plugin/kube-router/defaults/main.yml
# Enables Pod Networking -- Advertises and learns the routes to Pods via iBGP
# kube_router_run_router: true
# Enables Network Policy -- sets up iptables to provide ingress firewall for pods
# kube_router_run_firewall: true
# Enables Service Proxy -- sets up IPVS for Kubernetes Services
# see docs/kube-router.md "Caveats" section
# kube_router_run_service_proxy: false
# Add Cluster IP of the service to the RIB so that it gets advertised to the BGP peers.
# kube_router_advertise_cluster_ip: false
# Add External IP of service to the RIB so that it gets advertised to the BGP peers.
# kube_router_advertise_external_ip: false
# Add LoadBalancer IP of service status as set by the LB provider to the RIB so that it gets advertised to the BGP peers.
# kube_router_advertise_loadbalancer_ip: false
# Adjust manifest of kube-router daemonset template with DSR needed changes
# kube_router_enable_dsr: false
# Array of arbitrary extra arguments to kube-router, see
# https://github.com/cloudnativelabs/kube-router/blob/master/docs/user-guide.md
# kube_router_extra_args: []
# ASN number of the cluster, used when communicating with external BGP routers
# kube_router_cluster_asn: ~
# ASN numbers of the BGP peer to which cluster nodes will advertise cluster ip and node's pod cidr.
# kube_router_peer_router_asns: ~
# The IP address of the external router to which all nodes will peer and advertise the cluster IP and pod CIDRs.
# kube_router_peer_router_ips: ~
# The remote port of the external BGP to which all nodes will peer. If not set, default BGP port (179) will be used.
# kube_router_peer_router_ports: ~
# Sets up node CNI to allow hairpin mode; requires node reboots, see
# https://github.com/cloudnativelabs/kube-router/blob/master/docs/user-guide.md#hairpin-mode
# kube_router_support_hairpin_mode: false
# Select DNS Policy ClusterFirstWithHostNet, ClusterFirst, etc.
# kube_router_dns_policy: ClusterFirstWithHostNet
# Array of annotations for master
# kube_router_annotations_master: []
# Array of annotations for every node
# kube_router_annotations_node: []
# Array of common annotations for every node
# kube_router_annotations_all: []
# Enables scraping kube-router metrics with Prometheus
# kube_router_enable_metrics: false
# Path to serve Prometheus metrics on
# kube_router_metrics_path: /metrics
# Prometheus metrics port to use
# kube_router_metrics_port: 9255

View File

@@ -0,0 +1,6 @@
---
# private interface, on a l2-network
macvlan_interface: "eth1"
# Enable NAT on the default gateway network interface
enable_nat_default_gateway: true

View File

@@ -0,0 +1,64 @@
# see roles/network_plugin/weave/defaults/main.yml
# Weave's network password for encryption, if null then no network encryption.
# weave_password: ~
# If set to 1, disable checking for new Weave Net versions (default is blank,
# i.e. check is enabled)
# weave_checkpoint_disable: false
# Soft limit on the number of connections between peers. Defaults to 100.
# weave_conn_limit: 100
# Weave Net defaults to enabling hairpin on the bridge side of the veth pair
# for containers attached. If you need to disable hairpin, e.g. your kernel is
# one of those that can panic if hairpin is enabled, then you can disable it by
# setting `HAIRPIN_MODE=false`.
# weave_hairpin_mode: true
# The range of IP addresses used by Weave Net and the subnet they are placed in
# (CIDR format; default 10.32.0.0/12)
# weave_ipalloc_range: "{{ kube_pods_subnet }}"
# Set to 0 to disable Network Policy Controller (default is on)
# weave_expect_npc: "{{ enable_network_policy }}"
# List of addresses of peers in the Kubernetes cluster (default is to fetch the
# list from the api-server)
# weave_kube_peers: ~
# Set the initialization mode of the IP Address Manager (defaults to consensus
# amongst the KUBE_PEERS)
# weave_ipalloc_init: ~
# Set the IP address used as a gateway from the Weave network to the host
# network - this is useful if you are configuring the addon as a static pod.
# weave_expose_ip: ~
# Address and port that the Weave Net daemon will serve Prometheus-style
# metrics on (defaults to 0.0.0.0:6782)
# weave_metrics_addr: ~
# Address and port that the Weave Net daemon will serve status requests on
# (defaults to disabled)
# weave_status_addr: ~
# Weave Net defaults to 1376 bytes, but you can set a smaller size if your
# underlying network has a tighter limit, or set a larger size for better
# performance if your network supports jumbo frames (e.g. 8916)
# weave_mtu: 1376
# Set to 1 to preserve the client source IP address when accessing Service
# annotated with `service.spec.externalTrafficPolicy=Local`. The feature works
# only with Weave IPAM (default).
# weave_no_masq_local: true
# set to nft to use nftables backend for iptables (default is iptables)
# weave_iptables_backend: iptables
# Extra variables that are passed to launch.sh, useful for enabling seed mode, see
# https://www.weave.works/docs/net/latest/tasks/ipam/ipam/
# weave_extra_args: ~
# Extra variables for weave_npc that are passed to launch.sh, useful for changing the log level, e.g. --log-level=error
# weave_npc_extra_args: ~

View File

@@ -0,0 +1,44 @@
# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# node1 ansible_ssh_host=95.54.0.12 # ip=10.3.0.1
# node2 ansible_ssh_host=95.54.0.13 # ip=10.3.0.2
# node3 ansible_ssh_host=95.54.0.14 # ip=10.3.0.3
# node4 ansible_ssh_host=95.54.0.15 # ip=10.3.0.4
# node5 ansible_ssh_host=95.54.0.16 # ip=10.3.0.5
# node6 ansible_ssh_host=95.54.0.17 # ip=10.3.0.6
#
# ## GlusterFS nodes
# ## Set disk_volume_device_1 to desired device for gluster brick, if different to /dev/vdb (default).
# ## As in the previous case, you can set ip to give direct communication on internal IPs
# gfs_node1 ansible_ssh_host=95.54.0.18 # disk_volume_device_1=/dev/vdc ip=10.3.0.7
# gfs_node2 ansible_ssh_host=95.54.0.19 # disk_volume_device_1=/dev/vdc ip=10.3.0.8
# gfs_node3 ansible_ssh_host=95.54.0.20 # disk_volume_device_1=/dev/vdc ip=10.3.0.9
# [kube_control_plane]
# node1
# node2
# [etcd]
# node1
# node2
# node3
# [kube_node]
# node2
# node3
# node4
# node5
# node6
# [k8s_cluster:children]
# kube_node
# kube_control_plane
# [gfs-cluster]
# gfs_node1
# gfs_node2
# gfs_node3
# [network-storage:children]
# gfs-cluster

View File

@@ -0,0 +1,32 @@
---
## CentOS/RHEL/AlmaLinux specific variables
# Use the fastestmirror yum plugin
centos_fastestmirror_enabled: false
## Flatcar Container Linux specific variables
# Disable locksmithd or leave it in its current state
coreos_locksmithd_disable: false
## Oracle Linux specific variables
# Install public repo on Oracle Linux
use_oracle_public_repo: true
fedora_coreos_packages:
- python
- python3-libselinux
- ethtool # required in kubeadm preflight phase for verifying the environment
- ipset # required in kubeadm preflight phase for verifying the environment
- conntrack-tools # required by kube-proxy
## General
# Set the hostname to inventory_hostname
override_system_hostname: true
is_fedora_coreos: false
skip_http_proxy_on_os_packages: false
# If this is true, debug information will be displayed but
# may contain some private data, so it is recommended to set it to false
# in the production environment.
unsafe_show_logs: false

View File

@@ -0,0 +1,42 @@
#!/bin/bash
set -e
BINDIR="/opt/bin"
if [[ -e $BINDIR/.bootstrapped ]]; then
exit 0
fi
ARCH=$(uname -m)
case $ARCH in
"x86_64")
PYPY_ARCH=linux64
PYPI_HASH=46818cb3d74b96b34787548343d266e2562b531ddbaf330383ba930ff1930ed5
;;
"aarch64")
PYPY_ARCH=aarch64
PYPI_HASH=2e1ae193d98bc51439642a7618d521ea019f45b8fb226940f7e334c548d2b4b9
;;
*)
echo "Unsupported Architecture: ${ARCH}"
exit 1
esac
PYTHON_VERSION=3.9
PYPY_VERSION=7.3.9
PYPY_FILENAME="pypy${PYTHON_VERSION}-v${PYPY_VERSION}-${PYPY_ARCH}"
PYPI_URL="https://downloads.python.org/pypy/${PYPY_FILENAME}.tar.bz2"
mkdir -p $BINDIR
cd $BINDIR
TAR_FILE=pyp.tar.bz2
wget -O "${TAR_FILE}" "${PYPI_URL}"
echo "${PYPI_HASH} ${TAR_FILE}" | sha256sum -c -
tar -xjf "${TAR_FILE}" && rm "${TAR_FILE}"
mv -n "${PYPY_FILENAME}" pypy3
ln -s ./pypy3/bin/pypy3 python
$BINDIR/python --version
touch $BINDIR/.bootstrapped

View File

@@ -0,0 +1,4 @@
---
- name: RHEL auto-attach subscription
command: /sbin/subscription-manager attach --auto
become: true

View File

@@ -0,0 +1,6 @@
---
- name: Converge
hosts: all
gather_facts: no
roles:
- role: bootstrap-os

View File

@@ -0,0 +1,57 @@
---
dependency:
name: galaxy
lint: |
set -e
yamllint -c ../../.yamllint .
driver:
name: vagrant
provider:
name: libvirt
platforms:
- name: ubuntu16
box: generic/ubuntu1604
cpus: 1
memory: 512
- name: ubuntu18
box: generic/ubuntu1804
cpus: 1
memory: 512
- name: ubuntu20
box: generic/ubuntu2004
cpus: 1
memory: 512
- name: centos7
box: centos/7
cpus: 1
memory: 512
- name: almalinux8
box: almalinux/8
cpus: 1
memory: 512
- name: debian9
box: generic/debian9
cpus: 1
memory: 512
- name: debian10
box: generic/debian10
cpus: 1
memory: 512
provisioner:
name: ansible
config_options:
defaults:
callbacks_enabled: profile_tasks
timeout: 120
lint:
name: ansible-lint
inventory:
group_vars:
all:
user:
name: foo
comment: My test comment
verifier:
name: testinfra
lint:
name: flake8

View File

@@ -0,0 +1,11 @@
import os
import testinfra.utils.ansible_runner
testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
os.environ['MOLECULE_INVENTORY_FILE']
).get_hosts('all')
def test_python(host):
assert host.exists('python3') or host.exists('python')

View File

@@ -0,0 +1,13 @@
---
- name: Enable EPEL repo for Amazon Linux
yum_repository:
name: epel
file: epel
description: Extra Packages for Enterprise Linux 7 - $basearch
baseurl: http://download.fedoraproject.org/pub/epel/7/$basearch
gpgcheck: yes
gpgkey: http://download.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7
skip_if_unavailable: yes
enabled: yes
repo_gpgcheck: no
when: epel_enabled

View File

@@ -0,0 +1,117 @@
---
- name: Gather host facts to get ansible_distribution_version ansible_distribution_major_version
setup:
gather_subset: '!all'
filter: ansible_distribution_*version
- name: Add proxy to yum.conf or dnf.conf if http_proxy is defined
ini_file:
path: "{{ ( (ansible_distribution_major_version | int) < 8) | ternary('/etc/yum.conf','/etc/dnf/dnf.conf') }}"
section: main
option: proxy
value: "{{ http_proxy | default(omit) }}"
state: "{{ http_proxy | default(False) | ternary('present', 'absent') }}"
no_extra_spaces: true
mode: 0644
become: true
when: not skip_http_proxy_on_os_packages
# For Oracle Linux install public repo
- name: Download Oracle Linux public yum repo
get_url:
url: https://yum.oracle.com/public-yum-ol7.repo
dest: /etc/yum.repos.d/public-yum-ol7.repo
when:
- use_oracle_public_repo|default(true)
- '''ID="ol"'' in os_release.stdout_lines'
- (ansible_distribution_version | float) < 7.6
environment: "{{ proxy_env }}"
- name: Enable Oracle Linux repo
ini_file:
dest: /etc/yum.repos.d/public-yum-ol7.repo
section: "{{ item }}"
option: enabled
value: "1"
mode: 0644
with_items:
- ol7_latest
- ol7_addons
- ol7_developer_EPEL
when:
- use_oracle_public_repo|default(true)
- '''ID="ol"'' in os_release.stdout_lines'
- (ansible_distribution_version | float) < 7.6
- name: Install EPEL for Oracle Linux repo package
package:
name: "oracle-epel-release-el{{ ansible_distribution_major_version }}"
state: present
when:
- use_oracle_public_repo|default(true)
- '''ID="ol"'' in os_release.stdout_lines'
- (ansible_distribution_version | float) >= 7.6
- name: Enable Oracle Linux repo
ini_file:
dest: "/etc/yum.repos.d/oracle-linux-ol{{ ansible_distribution_major_version }}.repo"
section: "ol{{ ansible_distribution_major_version }}_addons"
option: "{{ item.option }}"
value: "{{ item.value }}"
mode: 0644
with_items:
- { option: "name", value: "ol{{ ansible_distribution_major_version }}_addons" }
- { option: "enabled", value: "1" }
- { option: "baseurl", value: "http://yum.oracle.com/repo/OracleLinux/OL{{ ansible_distribution_major_version }}/addons/$basearch/" }
when:
- use_oracle_public_repo|default(true)
- '''ID="ol"'' in os_release.stdout_lines'
- (ansible_distribution_version | float) >= 7.6
- name: Enable Centos extra repo for Oracle Linux
ini_file:
dest: "/etc/yum.repos.d/centos-extras.repo"
section: "extras"
option: "{{ item.option }}"
value: "{{ item.value }}"
mode: 0644
with_items:
- { option: "name", value: "CentOS-{{ ansible_distribution_major_version }} - Extras" }
- { option: "enabled", value: "1" }
- { option: "gpgcheck", value: "0" }
- { option: "baseurl", value: "http://mirror.centos.org/centos/{{ ansible_distribution_major_version }}/extras/$basearch/{% if ansible_distribution_major_version|int > 7 %}os/{% endif %}" }
when:
- use_oracle_public_repo|default(true)
- '''ID="ol"'' in os_release.stdout_lines'
- (ansible_distribution_version | float) >= 7.6
- (ansible_distribution_version | float) < 9
# CentOS ships with python installed
- name: Check presence of fastestmirror.conf
stat:
path: /etc/yum/pluginconf.d/fastestmirror.conf
get_attributes: no
get_checksum: no
get_mime: no
register: fastestmirror
# the fastestmirror plugin can actually slow down Ansible deployments
- name: Disable fastestmirror plugin if requested
lineinfile:
dest: /etc/yum/pluginconf.d/fastestmirror.conf
regexp: "^enabled=.*"
line: "enabled=0"
state: present
become: true
when:
- fastestmirror.stat.exists
- not centos_fastestmirror_enabled
# libselinux-python is required on SELinux enabled hosts
# See https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#managed-node-requirements
- name: Install libselinux python package
package:
name: "{{ ( (ansible_distribution_major_version | int) < 8) | ternary('libselinux-python','python3-libselinux') }}"
state: present
become: true

View File

@@ -0,0 +1,16 @@
---
# ClearLinux ships with Python installed
- name: Install basic package to run containers
package:
name: containers-basic
state: present
- name: Make sure docker service is enabled
systemd:
name: docker
masked: false
enabled: true
daemon_reload: true
state: started
become: true

View File

@@ -0,0 +1,37 @@
---
# CoreOS ships without Python installed
- name: Check if bootstrap is needed
raw: stat /opt/bin/.bootstrapped
register: need_bootstrap
failed_when: false
changed_when: false
tags:
- facts
- name: Force binaries directory for Container Linux by CoreOS and Flatcar
set_fact:
bin_dir: "/opt/bin"
tags:
- facts
- name: Run bootstrap.sh
script: bootstrap.sh
become: true
environment: "{{ proxy_env }}"
when:
- need_bootstrap.rc != 0
- name: Set the ansible_python_interpreter fact
set_fact:
ansible_python_interpreter: "{{ bin_dir }}/python"
tags:
- facts
- name: Disable auto-upgrade
systemd:
name: locksmithd.service
masked: true
state: stopped
when:
- coreos_locksmithd_disable

View File

@@ -0,0 +1,76 @@
---
# Some Debian based distros ship without Python installed
- name: Check if bootstrap is needed
raw: which python3
register: need_bootstrap
failed_when: false
changed_when: false
# This command should always run, even in check mode
check_mode: false
tags:
- facts
- name: Check http::proxy in apt configuration files
raw: apt-config dump | grep -qsi 'Acquire::http::proxy'
register: need_http_proxy
failed_when: false
changed_when: false
# This command should always run, even in check mode
check_mode: false
- name: Add http_proxy to /etc/apt/apt.conf if http_proxy is defined
raw: echo 'Acquire::http::proxy "{{ http_proxy }}";' >> /etc/apt/apt.conf
become: true
when:
- http_proxy is defined
- need_http_proxy.rc != 0
- not skip_http_proxy_on_os_packages
- name: Check https::proxy in apt configuration files
raw: apt-config dump | grep -qsi 'Acquire::https::proxy'
register: need_https_proxy
failed_when: false
changed_when: false
# This command should always run, even in check mode
check_mode: false
- name: Add https_proxy to /etc/apt/apt.conf if https_proxy is defined
raw: echo 'Acquire::https::proxy "{{ https_proxy }}";' >> /etc/apt/apt.conf
become: true
when:
- https_proxy is defined
- need_https_proxy.rc != 0
- not skip_http_proxy_on_os_packages
- name: Install python3
raw:
apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y python3-minimal
become: true
when:
- need_bootstrap.rc != 0
- name: Update Apt cache
raw: apt-get update --allow-releaseinfo-change
become: true
when:
- '''ID=debian'' in os_release.stdout_lines'
- '''VERSION_ID="10"'' in os_release.stdout_lines or ''VERSION_ID="11"'' in os_release.stdout_lines'
register: bootstrap_update_apt_result
changed_when:
- '"changed its" in bootstrap_update_apt_result.stdout'
- '"value from" in bootstrap_update_apt_result.stdout'
ignore_errors: true
- name: Set the ansible_python_interpreter fact
set_fact:
ansible_python_interpreter: "/usr/bin/python3"
# Workaround for https://github.com/ansible/ansible/issues/25543
- name: Install dbus for the hostname module
package:
name: dbus
state: present
use: apt
become: true

View File

@@ -0,0 +1,46 @@
---
- name: Check if bootstrap is needed
raw: which python
register: need_bootstrap
failed_when: false
changed_when: false
tags:
- facts
- name: Remove podman network cni
raw: "podman network rm podman"
become: true
ignore_errors: true # noqa ignore-errors
when: need_bootstrap.rc != 0
- name: Clean up possible pending packages on fedora coreos
raw: "export http_proxy={{ http_proxy | default('') }};rpm-ostree cleanup -p }}"
become: true
when: need_bootstrap.rc != 0
- name: Install required packages on fedora coreos
raw: "export http_proxy={{ http_proxy | default('') }};rpm-ostree install --allow-inactive {{ fedora_coreos_packages|join(' ') }}"
become: true
when: need_bootstrap.rc != 0
- name: Reboot immediately for updated ostree
raw: "nohup bash -c 'sleep 5s && shutdown -r now'"
become: true
ignore_errors: true # noqa ignore-errors
ignore_unreachable: yes
when: need_bootstrap.rc != 0
- name: Wait for the reboot to complete
wait_for_connection:
timeout: 240
connect_timeout: 20
delay: 5
sleep: 5
when: need_bootstrap.rc != 0
- name: Store the fact if this is a Fedora CoreOS host
set_fact:
is_fedora_coreos: True
tags:
- facts

View File

@@ -0,0 +1,36 @@
---
# Some Fedora based distros ship without Python installed
- name: Check if bootstrap is needed
raw: which python
register: need_bootstrap
failed_when: false
changed_when: false
tags:
- facts
- name: Add proxy to dnf.conf if http_proxy is defined
ini_file:
path: "/etc/dnf/dnf.conf"
section: main
option: proxy
value: "{{ http_proxy | default(omit) }}"
state: "{{ http_proxy | default(False) | ternary('present', 'absent') }}"
no_extra_spaces: true
mode: 0644
become: true
when: not skip_http_proxy_on_os_packages
- name: Install python3 on fedora
raw: "dnf install --assumeyes --quiet python3"
become: true
when:
- need_bootstrap.rc != 0
# libselinux-python3 is required on SELinux enabled hosts
# See https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#managed-node-requirements
- name: Install libselinux-python3
package:
name: libselinux-python3
state: present
become: true

View File

@@ -0,0 +1,37 @@
---
# Flatcar Container Linux ships without Python installed
- name: Check if bootstrap is needed
raw: stat /opt/bin/.bootstrapped
register: need_bootstrap
failed_when: false
changed_when: false
tags:
- facts
- name: Force binaries directory for Flatcar Container Linux by Kinvolk
set_fact:
bin_dir: "/opt/bin"
tags:
- facts
- name: Run bootstrap.sh
script: bootstrap.sh
become: true
environment: "{{ proxy_env }}"
when:
- need_bootstrap.rc != 0
- name: Set the ansible_python_interpreter fact
set_fact:
ansible_python_interpreter: "{{ bin_dir }}/python"
tags:
- facts
- name: Disable auto-upgrade
systemd:
name: locksmithd.service
masked: true
state: stopped
when:
- coreos_locksmithd_disable

View File

@@ -0,0 +1,85 @@
---
# OpenSUSE ships with Python installed
- name: Gather host facts to get ansible_distribution_version ansible_distribution_major_version
setup:
gather_subset: '!all'
filter: ansible_distribution_*version
- name: Check that /etc/sysconfig/proxy file exists
stat:
path: /etc/sysconfig/proxy
get_attributes: no
get_checksum: no
get_mime: no
register: stat_result
- name: Create the /etc/sysconfig/proxy empty file
file: # noqa risky-file-permissions
path: /etc/sysconfig/proxy
state: touch
when:
- http_proxy is defined or https_proxy is defined
- not stat_result.stat.exists
- name: Set the http_proxy in /etc/sysconfig/proxy
lineinfile:
path: /etc/sysconfig/proxy
regexp: '^HTTP_PROXY='
line: 'HTTP_PROXY="{{ http_proxy }}"'
become: true
when:
- http_proxy is defined
- name: Set the https_proxy in /etc/sysconfig/proxy
lineinfile:
path: /etc/sysconfig/proxy
regexp: '^HTTPS_PROXY='
line: 'HTTPS_PROXY="{{ https_proxy }}"'
become: true
when:
- https_proxy is defined
- name: Enable proxies
lineinfile:
path: /etc/sysconfig/proxy
regexp: '^PROXY_ENABLED='
line: 'PROXY_ENABLED="yes"'
become: true
when:
- http_proxy is defined or https_proxy is defined
# Required for zypper module
- name: Install python-xml
shell: zypper refresh && zypper --non-interactive install python-xml
changed_when: false
become: true
tags:
- facts
# Without this package, the get_url module fails when trying to handle https
- name: Install python-cryptography
zypper:
name: python-cryptography
state: present
update_cache: true
become: true
when:
- ansible_distribution_version is version('15.4', '<')
- name: Install python3-cryptography
zypper:
name: python3-cryptography
state: present
update_cache: true
become: true
when:
- ansible_distribution_version is version('15.4', '>=')
# Nerdctl needs some basic packages to get an environment up
- name: Install basic dependencies
zypper:
name:
- iptables
- apparmor-parser
state: present
become: true

View File

@@ -0,0 +1,121 @@
---
- name: Gather host facts to get ansible_distribution_version ansible_distribution_major_version
setup:
gather_subset: '!all'
filter: ansible_distribution_*version
- name: Add proxy to yum.conf or dnf.conf if http_proxy is defined
ini_file:
path: "{{ ( (ansible_distribution_major_version | int) < 8) | ternary('/etc/yum.conf','/etc/dnf/dnf.conf') }}"
section: main
option: proxy
value: "{{ http_proxy | default(omit) }}"
state: "{{ http_proxy | default(False) | ternary('present', 'absent') }}"
no_extra_spaces: true
mode: 0644
become: true
when: not skip_http_proxy_on_os_packages
- name: Add proxy to RHEL subscription-manager if http_proxy is defined
command: /sbin/subscription-manager config --server.proxy_hostname={{ http_proxy | regex_replace(':\d+$') }} --server.proxy_port={{ http_proxy | regex_replace('^.*:') }}
become: true
when:
- not skip_http_proxy_on_os_packages
- http_proxy is defined
- name: Check RHEL subscription-manager status
command: /sbin/subscription-manager status
register: rh_subscription_status
changed_when: "rh_subscription_status != 0"
ignore_errors: true # noqa ignore-errors
become: true
- name: RHEL subscription Organization ID/Activation Key registration
redhat_subscription:
state: present
org_id: "{{ rh_subscription_org_id }}"
activationkey: "{{ rh_subscription_activation_key }}"
auto_attach: true
force_register: true
syspurpose:
usage: "{{ rh_subscription_usage }}"
role: "{{ rh_subscription_role }}"
service_level_agreement: "{{ rh_subscription_sla }}"
sync: true
notify: RHEL auto-attach subscription
ignore_errors: true # noqa ignore-errors
become: true
when:
- rh_subscription_org_id is defined
- rh_subscription_status.changed
# this task has no_log set to prevent logging security sensitive information such as subscription passwords
- name: RHEL subscription Username/Password registration
redhat_subscription:
state: present
username: "{{ rh_subscription_username }}"
password: "{{ rh_subscription_password }}"
auto_attach: true
force_register: true
syspurpose:
usage: "{{ rh_subscription_usage }}"
role: "{{ rh_subscription_role }}"
service_level_agreement: "{{ rh_subscription_sla }}"
sync: true
notify: RHEL auto-attach subscription
ignore_errors: true # noqa ignore-errors
become: true
no_log: "{{ not (unsafe_show_logs|bool) }}"
when:
- rh_subscription_username is defined
- rh_subscription_status.changed
# container-selinux is in extras repo
- name: Enable RHEL 7 repos
rhsm_repository:
name:
- "rhel-7-server-rpms"
- "rhel-7-server-extras-rpms"
state: enabled
when:
- rhel_enable_repos | default(True) | bool
- ansible_distribution_major_version == "7"
# container-selinux is in appstream repo
- name: Enable RHEL 8 repos
rhsm_repository:
name:
- "rhel-8-for-*-baseos-rpms"
- "rhel-8-for-*-appstream-rpms"
state: enabled
when:
- rhel_enable_repos | default(True) | bool
- ansible_distribution_major_version == "8"
- name: Check presence of fastestmirror.conf
stat:
path: /etc/yum/pluginconf.d/fastestmirror.conf
get_attributes: no
get_checksum: no
get_mime: no
register: fastestmirror
# the fastestmirror plugin can actually slow down Ansible deployments
- name: Disable fastestmirror plugin if requested
lineinfile:
dest: /etc/yum/pluginconf.d/fastestmirror.conf
regexp: "^enabled=.*"
line: "enabled=0"
state: present
become: true
when:
- fastestmirror.stat.exists
- not centos_fastestmirror_enabled
# libselinux-python is required on SELinux enabled hosts
# See https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#managed-node-requirements
- name: Install libselinux python package
package:
name: "{{ ( (ansible_distribution_major_version | int) < 8) | ternary('libselinux-python','python3-libselinux') }}"
state: present
become: true

View File

@@ -0,0 +1,100 @@
---
- name: Fetch /etc/os-release
raw: cat /etc/os-release
register: os_release
changed_when: false
# This command should always run, even in check mode
check_mode: false
- include_tasks: bootstrap-centos.yml
when: '''ID="centos"'' in os_release.stdout_lines or ''ID="ol"'' in os_release.stdout_lines or ''ID="almalinux"'' in os_release.stdout_lines or ''ID="rocky"'' in os_release.stdout_lines or ''ID="kylin"'' in os_release.stdout_lines or ''ID="uos"'' in os_release.stdout_lines or ''ID="openEuler"'' in os_release.stdout_lines'
- include_tasks: bootstrap-amazon.yml
when: '''ID="amzn"'' in os_release.stdout_lines'
- include_tasks: bootstrap-redhat.yml
when: '''ID="rhel"'' in os_release.stdout_lines'
- include_tasks: bootstrap-clearlinux.yml
when: '''ID=clear-linux-os'' in os_release.stdout_lines'
# Fedora CoreOS
- include_tasks: bootstrap-fedora-coreos.yml
when:
- '''ID=fedora'' in os_release.stdout_lines'
- '''VARIANT_ID=coreos'' in os_release.stdout_lines'
- include_tasks: bootstrap-flatcar.yml
when: '''ID=flatcar'' in os_release.stdout_lines'
- include_tasks: bootstrap-debian.yml
when: '''ID=debian'' in os_release.stdout_lines or ''ID=ubuntu'' in os_release.stdout_lines'
# Fedora "classic"
- include_tasks: bootstrap-fedora.yml
when:
- '''ID=fedora'' in os_release.stdout_lines'
- '''VARIANT_ID=coreos'' not in os_release.stdout_lines'
- include_tasks: bootstrap-opensuse.yml
when: '''ID="opensuse-leap"'' in os_release.stdout_lines or ''ID="opensuse-tumbleweed"'' in os_release.stdout_lines'
- name: Create remote_tmp because it is used by another module
file:
path: "{{ ansible_remote_tmp | default('~/.ansible/tmp') }}"
state: directory
mode: 0700
# Workaround for https://github.com/ansible/ansible/issues/42726
# (1/3)
- name: Gather host facts to get ansible_os_family
setup:
gather_subset: '!all'
filter: ansible_*
- name: Assign inventory name to unconfigured hostnames (non-CoreOS, non-Flatcar, non-Suse, non-ClearLinux, non-Fedora)
hostname:
name: "{{ inventory_hostname }}"
when:
- override_system_hostname
- ansible_os_family not in ['Suse', 'Flatcar', 'Flatcar Container Linux by Kinvolk', 'ClearLinux']
- not ansible_distribution == "Fedora"
- not is_fedora_coreos
# (2/3)
- name: Assign inventory name to unconfigured hostnames (CoreOS, Flatcar, Suse, ClearLinux and Fedora only)
command: "hostnamectl set-hostname {{ inventory_hostname }}"
register: hostname_changed
become: true
changed_when: false
when: >
override_system_hostname
and (ansible_os_family in ['Suse', 'Flatcar', 'Flatcar Container Linux by Kinvolk', 'ClearLinux']
or is_fedora_coreos
or ansible_distribution == "Fedora")
# (3/3)
- name: Update hostname fact (CoreOS, Flatcar, Suse, ClearLinux and Fedora only)
setup:
gather_subset: '!all'
filter: ansible_hostname
when: >
override_system_hostname
and (ansible_os_family in ['Suse', 'Flatcar', 'Flatcar Container Linux by Kinvolk', 'ClearLinux']
or is_fedora_coreos
or ansible_distribution == "Fedora")
- name: Install ceph-common package
package:
name:
- ceph-common
state: present
when: rbd_provisioner_enabled|default(false)
- name: Ensure bash_completion.d folder exists
file:
name: /etc/bash_completion.d/
state: directory
owner: root
group: root
mode: 0755

View File

@@ -0,0 +1,50 @@
# Ansible Role: GlusterFS
[![Build Status](https://travis-ci.org/geerlingguy/ansible-role-glusterfs.svg?branch=master)](https://travis-ci.org/geerlingguy/ansible-role-glusterfs)
Installs and configures GlusterFS on Linux.
## Requirements
For GlusterFS to connect between servers, TCP ports `24007`, `24008`, and `24009`/`49152`+ (that port, plus an additional incremented port for each additional server in the cluster; the latter if GlusterFS is version 3.4+), and TCP/UDP port `111` must be open. You can open these using whatever firewall you wish (this can easily be configured using the `geerlingguy.firewall` role).
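If you do use the `geerlingguy.firewall` role, a minimal sketch of the corresponding variables might look like the following (the variable names are that role's convention, and the brick port list assumes a small cluster on GlusterFS 3.4+; adjust to your environment):
```yaml
firewall_allowed_tcp_ports:
  - "111"    # portmapper
  - "24007"  # Gluster daemon
  - "24008"  # Gluster management
  - "49152"  # one brick port per server; add one more per additional brick
  - "49153"
  - "49154"
firewall_allowed_udp_ports:
  - "111"
```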
This role performs basic installation and setup of Gluster, but it does not configure or mount bricks (volumes), since that step is easier to do in a series of plays in your own playbook. Ansible 1.9+ includes the [`gluster_volume`](https://docs.ansible.com/ansible/latest/collections/gluster/gluster/gluster_volume_module.html) module to ease the management of Gluster volumes.
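For instance, a hedged sketch of a play that creates a replicated volume across a `gfs-cluster` group with that module (the volume name and brick path are placeholders, and inventory hostnames are assumed to be resolvable between the Gluster nodes):
```yaml
- hosts: gfs-cluster
  become: true
  tasks:
    - name: Create a replicated Gluster volume across the group
      gluster_volume:
        state: present                          # creates the volume (started by default)
        name: examplevol                        # placeholder volume name
        bricks: /mnt/xfs-drive-gluster/brick    # placeholder brick path
        replicas: "{{ groups['gfs-cluster'] | length }}"
        cluster: "{{ groups['gfs-cluster'] }}"  # peers to probe; assumes resolvable hostnames
      run_once: true
```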
## Role Variables
Available variables are listed below, along with default values (see `defaults/main.yml`):
```yaml
glusterfs_default_release: ""
```
You can specify a `default_release` for apt on Debian/Ubuntu by overriding this variable. This is helpful if you need a different package or version for the main GlusterFS packages (e.g. GlusterFS 3.5.x instead of 3.2.x with the `wheezy-backports` default release on Debian Wheezy).
```yaml
glusterfs_ppa_use: yes
glusterfs_ppa_version: "3.5"
```
For Ubuntu, specify whether to use the official Gluster PPA, and which version of the PPA to use. See Gluster's [Getting Started Guide](https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/) for more info.
## Dependencies
None.
## Example Playbook
```yaml
- hosts: server
roles:
- geerlingguy.glusterfs
```
For a real-world use example, read through [Simple GlusterFS Setup with Ansible](http://www.jeffgeerling.com/blog/simple-glusterfs-setup-ansible), a blog post by this role's author, which is included in Chapter 8 of [Ansible for DevOps](https://www.ansiblefordevops.com/).
## License
MIT / BSD
## Author Information
This role was created in 2015 by [Jeff Geerling](http://www.jeffgeerling.com/), author of [Ansible for DevOps](https://www.ansiblefordevops.com/).

View File

@@ -0,0 +1,11 @@
---
# For Ubuntu.
glusterfs_default_release: ""
glusterfs_ppa_use: yes
glusterfs_ppa_version: "4.1"
# Gluster configuration.
gluster_mount_dir: /mnt/gluster
gluster_volume_node_mount_dir: /mnt/xfs-drive-gluster
gluster_brick_dir: "{{ gluster_volume_node_mount_dir }}/brick"
gluster_brick_name: gluster

View File

@@ -0,0 +1,30 @@
---
dependencies: []
galaxy_info:
author: geerlingguy
description: GlusterFS installation for Linux.
company: "Midwestern Mac, LLC"
license: "license (BSD, MIT)"
min_ansible_version: 2.0
platforms:
- name: EL
versions:
- 6
- 7
- name: Ubuntu
versions:
- precise
- trusty
- xenial
- name: Debian
versions:
- wheezy
- jessie
galaxy_tags:
- system
- networking
- cloud
- clustering
- files
- sharing

View File

@@ -0,0 +1,16 @@
---
# This is meant for Ubuntu and RedHat installations, where apparently the glusterfs-client is not used from inside
# hyperkube and needs to be installed as part of the system.
# Setup/install tasks.
- include: setup-RedHat.yml
when: ansible_os_family == 'RedHat' and groups['gfs-cluster'] is defined
- include: setup-Debian.yml
when: ansible_os_family == 'Debian' and groups['gfs-cluster'] is defined
- name: Ensure Gluster mount directories exist.
file: "path={{ item }} state=directory mode=0775"
with_items:
- "{{ gluster_mount_dir }}"
when: ansible_os_family in ["Debian","RedHat"] and groups['gfs-cluster'] is defined

View File

@@ -0,0 +1,24 @@
---
- name: Add PPA for GlusterFS.
apt_repository:
repo: 'ppa:gluster/glusterfs-{{ glusterfs_ppa_version }}'
state: present
update_cache: yes
register: glusterfs_ppa_added
when: glusterfs_ppa_use
- name: Ensure GlusterFS client will reinstall if the PPA was just added. # noqa 503
apt:
name: "{{ item }}"
state: absent
with_items:
- glusterfs-client
when: glusterfs_ppa_added.changed
- name: Ensure GlusterFS client is installed.
apt:
name: "{{ item }}"
state: present
default_release: "{{ glusterfs_default_release }}"
with_items:
- glusterfs-client

View File

@@ -0,0 +1,10 @@
---
- name: Install Prerequisites
package: name={{ item }} state=present
with_items:
- "centos-release-gluster{{ glusterfs_default_release }}"
- name: Install Packages
package: name={{ item }} state=present
with_items:
- glusterfs-client

View File

@@ -0,0 +1,13 @@
---
# For Ubuntu.
glusterfs_default_release: ""
glusterfs_ppa_use: yes
glusterfs_ppa_version: "3.12"
# Gluster configuration.
gluster_mount_dir: /mnt/gluster
gluster_volume_node_mount_dir: /mnt/xfs-drive-gluster
gluster_brick_dir: "{{ gluster_volume_node_mount_dir }}/brick"
gluster_brick_name: gluster
# Default device to mount for xfs formatting, terraform overrides this by setting the variable in the inventory.
disk_volume_device_1: /dev/vdb

View File

@@ -0,0 +1,30 @@
---
dependencies: []
galaxy_info:
author: geerlingguy
description: GlusterFS installation for Linux.
company: "Midwestern Mac, LLC"
license: "license (BSD, MIT)"
min_ansible_version: 2.0
platforms:
- name: EL
versions:
- 6
- 7
- name: Ubuntu
versions:
- precise
- trusty
- xenial
- name: Debian
versions:
- wheezy
- jessie
galaxy_tags:
- system
- networking
- cloud
- clustering
- files
- sharing

View File

@@ -0,0 +1,94 @@
---
# Include variables and define needed variables.
- name: Include OS-specific variables.
include_vars: "{{ ansible_os_family }}.yml"
# Install xfs package
- name: install xfs Debian
apt: name=xfsprogs state=present
when: ansible_os_family == "Debian"
- name: install xfs RedHat
package: name=xfsprogs state=present
when: ansible_os_family == "RedHat"
# Format external volumes in xfs
- name: Format volumes in xfs
filesystem: "fstype=xfs dev={{ disk_volume_device_1 }}"
# Mount external volumes
- name: mounting new xfs filesystem
mount: "name={{ gluster_volume_node_mount_dir }} src={{ disk_volume_device_1 }} fstype=xfs state=mounted"
# Setup/install tasks.
- include: setup-RedHat.yml
when: ansible_os_family == 'RedHat'
- include: setup-Debian.yml
when: ansible_os_family == 'Debian'
- name: Ensure GlusterFS is started and enabled at boot.
service: "name={{ glusterfs_daemon }} state=started enabled=yes"
- name: Ensure Gluster brick and mount directories exist.
file: "path={{ item }} state=directory mode=0775"
with_items:
- "{{ gluster_brick_dir }}"
- "{{ gluster_mount_dir }}"
- name: Configure Gluster volume with replicas
gluster_volume:
state: present
name: "{{ gluster_brick_name }}"
brick: "{{ gluster_brick_dir }}"
replicas: "{{ groups['gfs-cluster'] | length }}"
cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip']|default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
host: "{{ inventory_hostname }}"
force: yes
run_once: true
when: groups['gfs-cluster']|length > 1
- name: Configure Gluster volume without replicas
gluster_volume:
state: present
name: "{{ gluster_brick_name }}"
brick: "{{ gluster_brick_dir }}"
cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip']|default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
host: "{{ inventory_hostname }}"
force: yes
run_once: true
when: groups['gfs-cluster']|length <= 1
- name: Mount glusterfs to retrieve disk size
mount:
name: "{{ gluster_mount_dir }}"
src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"
fstype: glusterfs
opts: "defaults,_netdev"
state: mounted
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
- name: Get Gluster disk size
setup: filter=ansible_mounts
register: mounts_data
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
- name: Set Gluster disk size to variable
set_fact:
gluster_disk_size_gb: "{{ (mounts_data.ansible_facts.ansible_mounts | selectattr('mount', 'equalto', gluster_mount_dir) | map(attribute='size_total') | first | int / (1024*1024*1024)) | int }}"
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
- name: Create file on GlusterFS
template:
dest: "{{ gluster_mount_dir }}/.test-file.txt"
src: test-file.txt
mode: 0644
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
- name: Unmount glusterfs
mount:
name: "{{ gluster_mount_dir }}"
fstype: glusterfs
src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"
state: unmounted
when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]

View File

@@ -0,0 +1,26 @@
---
- name: Add PPA for GlusterFS.
apt_repository:
repo: 'ppa:gluster/glusterfs-{{ glusterfs_ppa_version }}'
state: present
update_cache: yes
register: glusterfs_ppa_added
when: glusterfs_ppa_use
- name: Ensure GlusterFS will reinstall if the PPA was just added. # noqa 503
apt:
name: "{{ item }}"
state: absent
with_items:
- glusterfs-server
- glusterfs-client
when: glusterfs_ppa_added.changed
- name: Ensure GlusterFS is installed.
apt:
name: "{{ item }}"
state: present
default_release: "{{ glusterfs_default_release }}"
with_items:
- glusterfs-server
- glusterfs-client

View File

@@ -0,0 +1,11 @@
---
- name: Install Prerequisites
package: name={{ item }} state=present
with_items:
- "centos-release-gluster{{ glusterfs_default_release }}"
- name: Install Packages
package: name={{ item }} state=present
with_items:
- glusterfs-server
- glusterfs-client

View File

@@ -0,0 +1,2 @@
---
glusterfs_daemon: glusterd

View File

@@ -0,0 +1,2 @@
---
glusterfs_daemon: glusterd

View File

@@ -0,0 +1,23 @@
---
- name: Kubernetes Apps | Lay Down k8s GlusterFS Endpoint and PV
template:
src: "{{ item.file }}"
dest: "{{ kube_config_dir }}/{{ item.dest }}"
mode: 0644
with_items:
- { file: glusterfs-kubernetes-endpoint.json.j2, type: ep, dest: glusterfs-kubernetes-endpoint.json}
- { file: glusterfs-kubernetes-pv.yml.j2, type: pv, dest: glusterfs-kubernetes-pv.yml}
- { file: glusterfs-kubernetes-endpoint-svc.json.j2, type: svc, dest: glusterfs-kubernetes-endpoint-svc.json}
register: gluster_pv
when: inventory_hostname == groups['kube_control_plane'][0] and groups['gfs-cluster'] is defined and hostvars[groups['gfs-cluster'][0]].gluster_disk_size_gb is defined
- name: Kubernetes Apps | Set GlusterFS endpoint and PV
kube:
name: glusterfs
namespace: default
kubectl: "{{ bin_dir }}/kubectl"
resource: "{{ item.item.type }}"
filename: "{{ kube_config_dir }}/{{ item.item.dest }}"
state: "{{ item.changed | ternary('latest','present') }}"
with_items: "{{ gluster_pv.results }}"
when: inventory_hostname == groups['kube_control_plane'][0] and groups['gfs-cluster'] is defined

View File

@@ -0,0 +1,12 @@
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "glusterfs"
},
"spec": {
"ports": [
{"port": 1}
]
}
}

View File

@@ -0,0 +1,24 @@
{
"kind": "Endpoints",
"apiVersion": "v1",
"metadata": {
"name": "glusterfs"
},
"subsets": [
{% for host in groups['gfs-cluster'] %}
{
"addresses": [
{
"ip": "{{hostvars[host]['ip']|default(hostvars[host].ansible_default_ipv4['address'])}}"
}
],
"ports": [
{
"port": 1
}
]
}{%- if not loop.last %}, {% endif -%}
{% endfor %}
]
}

View File

@@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: glusterfs
spec:
capacity:
storage: "{{ hostvars[groups['gfs-cluster'][0]].gluster_disk_size_gb }}Gi"
accessModes:
- ReadWriteMany
glusterfs:
endpoints: glusterfs
path: gluster
readOnly: false
persistentVolumeReclaimPolicy: Retain

View File

@@ -0,0 +1,3 @@
---
dependencies:
- {role: kubernetes-pv/ansible, tags: apps}

View File

@@ -0,0 +1,27 @@
# Deploy Heketi/Glusterfs into Kubespray/Kubernetes
This playbook aims to automate [this](https://github.com/heketi/heketi/blob/master/docs/admin/install-kubernetes.md) tutorial. It deploys heketi/glusterfs into kubernetes and sets up a storageclass.
## Important notice
> Due to resource limits on the current project maintainers and general lack of contributions we are considering placing Heketi into a [near-maintenance mode](https://github.com/heketi/heketi#important-notice)
## Client Setup
Heketi provides a CLI that provides users with a means to administer the deployment and configuration of GlusterFS in Kubernetes. [Download and install the heketi-cli](https://github.com/heketi/heketi/releases) on your client machine.
## Install
Copy the inventory.yml.sample over to inventory/sample/k8s_heketi_inventory.yml and change it according to your setup.
```shell
ansible-playbook --ask-become -i inventory/sample/k8s_heketi_inventory.yml contrib/network-storage/heketi/heketi.yml
```
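Once the playbook completes, workloads can request GlusterFS-backed storage through the StorageClass it creates. A minimal sketch of such a claim (the class name `gluster` is an assumption; check `kubectl get storageclass` for the actual name in your cluster):
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-example-pvc    # hypothetical claim name
spec:
  storageClassName: gluster    # assumed name of the StorageClass created by the playbook
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```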
## Tear down
```shell
ansible-playbook --ask-become -i inventory/sample/k8s_heketi_inventory.yml contrib/network-storage/heketi/heketi-tear-down.yml
```
Add `--extra-vars "heketi_remove_lvm=true"` to the command above to remove LVM packages from the system

View File

@@ -0,0 +1,9 @@
---
- hosts: kube_control_plane[0]
roles:
- { role: tear-down }
- hosts: heketi-node
become: yes
roles:
- { role: tear-down-disks }

View File

@@ -0,0 +1,10 @@
---
- hosts: heketi-node
roles:
- { role: prepare }
- hosts: kube_control_plane[0]
tags:
- "provision"
roles:
- { role: provision }

View File

@@ -0,0 +1,33 @@
all:
vars:
heketi_admin_key: "11elfeinhundertundelf"
heketi_user_key: "!!einseinseins"
glusterfs_daemonset:
readiness_probe:
timeout_seconds: 3
initial_delay_seconds: 3
liveness_probe:
timeout_seconds: 3
initial_delay_seconds: 10
children:
k8s_cluster:
vars:
kubelet_fail_swap_on: false
children:
kube_control_plane:
hosts:
node1:
etcd:
hosts:
node2:
kube_node:
hosts: &kube_nodes
node1:
node2:
node3:
node4:
heketi-node:
vars:
disk_volume_device_1: "/dev/vdb"
hosts:
<<: *kube_nodes

View File

@@ -0,0 +1 @@
jmespath

View File

@@ -0,0 +1,24 @@
---
- name: "Load lvm kernel modules"
become: true
with_items:
- "dm_snapshot"
- "dm_mirror"
- "dm_thin_pool"
modprobe:
name: "{{ item }}"
state: "present"
- name: "Install glusterfs mount utils (RedHat)"
become: true
package:
name: "glusterfs-fuse"
state: "present"
when: "ansible_os_family == 'RedHat'"
- name: "Install glusterfs mount utils (Debian)"
become: true
apt:
name: "glusterfs-client"
state: "present"
when: "ansible_os_family == 'Debian'"

View File

@@ -0,0 +1,3 @@
---
- name: "stop port forwarding"
command: "killall "

View File

@@ -0,0 +1,64 @@
---
# Bootstrap heketi
- name: "Get state of heketi service, deployment and pods."
register: "initial_heketi_state"
changed_when: false
command: "{{ bin_dir }}/kubectl get services,deployments,pods --selector=deploy-heketi --output=json"
- name: "Bootstrap heketi."
when:
- "(initial_heketi_state.stdout|from_json|json_query(\"items[?kind=='Service']\"))|length == 0"
- "(initial_heketi_state.stdout|from_json|json_query(\"items[?kind=='Deployment']\"))|length == 0"
- "(initial_heketi_state.stdout|from_json|json_query(\"items[?kind=='Pod']\"))|length == 0"
include_tasks: "bootstrap/deploy.yml"
# Prepare heketi topology
- name: "Get heketi initial pod state."
register: "initial_heketi_pod"
command: "{{ bin_dir }}/kubectl get pods --selector=deploy-heketi=pod,glusterfs=heketi-pod,name=deploy-heketi --output=json"
changed_when: false
- name: "Ensure heketi bootstrap pod is up."
assert:
that: "(initial_heketi_pod.stdout|from_json|json_query('items[*]'))|length == 1"
- name: Store the initial heketi pod name
set_fact:
initial_heketi_pod_name: "{{ initial_heketi_pod.stdout|from_json|json_query(\"items[*].metadata.name|[0]\") }}"
- name: "Test heketi topology."
changed_when: false
register: "heketi_topology"
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
- name: "Load heketi topology."
when: "heketi_topology.stdout|from_json|json_query(\"clusters[*].nodes[*]\")|flatten|length == 0"
include_tasks: "bootstrap/topology.yml"
# Provision heketi database volume
- name: "Prepare heketi volumes."
include_tasks: "bootstrap/volumes.yml"
# Remove bootstrap heketi
- name: "Tear down bootstrap."
include_tasks: "bootstrap/tear-down.yml"
# Prepare heketi storage
- name: "Test heketi storage."
command: "{{ bin_dir }}/kubectl get secrets,endpoints,services,jobs --output=json"
changed_when: false
register: "heketi_storage_state"
# ensure endpoints actually exist before trying to move database data to it
- name: "Create heketi storage."
include_tasks: "bootstrap/storage.yml"
vars:
secret_query: "items[?metadata.name=='heketi-storage-secret' && kind=='Secret']"
endpoints_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Endpoints']"
service_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Service']"
job_query: "items[?metadata.name=='heketi-storage-copy-job' && kind=='Job']"
when:
- "heketi_storage_state.stdout|from_json|json_query(secret_query)|length == 0"
- "heketi_storage_state.stdout|from_json|json_query(endpoints_query)|length == 0"
- "heketi_storage_state.stdout|from_json|json_query(service_query)|length == 0"
- "heketi_storage_state.stdout|from_json|json_query(job_query)|length == 0"

View File

@@ -0,0 +1,27 @@
---
- name: "Kubernetes Apps | Lay Down Heketi Bootstrap"
become: true
template:
src: "heketi-bootstrap.json.j2"
dest: "{{ kube_config_dir }}/heketi-bootstrap.json"
mode: 0640
register: "rendering"
- name: "Kubernetes Apps | Install and configure Heketi Bootstrap"
kube:
name: "GlusterFS"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/heketi-bootstrap.json"
state: "{{ rendering.changed | ternary('latest', 'present') }}"
- name: "Wait for heketi bootstrap to complete."
changed_when: false
register: "initial_heketi_state"
vars:
initial_heketi_state: { stdout: "{}" }
pods_query: "items[?kind=='Pod'].status.conditions|[0][?type=='Ready'].status|[0]"
deployments_query: "items[?kind=='Deployment'].status.conditions|[0][?type=='Available'].status|[0]"
command: "{{ bin_dir }}/kubectl get services,deployments,pods --selector=deploy-heketi --output=json"
until:
- "initial_heketi_state.stdout|from_json|json_query(pods_query) == 'True'"
- "initial_heketi_state.stdout|from_json|json_query(deployments_query) == 'True'"
retries: 60
delay: 5

View File

@@ -0,0 +1,33 @@
---
- name: "Test heketi storage."
command: "{{ bin_dir }}/kubectl get secrets,endpoints,services,jobs --output=json"
changed_when: false
register: "heketi_storage_state"
- name: "Create heketi storage."
kube:
name: "GlusterFS"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/heketi-storage-bootstrap.json"
state: "present"
vars:
secret_query: "items[?metadata.name=='heketi-storage-secret' && kind=='Secret']"
endpoints_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Endpoints']"
service_query: "items[?metadata.name=='heketi-storage-endpoints' && kind=='Service']"
job_query: "items[?metadata.name=='heketi-storage-copy-job' && kind=='Job']"
when:
- "heketi_storage_state.stdout|from_json|json_query(secret_query)|length == 0"
- "heketi_storage_state.stdout|from_json|json_query(endpoints_query)|length == 0"
- "heketi_storage_state.stdout|from_json|json_query(service_query)|length == 0"
- "heketi_storage_state.stdout|from_json|json_query(job_query)|length == 0"
register: "heketi_storage_result"
- name: "Get state of heketi database copy job."
command: "{{ bin_dir }}/kubectl get jobs --output=json"
changed_when: false
register: "heketi_storage_state"
vars:
heketi_storage_state: { stdout: "{}" }
job_query: "items[?metadata.name=='heketi-storage-copy-job' && kind=='Job' && status.succeeded==1]"
until:
- "heketi_storage_state.stdout|from_json|json_query(job_query)|length == 1"
retries: 60
delay: 5

View File

@@ -0,0 +1,14 @@
---
- name: "Get existing Heketi deploy resources."
command: "{{ bin_dir }}/kubectl get all --selector=\"deploy-heketi\" -o=json"
register: "heketi_resources"
changed_when: false
- name: "Delete bootstrap Heketi."
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"deploy-heketi\""
when: "heketi_resources.stdout|from_json|json_query('items[*]')|length > 0"
- name: "Ensure there is nothing left over." # noqa 301
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"deploy-heketi\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
retries: 60
delay: 5

View File

@@ -0,0 +1,27 @@
---
- name: "Get heketi topology."
changed_when: false
register: "heketi_topology"
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
- name: "Render heketi topology template."
become: true
vars: { nodes: "{{ groups['heketi-node'] }}" }
register: "render"
template:
src: "topology.json.j2"
dest: "{{ kube_config_dir }}/topology.json"
mode: 0644
- name: "Copy topology configuration into container."
changed_when: false
command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ initial_heketi_pod_name }}:/tmp/topology.json"
- name: "Load heketi topology." # noqa 503
when: "render.changed"
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"
register: "load_heketi"
- name: "Get heketi topology."
changed_when: false
register: "heketi_topology"
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
until: "heketi_topology.stdout|from_json|json_query(\"clusters[*].nodes[*].devices[?state=='online'].id\")|flatten|length == groups['heketi-node']|length"
retries: 60
delay: 5

View File

@@ -0,0 +1,41 @@
---
- name: "Get heketi volume ids."
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume list --json"
changed_when: false
register: "heketi_volumes"
- name: "Get heketi volumes."
changed_when: false
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume info {{ volume_id }} --json"
with_items: "{{ heketi_volumes.stdout|from_json|json_query(\"volumes[*]\") }}"
loop_control: { loop_var: "volume_id" }
register: "volumes_information"
- name: "Test heketi database volume."
set_fact: { heketi_database_volume_exists: true }
with_items: "{{ volumes_information.results }}"
loop_control: { loop_var: "volume_information" }
vars: { volume: "{{ volume_information.stdout|from_json }}" }
when: "volume.name == 'heketidbstorage'"
- name: "Provision database volume."
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} setup-openshift-heketi-storage"
when: "heketi_database_volume_exists is undefined"
- name: "Copy configuration from pod." # noqa 301
become: true
command: "{{ bin_dir }}/kubectl cp {{ initial_heketi_pod_name }}:/heketi-storage.json {{ kube_config_dir }}/heketi-storage-bootstrap.json"
- name: "Get heketi volume ids."
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume list --json"
changed_when: false
register: "heketi_volumes"
- name: "Get heketi volumes."
changed_when: false
command: "{{ bin_dir }}/kubectl exec {{ initial_heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} volume info {{ volume_id }} --json"
with_items: "{{ heketi_volumes.stdout|from_json|json_query(\"volumes[*]\") }}"
loop_control: { loop_var: "volume_id" }
register: "volumes_information"
- name: "Test heketi database volume."
set_fact: { heketi_database_volume_created: true }
with_items: "{{ volumes_information.results }}"
loop_control: { loop_var: "volume_information" }
vars: { volume: "{{ volume_information.stdout|from_json }}" }
when: "volume.name == 'heketidbstorage'"
- name: "Ensure heketi database volume exists."
assert: { that: "heketi_database_volume_created is defined", msg: "Heketi database volume does not exist." }

View File

@@ -0,0 +1,4 @@
---
- name: "Clean up left over jobs."
command: "{{ bin_dir }}/kubectl delete jobs,pods --selector=\"deploy-heketi\""
changed_when: false

View File

@@ -0,0 +1,44 @@
---
- name: "Kubernetes Apps | Lay Down GlusterFS Daemonset"
template:
src: "glusterfs-daemonset.json.j2"
dest: "{{ kube_config_dir }}/glusterfs-daemonset.json"
mode: 0644
become: true
register: "rendering"
- name: "Kubernetes Apps | Install and configure GlusterFS daemonset"
kube:
name: "GlusterFS"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/glusterfs-daemonset.json"
state: "{{ rendering.changed | ternary('latest', 'present') }}"
- name: "Kubernetes Apps | Label GlusterFS nodes"
include_tasks: "glusterfs/label.yml"
with_items: "{{ groups['heketi-node'] }}"
loop_control:
loop_var: "node"
- name: "Kubernetes Apps | Wait for daemonset to become available."
register: "daemonset_state"
command: "{{ bin_dir }}/kubectl get daemonset glusterfs --output=json --ignore-not-found=true"
changed_when: false
vars:
daemonset_state: { stdout: "{}" }
ready: "{{ daemonset_state.stdout|from_json|json_query(\"status.numberReady\") }}"
desired: "{{ daemonset_state.stdout|from_json|json_query(\"status.desiredNumberScheduled\") }}"
until: "ready | int >= 3"
retries: 60
delay: 5
- name: "Kubernetes Apps | Lay Down Heketi Service Account"
template:
src: "heketi-service-account.json.j2"
dest: "{{ kube_config_dir }}/heketi-service-account.json"
mode: 0644
become: true
register: "rendering"
- name: "Kubernetes Apps | Install and configure Heketi Service Account"
kube:
name: "GlusterFS"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/heketi-service-account.json"
state: "{{ rendering.changed | ternary('latest', 'present') }}"

View File

@@ -0,0 +1,19 @@
---
- name: Get storage nodes
register: "label_present"
command: "{{ bin_dir }}/kubectl get node --selector=storagenode=glusterfs,kubernetes.io/hostname={{ node }} --ignore-not-found=true"
changed_when: false
- name: "Assign storage label"
when: "label_present.stdout_lines|length == 0"
command: "{{ bin_dir }}/kubectl label node {{ node }} storagenode=glusterfs"
- name: Get storage nodes again
register: "label_present"
command: "{{ bin_dir }}/kubectl get node --selector=storagenode=glusterfs,kubernetes.io/hostname={{ node }} --ignore-not-found=true"
changed_when: false
- name: Ensure the label has been set
  assert:
    that: "label_present.stdout_lines|length > 0"
    msg: "Node {{ node }} has not been assigned with label storagenode=glusterfs."

View File

@@ -0,0 +1,34 @@
---
- name: "Kubernetes Apps | Lay Down Heketi"
become: true
template:
src: "heketi-deployment.json.j2"
dest: "{{ kube_config_dir }}/heketi-deployment.json"
mode: 0644
register: "rendering"
- name: "Kubernetes Apps | Install and configure Heketi"
kube:
name: "GlusterFS"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/heketi-deployment.json"
state: "{{ rendering.changed | ternary('latest', 'present') }}"
- name: "Ensure heketi is up and running."
changed_when: false
register: "heketi_state"
vars:
heketi_state:
stdout: "{}"
pods_query: "items[?kind=='Pod'].status.conditions|[0][?type=='Ready'].status|[0]"
deployments_query: "items[?kind=='Deployment'].status.conditions|[0][?type=='Available'].status|[0]"
command: "{{ bin_dir }}/kubectl get deployments,pods --selector=glusterfs --output=json"
until:
- "heketi_state.stdout|from_json|json_query(pods_query) == 'True'"
- "heketi_state.stdout|from_json|json_query(deployments_query) == 'True'"
retries: 60
delay: 5
- name: Set the Heketi pod name
set_fact:
heketi_pod_name: "{{ heketi_state.stdout|from_json|json_query(\"items[?kind=='Pod'].metadata.name|[0]\") }}"

View File

@@ -0,0 +1,30 @@
---
- name: "Kubernetes Apps | GlusterFS"
include_tasks: "glusterfs.yml"
- name: "Kubernetes Apps | Heketi Secrets"
include_tasks: "secret.yml"
- name: "Kubernetes Apps | Test Heketi"
register: "heketi_service_state"
command: "{{ bin_dir }}/kubectl get service heketi-storage-endpoints -o=name --ignore-not-found=true"
changed_when: false
- name: "Kubernetes Apps | Bootstrap Heketi"
when: "heketi_service_state.stdout == \"\""
include_tasks: "bootstrap.yml"
- name: "Kubernetes Apps | Heketi"
include_tasks: "heketi.yml"
- name: "Kubernetes Apps | Heketi Topology"
include_tasks: "topology.yml"
- name: "Kubernetes Apps | Heketi Storage"
include_tasks: "storage.yml"
- name: "Kubernetes Apps | Storage Class"
include_tasks: "storageclass.yml"
- name: "Clean up"
include_tasks: "cleanup.yml"

View File

@@ -0,0 +1,45 @@
---
- name: Get clusterrolebindings
register: "clusterrolebinding_state"
command: "{{ bin_dir }}/kubectl get clusterrolebinding heketi-gluster-admin -o=name --ignore-not-found=true"
changed_when: false
- name: "Kubernetes Apps | Deploy cluster role binding."
when: "clusterrolebinding_state.stdout | length == 0"
command: "{{ bin_dir }}/kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account"
- name: Get clusterrolebindings again
register: "clusterrolebinding_state"
command: "{{ bin_dir }}/kubectl get clusterrolebinding heketi-gluster-admin -o=name --ignore-not-found=true"
changed_when: false
- name: Make sure that clusterrolebindings are present now
assert:
that: "clusterrolebinding_state.stdout | length > 0"
msg: "Cluster role binding is not present."
- name: Get the heketi-config-secret secret
register: "secret_state"
command: "{{ bin_dir }}/kubectl get secret heketi-config-secret -o=name --ignore-not-found=true"
changed_when: false
- name: "Render Heketi secret configuration."
become: true
template:
src: "heketi.json.j2"
dest: "{{ kube_config_dir }}/heketi.json"
mode: 0644
- name: "Deploy Heketi config secret"
when: "secret_state.stdout | length == 0"
command: "{{ bin_dir }}/kubectl create secret generic heketi-config-secret --from-file={{ kube_config_dir }}/heketi.json"
- name: Get the heketi-config-secret secret again
register: "secret_state"
command: "{{ bin_dir }}/kubectl get secret heketi-config-secret -o=name --ignore-not-found=true"
changed_when: false
- name: Make sure the heketi-config-secret secret exists now
assert:
that: "secret_state.stdout | length > 0"
msg: "Heketi config secret is not present."

View File

@@ -0,0 +1,15 @@
---
- name: "Kubernetes Apps | Lay Down Heketi Storage"
become: true
vars: { nodes: "{{ groups['heketi-node'] }}" }
template:
src: "heketi-storage.json.j2"
dest: "{{ kube_config_dir }}/heketi-storage.json"
mode: 0644
register: "rendering"
- name: "Kubernetes Apps | Install and configure Heketi Storage"
kube:
name: "GlusterFS"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/heketi-storage.json"
state: "{{ rendering.changed | ternary('latest', 'present') }}"

View File

@@ -0,0 +1,26 @@
---
- name: "Test storage class."
command: "{{ bin_dir }}/kubectl get storageclass gluster --ignore-not-found=true --output=json"
register: "storageclass"
changed_when: false
- name: "Test heketi service."
command: "{{ bin_dir }}/kubectl get service heketi --ignore-not-found=true --output=json"
register: "heketi_service"
changed_when: false
- name: "Ensure heketi service is available."
assert: { that: "heketi_service.stdout != \"\"" }
- name: "Render storage class configuration."
become: true
vars:
endpoint_address: "{{ (heketi_service.stdout|from_json).spec.clusterIP }}"
template:
src: "storageclass.yml.j2"
dest: "{{ kube_config_dir }}/storageclass.yml"
mode: 0644
register: "rendering"
- name: "Kubernetes Apps | Install and configure Storace Class"
kube:
name: "GlusterFS"
kubectl: "{{ bin_dir }}/kubectl"
filename: "{{ kube_config_dir }}/storageclass.yml"
state: "{{ rendering.changed | ternary('latest', 'present') }}"

View File

@@ -0,0 +1,26 @@
---
- name: "Get heketi topology."
register: "heketi_topology"
changed_when: false
command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
- name: "Render heketi topology template."
become: true
vars: { nodes: "{{ groups['heketi-node'] }}" }
register: "rendering"
template:
src: "topology.json.j2"
dest: "{{ kube_config_dir }}/topology.json"
mode: 0644
- name: "Copy topology configuration into container." # noqa 503
when: "rendering.changed"
command: "{{ bin_dir }}/kubectl cp {{ kube_config_dir }}/topology.json {{ heketi_pod_name }}:/tmp/topology.json"
- name: "Load heketi topology." # noqa 503
when: "rendering.changed"
command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology load --json=/tmp/topology.json"
- name: "Get heketi topology."
register: "heketi_topology"
changed_when: false
command: "{{ bin_dir }}/kubectl exec {{ heketi_pod_name }} -- heketi-cli --user admin --secret {{ heketi_admin_key }} topology info --json"
until: "heketi_topology.stdout|from_json|json_query(\"clusters[*].nodes[*].devices[?state=='online'].id\")|flatten|length == groups['heketi-node']|length"
retries: 60
delay: 5

View File

@@ -0,0 +1,149 @@
{
"kind": "DaemonSet",
"apiVersion": "apps/v1",
"metadata": {
"name": "glusterfs",
"labels": {
"glusterfs": "deployment"
},
"annotations": {
"description": "GlusterFS Daemon Set",
"tags": "glusterfs"
}
},
"spec": {
"selector": {
"matchLabels": {
"glusterfs-node": "daemonset"
}
},
"template": {
"metadata": {
"name": "glusterfs",
"labels": {
"glusterfs-node": "daemonset"
}
},
"spec": {
"nodeSelector": {
"storagenode" : "glusterfs"
},
"hostNetwork": true,
"containers": [
{
"image": "gluster/gluster-centos:gluster4u0_centos7",
"imagePullPolicy": "IfNotPresent",
"name": "glusterfs",
"volumeMounts": [
{
"name": "glusterfs-heketi",
"mountPath": "/var/lib/heketi"
},
{
"name": "glusterfs-run",
"mountPath": "/run"
},
{
"name": "glusterfs-lvm",
"mountPath": "/run/lvm"
},
{
"name": "glusterfs-etc",
"mountPath": "/etc/glusterfs"
},
{
"name": "glusterfs-logs",
"mountPath": "/var/log/glusterfs"
},
{
"name": "glusterfs-config",
"mountPath": "/var/lib/glusterd"
},
{
"name": "glusterfs-dev",
"mountPath": "/dev"
},
{
"name": "glusterfs-cgroup",
"mountPath": "/sys/fs/cgroup"
}
],
"securityContext": {
"capabilities": {},
"privileged": true
},
"readinessProbe": {
"timeoutSeconds": {{ glusterfs_daemonset.readiness_probe.timeout_seconds }},
"initialDelaySeconds": {{ glusterfs_daemonset.readiness_probe.initial_delay_seconds }},
"exec": {
"command": [
"/bin/bash",
"-c",
"systemctl status glusterd.service"
]
}
},
"livenessProbe": {
"timeoutSeconds": {{ glusterfs_daemonset.liveness_probe.timeout_seconds }},
"initialDelaySeconds": {{ glusterfs_daemonset.liveness_probe.initial_delay_seconds }},
"exec": {
"command": [
"/bin/bash",
"-c",
"systemctl status glusterd.service"
]
}
}
}
],
"volumes": [
{
"name": "glusterfs-heketi",
"hostPath": {
"path": "/var/lib/heketi"
}
},
{
"name": "glusterfs-run"
},
{
"name": "glusterfs-lvm",
"hostPath": {
"path": "/run/lvm"
}
},
{
"name": "glusterfs-etc",
"hostPath": {
"path": "/etc/glusterfs"
}
},
{
"name": "glusterfs-logs",
"hostPath": {
"path": "/var/log/glusterfs"
}
},
{
"name": "glusterfs-config",
"hostPath": {
"path": "/var/lib/glusterd"
}
},
{
"name": "glusterfs-dev",
"hostPath": {
"path": "/dev"
}
},
{
"name": "glusterfs-cgroup",
"hostPath": {
"path": "/sys/fs/cgroup"
}
}
]
}
}
}
}

View File

@@ -0,0 +1,138 @@
{
"kind": "List",
"apiVersion": "v1",
"items": [
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "deploy-heketi",
"labels": {
"glusterfs": "heketi-service",
"deploy-heketi": "support"
},
"annotations": {
"description": "Exposes Heketi Service"
}
},
"spec": {
"selector": {
"name": "deploy-heketi"
},
"ports": [
{
"name": "deploy-heketi",
"port": 8080,
"targetPort": 8080
}
]
}
},
{
"kind": "Deployment",
"apiVersion": "apps/v1",
"metadata": {
"name": "deploy-heketi",
"labels": {
"glusterfs": "heketi-deployment",
"deploy-heketi": "deployment"
},
"annotations": {
"description": "Defines how to deploy Heketi"
}
},
"spec": {
"selector": {
"matchLabels": {
"name": "deploy-heketi"
}
},
"replicas": 1,
"template": {
"metadata": {
"name": "deploy-heketi",
"labels": {
"name": "deploy-heketi",
"glusterfs": "heketi-pod",
"deploy-heketi": "pod"
}
},
"spec": {
"serviceAccountName": "heketi-service-account",
"containers": [
{
"image": "heketi/heketi:9",
"imagePullPolicy": "Always",
"name": "deploy-heketi",
"env": [
{
"name": "HEKETI_EXECUTOR",
"value": "kubernetes"
},
{
"name": "HEKETI_DB_PATH",
"value": "/var/lib/heketi/heketi.db"
},
{
"name": "HEKETI_FSTAB",
"value": "/var/lib/heketi/fstab"
},
{
"name": "HEKETI_SNAPSHOT_LIMIT",
"value": "14"
},
{
"name": "HEKETI_KUBE_GLUSTER_DAEMONSET",
"value": "y"
}
],
"ports": [
{
"containerPort": 8080
}
],
"volumeMounts": [
{
"name": "db",
"mountPath": "/var/lib/heketi"
},
{
"name": "config",
"mountPath": "/etc/heketi"
}
],
"readinessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 3,
"httpGet": {
"path": "/hello",
"port": 8080
}
},
"livenessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 10,
"httpGet": {
"path": "/hello",
"port": 8080
}
}
}
],
"volumes": [
{
"name": "db"
},
{
"name": "config",
"secret": {
"secretName": "heketi-config-secret"
}
}
]
}
}
}
}
]
}

View File

@@ -0,0 +1,164 @@
{
"kind": "List",
"apiVersion": "v1",
"items": [
{
"kind": "Secret",
"apiVersion": "v1",
"metadata": {
"name": "heketi-db-backup",
"labels": {
"glusterfs": "heketi-db",
"heketi": "db"
}
},
"data": {
},
"type": "Opaque"
},
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "heketi",
"labels": {
"glusterfs": "heketi-service",
"deploy-heketi": "support"
},
"annotations": {
"description": "Exposes Heketi Service"
}
},
"spec": {
"selector": {
"name": "heketi"
},
"ports": [
{
"name": "heketi",
"port": 8080,
"targetPort": 8080
}
]
}
},
{
"kind": "Deployment",
"apiVersion": "apps/v1",
"metadata": {
"name": "heketi",
"labels": {
"glusterfs": "heketi-deployment"
},
"annotations": {
"description": "Defines how to deploy Heketi"
}
},
"spec": {
"selector": {
"matchLabels": {
"name": "heketi"
}
},
"replicas": 1,
"template": {
"metadata": {
"name": "heketi",
"labels": {
"name": "heketi",
"glusterfs": "heketi-pod"
}
},
"spec": {
"serviceAccountName": "heketi-service-account",
"containers": [
{
"image": "heketi/heketi:9",
"imagePullPolicy": "Always",
"name": "heketi",
"env": [
{
"name": "HEKETI_EXECUTOR",
"value": "kubernetes"
},
{
"name": "HEKETI_DB_PATH",
"value": "/var/lib/heketi/heketi.db"
},
{
"name": "HEKETI_FSTAB",
"value": "/var/lib/heketi/fstab"
},
{
"name": "HEKETI_SNAPSHOT_LIMIT",
"value": "14"
},
{
"name": "HEKETI_KUBE_GLUSTER_DAEMONSET",
"value": "y"
}
],
"ports": [
{
"containerPort": 8080
}
],
"volumeMounts": [
{
"mountPath": "/backupdb",
"name": "heketi-db-secret"
},
{
"name": "db",
"mountPath": "/var/lib/heketi"
},
{
"name": "config",
"mountPath": "/etc/heketi"
}
],
"readinessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 3,
"httpGet": {
"path": "/hello",
"port": 8080
}
},
"livenessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 10,
"httpGet": {
"path": "/hello",
"port": 8080
}
}
}
],
"volumes": [
{
"name": "db",
"glusterfs": {
"endpoints": "heketi-storage-endpoints",
"path": "heketidbstorage"
}
},
{
"name": "heketi-db-secret",
"secret": {
"secretName": "heketi-db-backup"
}
},
{
"name": "config",
"secret": {
"secretName": "heketi-config-secret"
}
}
]
}
}
}
}
]
}

View File

@@ -0,0 +1,7 @@
{
"apiVersion": "v1",
"kind": "ServiceAccount",
"metadata": {
"name": "heketi-service-account"
}
}

View File

@@ -0,0 +1,54 @@
{
"apiVersion": "v1",
"kind": "List",
"items": [
{
"kind": "Endpoints",
"apiVersion": "v1",
"metadata": {
"name": "heketi-storage-endpoints",
"creationTimestamp": null
},
"subsets": [
{% set nodeblocks = [] %}
{% for node in nodes %}
{% set nodeblock %}
{
"addresses": [
{
"ip": "{{ hostvars[node].ip }}"
}
],
"ports": [
{
"port": 1
}
]
}
{% endset %}
{% if nodeblocks.append(nodeblock) %}{% endif %}
{% endfor %}
{{ nodeblocks|join(',') }}
]
},
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "heketi-storage-endpoints",
"creationTimestamp": null
},
"spec": {
"ports": [
{
"port": 1,
"targetPort": 0
}
]
},
"status": {
"loadBalancer": {}
}
}
]
}

View File

@@ -0,0 +1,44 @@
{
"_port_comment": "Heketi Server Port Number",
"port": "8080",
"_use_auth": "Enable JWT authorization. Please enable for deployment",
"use_auth": true,
"_jwt": "Private keys for access",
"jwt": {
"_admin": "Admin has access to all APIs",
"admin": {
"key": "{{ heketi_admin_key }}"
},
"_user": "User only has access to /volumes endpoint",
"user": {
"key": "{{ heketi_user_key }}"
}
},
"_glusterfs_comment": "GlusterFS Configuration",
"glusterfs": {
"_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
"executor": "kubernetes",
"_db_comment": "Database file name",
"db": "/var/lib/heketi/heketi.db",
"kubeexec": {
"rebalance_on_expansion": true
},
"sshexec": {
"rebalance_on_expansion": true,
"keyfile": "/etc/heketi/private_key",
"fstab": "/etc/fstab",
"port": "22",
"user": "root",
"sudo": false
}
},
"_backup_db_to_kube_secret": "Backup the heketi database to a Kubernetes secret when running in Kubernetes. Default is off.",
"backup_db_to_kube_secret": false
}

View File

@@ -0,0 +1,12 @@
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: gluster
annotations:
storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/glusterfs
parameters:
resturl: "http://{{ endpoint_address }}:8080"
restuser: "admin"
restuserkey: "{{ heketi_admin_key }}"

View File

@@ -0,0 +1,34 @@
{
"clusters": [
{
"nodes": [
{% set nodeblocks = [] %}
{% for node in nodes %}
{% set nodeblock %}
{
"node": {
"hostnames": {
"manage": [
"{{ node }}"
],
"storage": [
"{{ hostvars[node].ip }}"
]
},
"zone": 1
},
"devices": [
{
"name": "{{ hostvars[node]['disk_volume_device_1'] }}",
"destroydata": false
}
]
}
{% endset %}
{% if nodeblocks.append(nodeblock) %}{% endif %}
{% endfor %}
{{ nodeblocks|join(',') }}
]
}
]
}

View File

@@ -0,0 +1,2 @@
---
heketi_remove_lvm: false

View File

@@ -0,0 +1,52 @@
---
- name: "Install lvm utils (RedHat)"
become: true
package:
name: "lvm2"
state: "present"
when: "ansible_os_family == 'RedHat'"
- name: "Install lvm utils (Debian)"
become: true
apt:
name: "lvm2"
state: "present"
when: "ansible_os_family == 'Debian'"
- name: "Get volume group information."
environment:
PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
become: true
shell: "pvs {{ disk_volume_device_1 }} --option vg_name | tail -n+2"
register: "volume_groups"
ignore_errors: true # noqa ignore-errors
changed_when: false
- name: "Remove volume groups." # noqa 301
environment:
PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
become: true
command: "vgremove {{ volume_group }} --yes"
with_items: "{{ volume_groups.stdout_lines }}"
loop_control: { loop_var: "volume_group" }
- name: "Remove physical volume from cluster disks." # noqa 301
environment:
PATH: "{{ ansible_env.PATH }}:/sbin" # Make sure we can workaround RH / CentOS conservative path management
become: true
command: "pvremove {{ disk_volume_device_1 }} --yes"
ignore_errors: true # noqa ignore-errors
- name: "Remove lvm utils (RedHat)"
become: true
package:
name: "lvm2"
state: "absent"
when: "ansible_os_family == 'RedHat' and heketi_remove_lvm"
- name: "Remove lvm utils (Debian)"
become: true
apt:
name: "lvm2"
state: "absent"
when: "ansible_os_family == 'Debian' and heketi_remove_lvm"

View File

@@ -0,0 +1,51 @@
---
- name: Remove storage class. # noqa 301
command: "{{ bin_dir }}/kubectl delete storageclass gluster"
ignore_errors: true # noqa ignore-errors
- name: Tear down heketi. # noqa 301
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\""
ignore_errors: true # noqa ignore-errors
- name: Tear down heketi. # noqa 301
command: "{{ bin_dir }}/kubectl delete all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\""
ignore_errors: true # noqa ignore-errors
- name: Tear down bootstrap.
include_tasks: "../../provision/tasks/bootstrap/tear-down.yml"
- name: Ensure there is nothing left over. # noqa 301
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-pod\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
retries: 60
delay: 5
- name: Ensure there is nothing left over. # noqa 301
command: "{{ bin_dir }}/kubectl get all,service,jobs,deployment,secret --selector=\"glusterfs=heketi-deployment\" -o=json"
register: "heketi_result"
until: "heketi_result.stdout|from_json|json_query('items[*]')|length == 0"
retries: 60
delay: 5
- name: Tear down glusterfs. # noqa 301
command: "{{ bin_dir }}/kubectl delete daemonset.extensions/glusterfs"
ignore_errors: true # noqa ignore-errors
- name: Remove heketi storage service. # noqa 301
command: "{{ bin_dir }}/kubectl delete service heketi-storage-endpoints"
ignore_errors: true # noqa ignore-errors
- name: Remove heketi gluster role binding # noqa 301
command: "{{ bin_dir }}/kubectl delete clusterrolebinding heketi-gluster-admin"
ignore_errors: true # noqa ignore-errors
- name: Remove heketi config secret # noqa 301
command: "{{ bin_dir }}/kubectl delete secret heketi-config-secret"
ignore_errors: true # noqa ignore-errors
- name: Remove heketi db backup # noqa 301
command: "{{ bin_dir }}/kubectl delete secret heketi-db-backup"
ignore_errors: true # noqa ignore-errors
- name: Remove heketi service account # noqa 301
command: "{{ bin_dir }}/kubectl delete serviceaccount heketi-service-account"
ignore_errors: true # noqa ignore-errors
- name: Get secrets
command: "{{ bin_dir }}/kubectl get secrets --output=\"json\""
register: "secrets"
changed_when: false
- name: Remove heketi storage secret
vars: { storage_query: "items[?metadata.annotations.\"kubernetes.io/service-account.name\"=='heketi-service-account'].metadata.name|[0]" }
command: "{{ bin_dir }}/kubectl delete secret {{ secrets.stdout|from_json|json_query(storage_query) }}"
when: "storage_query is defined"
ignore_errors: true # noqa ignore-errors