ansible role update

This commit is contained in:
havelight-ee
2022-12-09 13:38:44 +09:00
parent 8391ca915d
commit 3af7e034fc
890 changed files with 79234 additions and 0 deletions

View File

@@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

View File

@@ -0,0 +1,17 @@
apiVersion: v1
appVersion: 3.5.5
description: Centralized service for maintaining configuration information, naming,
providing distributed synchronization, and providing group services.
home: https://zookeeper.apache.org/
icon: https://zookeeper.apache.org/images/zookeeper_small.gif
kubeVersion: ^1.10.0-0
maintainers:
- email: lachlan.evenson@microsoft.com
name: lachie83
- email: owensk@google.com
name: kow3ns
name: zookeeper
sources:
- https://github.com/apache/zookeeper
- https://github.com/kubernetes/contrib/tree/master/statefulsets/zookeeper
version: 2.1.4

View File

@@ -0,0 +1,6 @@
approvers:
- lachie83
- kow3ns
reviewers:
- lachie83
- kow3ns

View File

@@ -0,0 +1,145 @@
# incubator/zookeeper
This Helm chart provides an implementation of the ZooKeeper [StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) found in Kubernetes Contrib [ZooKeeper StatefulSet](https://github.com/kubernetes/contrib/tree/master/statefulsets/zookeeper).
## Prerequisites
* Kubernetes 1.10+
* PersistentVolume support on the underlying infrastructure
* A dynamic provisioner for the PersistentVolumes
* A familiarity with [Apache ZooKeeper 3.5.x](https://zookeeper.apache.org/doc/r3.5.5/)
## Chart Components
This chart will do the following:
* Create a fixed size ZooKeeper ensemble using a [StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/).
* Create a [PodDisruptionBudget](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-disruption-budget/) so `kubectl drain` will respect the quorum size of the ensemble.
* Create a [Headless Service](https://kubernetes.io/docs/concepts/services-networking/service/) to control the domain of the ZooKeeper ensemble.
* Create a Service configured to connect to the available ZooKeeper instance on the configured client port.
* Optionally apply a [Pod Anti-Affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature) to spread the ZooKeeper ensemble across nodes.
* Optionally start JMX Exporter and ZooKeeper Exporter containers inside ZooKeeper pods.
* Optionally create a job which creates ZooKeeper chroots (e.g. `/kafka1`).
* Optionally create a Prometheus ServiceMonitor for each enabled exporter container (see the example after this list).
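All of the optional components above are off by default. A minimal sketch of enabling them at install time, using the values keys defined in values.yaml (the `/kafka1` chroot is only an illustrative value):
```console
$ helm install --name zookeeper incubator/zookeeper \
    --set exporters.jmx.enabled=true \
    --set exporters.zookeeper.enabled=true \
    --set prometheus.serviceMonitor.enabled=true \
    --set jobs.chroots.enabled=true \
    --set jobs.chroots.config.create='{/kafka1}'
```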
## Installing the Chart
You can install the chart with the release name `zookeeper` as below.
```console
$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
$ helm install --name zookeeper incubator/zookeeper
```
If you do not specify a name, Helm will generate one for you.
### Installed Components
You can use `helm status` to view all of the installed components.
```console
$ helm status zookeeper
NAME: zookeeper
LAST DEPLOYED: Wed Apr 11 17:09:48 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1beta1/PodDisruptionBudget
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
zookeeper N/A 1 1 2m
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
zookeeper-headless ClusterIP None <none> 2181/TCP,3888/TCP,2888/TCP 2m
zookeeper ClusterIP 10.98.179.165 <none> 2181/TCP 2m
==> v1beta1/StatefulSet
NAME DESIRED CURRENT AGE
zookeeper 3 3 2m
==> monitoring.coreos.com/v1/ServiceMonitor
NAME AGE
zookeeper 2m
zookeeper-exporter 2m
```
1. `statefulsets/zookeeper` is the StatefulSet created by the chart.
1. `po/zookeeper-<0|1|2>` are the Pods created by the StatefulSet. Each Pod has a single container running a ZooKeeper server.
1. `svc/zookeeper-headless` is the Headless Service used to control the network domain of the ZooKeeper ensemble.
1. `svc/zookeeper` is a Service that can be used by clients to connect to an available ZooKeeper server.
1. `servicemonitor/zookeeper` is a Prometheus ServiceMonitor which scrapes the jmx-exporter metrics endpoint.
1. `servicemonitor/zookeeper-exporter` is a Prometheus ServiceMonitor which scrapes the zookeeper-exporter metrics endpoint (see the example below).
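With the exporters enabled, a quick way to spot-check both metrics endpoints is to port-forward to a pod; the ports below are the chart defaults (9404 for jmx-exporter, 9141 for zookeeper-exporter):
```console
$ kubectl port-forward zookeeper-0 9404:9404 9141:9141 &
$ curl -s http://127.0.0.1:9404/metrics | head
$ curl -s http://127.0.0.1:9141/metrics | head
```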
## Configuration
You can specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
```console
$ helm install --name my-release -f values.yaml incubator/zookeeper
```
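For example, a hypothetical values.yaml that grows the ensemble and its storage (all keys appear in the chart's default values.yaml):
```console
$ cat > values.yaml <<'EOF'
replicaCount: 5
persistence:
  size: 10Gi
env:
  ZK_HEAP_SIZE: 1G
EOF
$ helm install --name my-release -f values.yaml incubator/zookeeper
```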
## Default Values
- You can find all user-configurable settings, their defaults and commentary about them in [values.yaml](values.yaml).
## Deep Dive
### Image Details
The image used for this chart is based on Alpine 3.9.0.
### JVM Details
The Java Virtual Machine used for this chart is the OpenJDK JVM 8u192 JRE (headless).
### ZooKeeper Details
The chart defaults to ZooKeeper 3.5.5 (the latest released 3.5.x version at the time of writing).
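If you need a different 3.5.x release, the tag can be pinned through the image values (assuming the tag exists in the configured repository):
```console
$ helm install --name zookeeper incubator/zookeeper --set image.tag=3.5.6
```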
## Failover
You can test failover by killing the leader. Insert a key:
```console
$ kubectl exec zookeeper-0 -- bin/zkCli.sh create /foo bar;
$ kubectl exec zookeeper-2 -- bin/zkCli.sh get /foo;
```
Watch existing members:
```console
$ kubectl run --attach bbox --image=busybox --restart=Never -- sh -c 'while true; do for i in 0 1 2; do echo zk-${i} $(echo stat | nc <pod-name>-${i}.<headless-service-name> 2181 | grep Mode); sleep 1; done; done';
zk-2 Mode: follower
zk-0 Mode: follower
zk-1 Mode: leader
zk-2 Mode: follower
```
Delete Pods and wait for the StatefulSet controller to bring them back up:
```console
$ kubectl delete po -l app=zookeeper
$ kubectl get po --watch-only
NAME READY STATUS RESTARTS AGE
zookeeper-0 0/1 Running 0 35s
zookeeper-0 1/1 Running 0 50s
zookeeper-1 0/1 Pending 0 0s
zookeeper-1 0/1 Pending 0 0s
zookeeper-1 0/1 ContainerCreating 0 0s
zookeeper-1 0/1 Running 0 19s
zookeeper-1 1/1 Running 0 40s
zookeeper-2 0/1 Pending 0 0s
zookeeper-2 0/1 Pending 0 0s
zookeeper-2 0/1 ContainerCreating 0 0s
zookeeper-2 0/1 Running 0 19s
zookeeper-2 1/1 Running 0 41s
```
Check the previously inserted key:
```console
$ kubectl exec zookeeper-1 -- bin/zkCli.sh get /foo
Session id = 0x354887858e80035, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
bar
```
## Scaling
ZooKeeper cannot be safely scaled in versions prior to 3.5.x.
## Limitations
* Only supports storage options that have backends for persistent volume claims.

View File

@@ -0,0 +1,7 @@
Thank you for installing ZooKeeper on your Kubernetes cluster. More information
about ZooKeeper can be found at https://zookeeper.apache.org/doc/current/
Your connection string should look like:
{{ template "zookeeper.fullname" . }}-0.{{ template "zookeeper.fullname" . }}-headless:{{ .Values.service.ports.client.port }},{{ template "zookeeper.fullname" . }}-1.{{ template "zookeeper.fullname" . }}-headless:{{ .Values.service.ports.client.port }},...
You can also use the client service {{ template "zookeeper.fullname" . }}:{{ .Values.service.ports.client.port }} to connect to an available ZooKeeper server.
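{{/*
For example, a release (and chart) named "zookeeper" with the default client
port renders the connection string above roughly as:
zookeeper-0.zookeeper-headless:2181,zookeeper-1.zookeeper-headless:2181,...
*/}}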

View File

@@ -0,0 +1,46 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "zookeeper.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "zookeeper.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "zookeeper.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
The name of the zookeeper headless service.
*/}}
{{- define "zookeeper.headless" -}}
{{- printf "%s-headless" (include "zookeeper.fullname" .) | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
The name of the zookeeper chroots job.
*/}}
{{- define "zookeeper.chroots" -}}
{{- printf "%s-chroots" (include "zookeeper.fullname" .) | trunc 63 | trimSuffix "-" -}}
{{- end -}}
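{{/*
Sketch of what the helpers above render for a release named "my-release" with
no overrides: zookeeper.fullname -> "my-release-zookeeper",
zookeeper.headless -> "my-release-zookeeper-headless",
zookeeper.chroots -> "my-release-zookeeper-chroots". Because of the
"contains" check in zookeeper.fullname, a release named "zookeeper" collapses
the full name to just "zookeeper".
*/}}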

View File

@@ -0,0 +1,19 @@
{{- if .Values.exporters.jmx.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-jmx-exporter
labels:
app: {{ template "zookeeper.name" . }}
chart: {{ template "zookeeper.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
config.yml: |-
hostPort: 127.0.0.1:{{ .Values.env.JMXPORT }}
lowercaseOutputName: {{ .Values.exporters.jmx.config.lowercaseOutputName }}
rules:
{{ .Values.exporters.jmx.config.rules | toYaml | indent 6 }}
ssl: false
startDelaySeconds: {{ .Values.exporters.jmx.config.startDelaySeconds }}
{{- end }}

View File

@@ -0,0 +1,110 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "zookeeper.fullname" . }}
labels:
app: {{ template "zookeeper.name" . }}
chart: {{ template "zookeeper.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: server
data:
ok: |
#!/bin/sh
zkServer.sh status
ready: |
#!/bin/sh
echo ruok | nc 127.0.0.1 ${1:-2181}
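# A healthy server answers "imok"; ruok is permitted because the run script
# below whitelists all four-letter-word commands (4lw.commands.whitelist=*).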
run: |
#!/bin/bash
set -a
ROOT=$(echo /apache-zookeeper-*)
ZK_USER=${ZK_USER:-"zookeeper"}
ZK_LOG_LEVEL=${ZK_LOG_LEVEL:-"INFO"}
ZK_DATA_DIR=${ZK_DATA_DIR:-"/data"}
ZK_DATA_LOG_DIR=${ZK_DATA_LOG_DIR:-"/data/log"}
ZK_CONF_DIR=${ZK_CONF_DIR:-"/conf"}
ZK_CLIENT_PORT=${ZK_CLIENT_PORT:-2181}
ZK_SERVER_PORT=${ZK_SERVER_PORT:-2888}
ZK_ELECTION_PORT=${ZK_ELECTION_PORT:-3888}
ZK_TICK_TIME=${ZK_TICK_TIME:-2000}
ZK_INIT_LIMIT=${ZK_INIT_LIMIT:-10}
ZK_SYNC_LIMIT=${ZK_SYNC_LIMIT:-5}
ZK_HEAP_SIZE=${ZK_HEAP_SIZE:-2G}
ZK_MAX_CLIENT_CNXNS=${ZK_MAX_CLIENT_CNXNS:-60}
ZK_MIN_SESSION_TIMEOUT=${ZK_MIN_SESSION_TIMEOUT:- $((ZK_TICK_TIME*2))}
ZK_MAX_SESSION_TIMEOUT=${ZK_MAX_SESSION_TIMEOUT:- $((ZK_TICK_TIME*20))}
ZK_SNAP_RETAIN_COUNT=${ZK_SNAP_RETAIN_COUNT:-3}
ZK_PURGE_INTERVAL=${ZK_PURGE_INTERVAL:-0}
ID_FILE="$ZK_DATA_DIR/myid"
ZK_CONFIG_FILE="$ZK_CONF_DIR/zoo.cfg"
LOG4J_PROPERTIES="$ZK_CONF_DIR/log4j.properties"
HOST=$(hostname)
DOMAIN=$(hostname -d)
JVMFLAGS="-Xmx$ZK_HEAP_SIZE -Xms$ZK_HEAP_SIZE"
APPJAR=$(echo $ROOT/*jar)
CLASSPATH="${ROOT}/lib/*:${APPJAR}:${ZK_CONF_DIR}:"
if [[ $HOST =~ (.*)-([0-9]+)$ ]]; then
NAME=${BASH_REMATCH[1]}
ORD=${BASH_REMATCH[2]}
MY_ID=$((ORD+1))
else
echo "Failed to extract ordinal from hostname $HOST"
exit 1
fi
mkdir -p $ZK_DATA_DIR
mkdir -p $ZK_DATA_LOG_DIR
echo $MY_ID > $ID_FILE   # overwrite rather than append; /data persists across restarts
echo "clientPort=$ZK_CLIENT_PORT" >> $ZK_CONFIG_FILE
echo "dataDir=$ZK_DATA_DIR" >> $ZK_CONFIG_FILE
echo "dataLogDir=$ZK_DATA_LOG_DIR" >> $ZK_CONFIG_FILE
echo "tickTime=$ZK_TICK_TIME" >> $ZK_CONFIG_FILE
echo "initLimit=$ZK_INIT_LIMIT" >> $ZK_CONFIG_FILE
echo "syncLimit=$ZK_SYNC_LIMIT" >> $ZK_CONFIG_FILE
echo "maxClientCnxns=$ZK_MAX_CLIENT_CNXNS" >> $ZK_CONFIG_FILE
echo "minSessionTimeout=$ZK_MIN_SESSION_TIMEOUT" >> $ZK_CONFIG_FILE
echo "maxSessionTimeout=$ZK_MAX_SESSION_TIMEOUT" >> $ZK_CONFIG_FILE
echo "autopurge.snapRetainCount=$ZK_SNAP_RETAIN_COUNT" >> $ZK_CONFIG_FILE
echo "autopurge.purgeInterval=$ZK_PURGE_INTERVAL" >> $ZK_CONFIG_FILE
echo "4lw.commands.whitelist=*" >> $ZK_CONFIG_FILE
for (( i=1; i<=$ZK_REPLICAS; i++ ))
do
echo "server.$i=$NAME-$((i-1)).$DOMAIN:$ZK_SERVER_PORT:$ZK_ELECTION_PORT" >> $ZK_CONFIG_FILE
done
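# Example: with ZK_REPLICAS=3 and a release named "zookeeper" in the default
# namespace, the loop above appends entries like
#   server.1=zookeeper-0.zookeeper-headless.default.svc.cluster.local:2888:3888
#   server.2=zookeeper-1.zookeeper-headless.default.svc.cluster.local:2888:3888
#   server.3=zookeeper-2.zookeeper-headless.default.svc.cluster.local:2888:3888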
rm -f $LOG4J_PROPERTIES
echo "zookeeper.root.logger=$ZK_LOG_LEVEL, CONSOLE" >> $LOG4J_PROPERTIES
echo "zookeeper.console.threshold=$ZK_LOG_LEVEL" >> $LOG4J_PROPERTIES
echo "zookeeper.log.threshold=$ZK_LOG_LEVEL" >> $LOG4J_PROPERTIES
echo "zookeeper.log.dir=$ZK_DATA_LOG_DIR" >> $LOG4J_PROPERTIES
echo "zookeeper.log.file=zookeeper.log" >> $LOG4J_PROPERTIES
echo "zookeeper.log.maxfilesize=256MB" >> $LOG4J_PROPERTIES
echo "zookeeper.log.maxbackupindex=10" >> $LOG4J_PROPERTIES
echo "zookeeper.tracelog.dir=$ZK_DATA_LOG_DIR" >> $LOG4J_PROPERTIES
echo "zookeeper.tracelog.file=zookeeper_trace.log" >> $LOG4J_PROPERTIES
echo "log4j.rootLogger=\${zookeeper.root.logger}" >> $LOG4J_PROPERTIES
echo "log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender" >> $LOG4J_PROPERTIES
echo "log4j.appender.CONSOLE.Threshold=\${zookeeper.console.threshold}" >> $LOG4J_PROPERTIES
echo "log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout" >> $LOG4J_PROPERTIES
echo "log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n" >> $LOG4J_PROPERTIES
if [ "$JMXDISABLE" = "true" ]
then
MAIN=org.apache.zookeeper.server.quorum.QuorumPeerMain
else
MAIN="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=$JMXPORT -Dcom.sun.management.jmxremote.authenticate=$JMXAUTH -Dcom.sun.management.jmxremote.ssl=$JMXSSL -Dzookeeper.jmx.log4j.disable=$JMXLOG4J org.apache.zookeeper.server.quorum.QuorumPeerMain"
fi
set -x
exec java -cp "$CLASSPATH" $JVMFLAGS $MAIN $ZK_CONFIG_FILE

View File

@@ -0,0 +1,65 @@
{{- if .Values.jobs.chroots.enabled }}
{{- $root := . }}
{{- $job := .Values.jobs.chroots }}
apiVersion: batch/v1
kind: Job
metadata:
name: {{ template "zookeeper.chroots" . }}
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": hook-succeeded
labels:
app: {{ template "zookeeper.name" . }}
chart: {{ template "zookeeper.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: jobs
job: chroots
spec:
activeDeadlineSeconds: {{ $job.activeDeadlineSeconds }}
backoffLimit: {{ $job.backoffLimit }}
completions: {{ $job.completions }}
parallelism: {{ $job.parallelism }}
template:
metadata:
labels:
app: {{ template "zookeeper.name" . }}
release: {{ .Release.Name }}
component: jobs
job: chroots
spec:
restartPolicy: {{ $job.restartPolicy }}
{{- if .Values.priorityClassName }}
priorityClassName: "{{ .Values.priorityClassName }}"
{{- end }}
containers:
- name: main
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- /bin/bash
- -o
- pipefail
- -euc
{{- $port := .Values.service.ports.client.port }}
- >
sleep 15;
export SERVER={{ template "zookeeper.fullname" $root }}:{{ $port }};
{{- range $job.config.create }}
echo '==> {{ . }}';
echo '====> Create chroot if does not exist.';
zkCli.sh -server {{ template "zookeeper.fullname" $root }}:{{ $port }} get {{ . }} 2>&1 >/dev/null | grep 'cZxid'
|| zkCli.sh -server {{ template "zookeeper.fullname" $root }}:{{ $port }} create {{ . }} "";
echo '====> Confirm chroot exists.';
zkCli.sh -server {{ template "zookeeper.fullname" $root }}:{{ $port }} get {{ . }} 2>&1 >/dev/null | grep 'cZxid';
echo '====> Chroot exists.';
{{- end }}
env:
{{- range $key, $value := $job.env }}
- name: {{ $key | upper | replace "." "_" }}
value: {{ $value | quote }}
{{- end }}
resources:
{{ toYaml $job.resources | indent 12 }}
{{- end -}}

View File

@@ -0,0 +1,17 @@
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: {{ template "zookeeper.fullname" . }}
labels:
app: {{ template "zookeeper.name" . }}
chart: {{ template "zookeeper.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: server
spec:
selector:
matchLabels:
app: {{ template "zookeeper.name" . }}
release: {{ .Release.Name }}
component: server
{{ toYaml .Values.podDisruptionBudget | indent 2 }}

View File

@@ -0,0 +1,28 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "zookeeper.headless" . }}
labels:
app: {{ template "zookeeper.name" . }}
chart: {{ template "zookeeper.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- if .Values.headless.annotations }}
annotations:
{{ .Values.headless.annotations | toYaml | trimSuffix "\n" | indent 4 }}
{{- end }}
spec:
clusterIP: None
{{- if .Values.headless.publishNotReadyAddresses }}
publishNotReadyAddresses: true
{{- end }}
ports:
{{- range $key, $port := .Values.ports }}
- name: {{ $key }}
port: {{ $port.containerPort }}
targetPort: {{ $key }}
protocol: {{ $port.protocol }}
{{- end }}
selector:
app: {{ template "zookeeper.name" . }}
release: {{ .Release.Name }}

View File

@@ -0,0 +1,41 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "zookeeper.fullname" . }}
labels:
app: {{ template "zookeeper.name" . }}
chart: {{ template "zookeeper.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- if .Values.service.annotations }}
annotations:
{{- with .Values.service.annotations }}
{{ toYaml . | indent 4 }}
{{- end }}
{{- end }}
spec:
type: {{ .Values.service.type }}
ports:
{{- range $key, $value := .Values.service.ports }}
- name: {{ $key }}
{{ toYaml $value | indent 6 }}
{{- end }}
{{- if .Values.exporters.jmx.enabled }}
{{- range $key, $port := .Values.exporters.jmx.ports }}
- name: {{ $key }}
port: {{ $port.containerPort }}
targetPort: {{ $key }}
protocol: {{ $port.protocol }}
{{- end }}
{{- end}}
{{- if .Values.exporters.zookeeper.enabled }}
{{- range $key, $port := .Values.exporters.zookeeper.ports }}
- name: {{ $key }}
port: {{ $port.containerPort }}
targetPort: {{ $key }}
protocol: {{ $port.protocol }}
{{- end }}
{{- end}}
selector:
app: {{ template "zookeeper.name" . }}
release: {{ .Release.Name }}

View File

@@ -0,0 +1,56 @@
{{- if and .Values.exporters.jmx.enabled .Values.prometheus.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ include "zookeeper.fullname" . }}
{{- if .Values.prometheus.serviceMonitor.namespace }}
namespace: {{ .Values.prometheus.serviceMonitor.namespace }}
{{- end }}
labels:
{{ toYaml .Values.prometheus.serviceMonitor.selector | indent 4 }}
spec:
endpoints:
{{- range $key, $port := .Values.exporters.jmx.ports }}
- port: {{ $key }}
path: {{ $.Values.exporters.jmx.path }}
interval: {{ $.Values.exporters.jmx.serviceMonitor.interval }}
scrapeTimeout: {{ $.Values.exporters.jmx.serviceMonitor.scrapeTimeout }}
scheme: {{ $.Values.exporters.jmx.serviceMonitor.scheme }}
{{- end }}
selector:
matchLabels:
app: {{ include "zookeeper.name" . }}
release: {{ .Release.Name }}
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
{{- end }}
---
{{- if and .Values.exporters.zookeeper.enabled .Values.prometheus.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ include "zookeeper.fullname" . }}-exporter
{{- if .Values.prometheus.serviceMonitor.namespace }}
namespace: {{ .Values.prometheus.serviceMonitor.namespace }}
{{- end }}
labels:
{{ toYaml .Values.prometheus.serviceMonitor.selector | indent 4 }}
spec:
endpoints:
{{- range $key, $port := .Values.exporters.zookeeper.ports }}
- port: {{ $key }}
path: {{ $.Values.exporters.zookeeper.path }}
interval: {{ $.Values.exporters.zookeeper.serviceMonitor.interval }}
scrapeTimeout: {{ $.Values.exporters.zookeeper.serviceMonitor.scrapeTimeout }}
scheme: {{ $.Values.exporters.zookeeper.serviceMonitor.scheme }}
{{- end }}
selector:
matchLabels:
app: {{ include "zookeeper.name" . }}
release: {{ .Release.Name }}
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
{{- end }}

View File

@@ -0,0 +1,226 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ template "zookeeper.fullname" . }}
labels:
app: {{ template "zookeeper.name" . }}
chart: {{ template "zookeeper.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: server
spec:
serviceName: {{ template "zookeeper.headless" . }}
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ template "zookeeper.name" . }}
release: {{ .Release.Name }}
component: server
updateStrategy:
{{ toYaml .Values.updateStrategy | indent 4 }}
template:
metadata:
labels:
app: {{ template "zookeeper.name" . }}
release: {{ .Release.Name }}
component: server
{{- if .Values.podLabels }}
## Custom pod labels
{{- range $key, $value := .Values.podLabels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
{{- if .Values.podAnnotations }}
annotations:
## Custom pod annotations
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
{{- if .Values.schedulerName }}
schedulerName: "{{ .Values.schedulerName }}"
{{- end }}
securityContext:
{{ toYaml .Values.securityContext | indent 8 }}
{{- if .Values.priorityClassName }}
priorityClassName: "{{ .Values.priorityClassName }}"
{{- end }}
containers:
- name: zookeeper
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- with .Values.command }}
command: {{ range . }}
- {{ . | quote }}
{{- end }}
{{- end }}
ports:
{{- range $key, $port := .Values.ports }}
- name: {{ $key }}
{{ toYaml $port | indent 14 }}
{{- end }}
livenessProbe:
exec:
command:
- sh
- /config-scripts/ok
initialDelaySeconds: 20
periodSeconds: 30
timeoutSeconds: 5
failureThreshold: 2
successThreshold: 1
readinessProbe:
exec:
command:
- sh
- /config-scripts/ready
initialDelaySeconds: 20
periodSeconds: 30
timeoutSeconds: 5
failureThreshold: 2
successThreshold: 1
env:
- name: ZK_REPLICAS
value: {{ .Values.replicaCount | quote }}
{{- range $key, $value := .Values.env }}
- name: {{ $key | upper | replace "." "_" }}
value: {{ $value | quote }}
{{- end }}
{{- range $secret := .Values.secrets }}
{{- range $key := $secret.keys }}
- name: {{ (print $secret.name "_" $key) | upper }}
valueFrom:
secretKeyRef:
name: {{ $secret.name }}
key: {{ $key }}
{{- end }}
{{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
volumeMounts:
- name: data
mountPath: /data
{{- range $secret := .Values.secrets }}
{{- if $secret.mountPath }}
{{- range $key := $secret.keys }}
- name: {{ $.Release.Name }}-{{ $secret.name }}
mountPath: {{ $secret.mountPath }}/{{ $key }}
subPath: {{ $key }}
readOnly: true
{{- end }}
{{- end }}
{{- end }}
- name: config
mountPath: /config-scripts
{{- if .Values.exporters.jmx.enabled }}
- name: jmx-exporter
image: "{{ .Values.exporters.jmx.image.repository }}:{{ .Values.exporters.jmx.image.tag }}"
imagePullPolicy: {{ .Values.exporters.jmx.image.pullPolicy }}
ports:
{{- range $key, $port := .Values.exporters.jmx.ports }}
- name: {{ $key }}
{{ toYaml $port | indent 14 }}
{{- end }}
livenessProbe:
{{ toYaml .Values.exporters.jmx.livenessProbe | indent 12 }}
readinessProbe:
{{ toYaml .Values.exporters.jmx.readinessProbe | indent 12 }}
env:
- name: SERVICE_PORT
value: {{ .Values.exporters.jmx.ports.jmxxp.containerPort | quote }}
{{- with .Values.exporters.jmx.env }}
{{- range $key, $value := . }}
- name: {{ $key | upper | replace "." "_" }}
value: {{ $value | quote }}
{{- end }}
{{- end }}
resources:
{{ toYaml .Values.exporters.jmx.resources | indent 12 }}
volumeMounts:
- name: config-jmx-exporter
mountPath: /opt/jmx_exporter/config.yml
subPath: config.yml
{{- end }}
{{- if .Values.exporters.zookeeper.enabled }}
- name: zookeeper-exporter
image: "{{ .Values.exporters.zookeeper.image.repository }}:{{ .Values.exporters.zookeeper.image.tag }}"
imagePullPolicy: {{ .Values.exporters.zookeeper.image.pullPolicy }}
args:
- -bind-addr=:{{ .Values.exporters.zookeeper.ports.zookeeperxp.containerPort }}
- -metrics-path={{ .Values.exporters.zookeeper.path }}
- -zookeeper=localhost:{{ .Values.ports.client.containerPort }}
- -log-level={{ .Values.exporters.zookeeper.config.logLevel }}
- -reset-on-scrape={{ .Values.exporters.zookeeper.config.resetOnScrape }}
ports:
{{- range $key, $port := .Values.exporters.zookeeper.ports }}
- name: {{ $key }}
{{ toYaml $port | indent 14 }}
{{- end }}
livenessProbe:
{{ toYaml .Values.exporters.zookeeper.livenessProbe | indent 12 }}
readinessProbe:
{{ toYaml .Values.exporters.zookeeper.readinessProbe | indent 12 }}
env:
{{- range $key, $value := .Values.exporters.zookeeper.env }}
- name: {{ $key | upper | replace "." "_" }}
value: {{ $value | quote }}
{{- end }}
resources:
{{ toYaml .Values.exporters.zookeeper.resources | indent 12 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
volumes:
- name: config
configMap:
name: {{ template "zookeeper.fullname" . }}
defaultMode: 0555
{{- range .Values.secrets }}
- name: {{ $.Release.Name }}-{{ .name }}
secret:
secretName: {{ .name }}
{{- end }}
{{- if .Values.exporters.jmx.enabled }}
- name: config-jmx-exporter
configMap:
name: {{ .Release.Name }}-jmx-exporter
{{- end }}
{{- if not .Values.persistence.enabled }}
- name: data
emptyDir: {}
{{- end }}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- {{ .Values.persistence.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.size | quote }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.storageClass }}"
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,300 @@
## As weighted quorums are not supported, it is imperative that an odd number of replicas
## be chosen. Moreover, the number of replicas should be either 1, 3, 5, or 7.
##
## ref: https://github.com/kubernetes/contrib/tree/master/statefulsets/zookeeper#stateful-set
replicaCount: 3 # Desired quantity of ZooKeeper pods. This should always be 1, 3, 5, or 7.
podDisruptionBudget:
maxUnavailable: 1 # Limits how many ZooKeeper pods may be unavailable due to voluntary disruptions.
terminationGracePeriodSeconds: 1800 # Duration in seconds a ZooKeeper pod needs to terminate gracefully.
updateStrategy:
type: RollingUpdate
## refs:
## - https://github.com/kubernetes/contrib/tree/master/statefulsets/zookeeper
## - https://github.com/kubernetes/contrib/blob/master/statefulsets/zookeeper/Makefile#L1
image:
repository: zookeeper # Container image repository for zookeeper container.
tag: 3.5.5 # Container image tag for zookeeper container.
pullPolicy: IfNotPresent # Image pull criteria for zookeeper container.
service:
type: ClusterIP # Exposes zookeeper on a cluster-internal IP.
annotations: {} # Arbitrary non-identifying metadata for zookeeper service.
## AWS example for use with LoadBalancer service type.
# external-dns.alpha.kubernetes.io/hostname: zookeeper.cluster.local
# service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
# service.beta.kubernetes.io/aws-load-balancer-internal: "true"
ports:
client:
port: 2181 # Service port number for client port.
targetPort: client # Service target port for client port.
protocol: TCP # Service port protocol for client port.
## Headless service.
##
headless:
annotations: {}
# publishNotReadyAddresses defaults to false for backward compatibility.
# Set it to true to register DNS entries for unready pods, which helps in rare
# situations where the ensemble cannot form, DNS caching is enforced,
# or pods are stuck in a persistent crash loop.
publishNotReadyAddresses: false
ports:
client:
containerPort: 2181 # Port number for zookeeper container client port.
protocol: TCP # Protocol for zookeeper container client port.
election:
containerPort: 3888 # Port number for zookeeper container election port.
protocol: TCP # Protocol for zookeeper container election port.
server:
containerPort: 2888 # Port number for zookeeper container server port.
protocol: TCP # Protocol for zookeeper container server port.
resources: {} # Optionally specify how much CPU and memory (RAM) each zookeeper container needs.
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
priorityClassName: ""
nodeSelector: {} # Node label-values required to run zookeeper pods.
tolerations: [] # Node taint overrides for zookeeper pods.
affinity: {} # Criteria by which pod label-values influence scheduling for zookeeper pods.
# podAntiAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# - topologyKey: "kubernetes.io/hostname"
# labelSelector:
# matchLabels:
# release: zookeeper
podAnnotations: {} # Arbitrary non-identifying metadata for zookeeper pods.
# prometheus.io/scrape: "true"
# prometheus.io/path: "/metrics"
# prometheus.io/port: "9141"
podLabels: {} # Key/value pairs that are attached to zookeeper pods.
# team: "developers"
# service: "zookeeper"
securityContext:
fsGroup: 1000
runAsUser: 1000
## Useful if you want to use an alternate image.
command:
- /bin/bash
- -xec
- /config-scripts/run
## Useful if using any custom authorizer.
## Pass any secrets to the zookeeper pods. Each secret will be passed as an
## environment variable by default. The secret can also be mounted to a
## specific path (in addition to the environment variable) if required. Environment
## variable names are generated as `<secretName>_<secretKey>` (all upper case).
# secrets:
# - name: myKafkaSecret
# keys:
# - username
# - password
# # mountPath: /opt/kafka/secret
# - name: myZkSecret
# keys:
# - user
# - pass
# mountPath: /opt/zookeeper/secret
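# Example: with the (commented-out) myZkSecret entry above, each container
# would receive the environment variables MYZKSECRET_USER and MYZKSECRET_PASS,
# and the files /opt/zookeeper/secret/user and /opt/zookeeper/secret/pass.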
persistence:
enabled: true
## zookeeper data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
accessMode: ReadWriteOnce
size: 5Gi
## Exporters query apps for metrics and make those metrics available for
## Prometheus to scrape.
exporters:
jmx:
enabled: false
image:
repository: sscaling/jmx-prometheus-exporter
tag: 0.3.0
pullPolicy: IfNotPresent
config:
lowercaseOutputName: false
## ref: https://github.com/prometheus/jmx_exporter/blob/master/example_configs/zookeeper.yaml
rules:
- pattern: "org.apache.ZooKeeperService<name0=ReplicatedServer_id(\\d+)><>(\\w+)"
name: "zookeeper_$2"
- pattern: "org.apache.ZooKeeperService<name0=ReplicatedServer_id(\\d+), name1=replica.(\\d+)><>(\\w+)"
name: "zookeeper_$3"
labels:
replicaId: "$2"
- pattern: "org.apache.ZooKeeperService<name0=ReplicatedServer_id(\\d+), name1=replica.(\\d+), name2=(\\w+)><>(\\w+)"
name: "zookeeper_$4"
labels:
replicaId: "$2"
memberType: "$3"
- pattern: "org.apache.ZooKeeperService<name0=ReplicatedServer_id(\\d+), name1=replica.(\\d+), name2=(\\w+), name3=(\\w+)><>(\\w+)"
name: "zookeeper_$4_$5"
labels:
replicaId: "$2"
memberType: "$3"
startDelaySeconds: 30
env: {}
resources: {}
path: /metrics
ports:
jmxxp:
containerPort: 9404
protocol: TCP
livenessProbe:
httpGet:
path: /metrics
port: jmxxp
initialDelaySeconds: 30
periodSeconds: 15
timeoutSeconds: 60
failureThreshold: 8
successThreshold: 1
readinessProbe:
httpGet:
path: /metrics
port: jmxxp
initialDelaySeconds: 30
periodSeconds: 15
timeoutSeconds: 60
failureThreshold: 8
successThreshold: 1
serviceMonitor:
interval: 30s
scrapeTimeout: 30s
scheme: http
zookeeper:
## refs:
## - https://github.com/carlpett/zookeeper_exporter
## - https://hub.docker.com/r/josdotso/zookeeper-exporter/
## - https://www.datadoghq.com/blog/monitoring-kafka-performance-metrics/#zookeeper-metrics
enabled: false
image:
repository: josdotso/zookeeper-exporter
tag: v1.1.2
pullPolicy: IfNotPresent
config:
logLevel: info
resetOnScrape: "true"
env: {}
resources: {}
path: /metrics
ports:
zookeeperxp:
containerPort: 9141
protocol: TCP
livenessProbe:
httpGet:
path: /metrics
port: zookeeperxp
initialDelaySeconds: 30
periodSeconds: 15
timeoutSeconds: 60
failureThreshold: 8
successThreshold: 1
readinessProbe:
httpGet:
path: /metrics
port: zookeeperxp
initialDelaySeconds: 30
periodSeconds: 15
timeoutSeconds: 60
failureThreshold: 8
successThreshold: 1
serviceMonitor:
interval: 30s
scrapeTimeout: 30s
scheme: http
## ServiceMonitor configuration in case you are using Prometheus Operator
prometheus:
serviceMonitor:
## If true a ServiceMonitor for each enabled exporter will be installed
enabled: false
## The namespace where the ServiceMonitor(s) will be installed
# namespace: monitoring
## The selector the Prometheus instance is searching for
## [Default Prometheus Operator selector] (https://github.com/helm/charts/blob/f5a751f174263971fafd21eee4e35416d6612a3d/stable/prometheus-operator/templates/prometheus/prometheus.yaml#L74)
selector: {}
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:
## ref: https://github.com/kubernetes/contrib/tree/master/statefulsets/zookeeper
env:
## Options related to JMX exporter.
## ref: https://github.com/apache/zookeeper/blob/master/bin/zkServer.sh#L36
JMXAUTH: "false"
JMXDISABLE: "false"
JMXPORT: 1099
JMXSSL: "false"
## The port on which the server will accept client requests.
ZOO_PORT: 2181
## The number of ticks that followers are allowed to take to connect and
## sync to the leader.
ZOO_INIT_LIMIT: 5
ZOO_TICK_TIME: 2000
## The maximum number of concurrent client connections that
## a server in the ensemble will accept.
ZOO_MAX_CLIENT_CNXNS: 60
## The number of ticks by which a follower may lag behind the ensemble's leader.
ZK_SYNC_LIMIT: 10
## The number of wall-clock milliseconds that correspond to a tick of the
## ensemble's internal time.
ZK_TICK_TIME: 2000
ZOO_AUTOPURGE_PURGEINTERVAL: 0
ZOO_AUTOPURGE_SNAPRETAINCOUNT: 3
ZOO_STANDALONE_ENABLED: false
jobs:
## ref: http://zookeeper.apache.org/doc/r3.4.10/zookeeperProgrammers.html#ch_zkSessions
chroots:
enabled: false
activeDeadlineSeconds: 300
backoffLimit: 5
completions: 1
config:
create: []
# - /kafka
# - /ureplicator
env: []
parallelism: 1
resources: {}
restartPolicy: Never
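# Example: enabling this job with create: [/kafka1] makes the post-install
# hook run zkCli.sh against the client service and create the /kafka1 chroot
# if it does not already exist (see the chroots Job template above).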