dsk-dev kubespray migration
ansible/kubespray/docs/kubernetes-apps/cephfs_provisioner.md

# CephFS Volume Provisioner for Kubernetes 1.5+

[Docker Repository on Quay](https://quay.io/repository/external_storage/cephfs-provisioner)

Using Ceph volume client

## Development

Compile the provisioner

```console
make
```

Make the container image and push it to the registry

```console
make push
```

## Test instruction

- Start Kubernetes local cluster

See [Kubernetes](https://kubernetes.io/).

- Create a Ceph admin secret

```bash
ceph auth get client.admin 2>&1 | grep "key = " | awk '{print $3}' | xargs echo -n > /tmp/secret
kubectl create ns cephfs
kubectl create secret generic ceph-secret-admin --from-file=/tmp/secret --namespace=cephfs
```

- Start CephFS provisioner

The following example uses `cephfs-provisioner-1` as the identity for the instance and assumes kubeconfig is at `/root/.kube`. The identity should remain the same if the provisioner restarts. If there are multiple provisioners, each should have a different identity.

```bash
docker run -ti -v /root/.kube:/kube -v /var/run/kubernetes:/var/run/kubernetes --privileged --net=host cephfs-provisioner /usr/local/bin/cephfs-provisioner -master=http://127.0.0.1:8080 -kubeconfig=/kube/config -id=cephfs-provisioner-1
```

Alternatively, deploy it in Kubernetes; see [deployment](deploy/README.md).

- Create a CephFS Storage Class

Replace the Ceph monitor's IP in [example class](example/class.yaml) with your own, then create the storage class:

```bash
kubectl create -f example/class.yaml
```
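
For reference, a minimal sketch of what such a class typically looks like (the monitor address is a placeholder; field names follow the external-storage CephFS example, not necessarily the file's verbatim contents):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 172.24.0.1:6789        # replace with your Ceph monitor address
  adminId: admin                   # Ceph admin client ID
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: cephfs     # namespace holding the admin secret created above
```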

- Create a claim

```bash
kubectl create -f example/claim.yaml
```
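
A sketch of such a claim, assuming the storage class above is named `cephfs`:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
spec:
  storageClassName: cephfs
  accessModes:
    - ReadWriteMany        # CephFS supports shared read-write mounts
  resources:
    requests:
      storage: 1Gi
```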

- Create a Pod using the claim

```bash
kubectl create -f example/test-pod.yaml
```
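
For illustration, a minimal test Pod mounting that claim might look like this (image and mount path are assumptions):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /mnt/cephfs   # the CephFS volume appears here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: claim1
```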

## Known limitations

- Kernel CephFS doesn't work with SELinux; setting an SELinux label in a Pod's securityContext will not work.
- Kernel CephFS doesn't support quotas or capacity limits, so the capacity requested by a PVC is neither enforced nor validated.
- Currently each Ceph user created by the provisioner has the `allow r` MDS cap to permit CephFS mounts.

## Acknowledgement

Inspired by the CephFS Manila provisioner and conversations with John Spray.

# Local Static Storage Provisioner

The [local static storage provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner)
is NOT a dynamic storage provisioner as you would
expect from a cloud provider. Instead, it simply creates PersistentVolumes for
all mounts under the `host_dir` of the specified storage class.
These storage classes are specified in the `local_volume_provisioner_storage_classes` nested dictionary.

Example:

```yaml
local_volume_provisioner_storage_classes:
  local-storage:
    host_dir: /mnt/disks
    mount_dir: /mnt/disks
  fast-disks:
    host_dir: /mnt/fast-disks
    mount_dir: /mnt/fast-disks
    block_cleaner_command:
      - "/scripts/shred.sh"
      - "2"
    volume_mode: Filesystem
    fs_type: ext4
```

For each key in `local_volume_provisioner_storage_classes` a "storage class" with
the same name is created in the entry `storageClassMap` of the ConfigMap `local-volume-provisioner`.
The subkeys of each storage class in `local_volume_provisioner_storage_classes`
are converted to camelCase and added as attributes to the storage class in the
ConfigMap.

The result of the above example is:

```yaml
data:
  storageClassMap: |
    local-storage:
      hostDir: /mnt/disks
      mountDir: /mnt/disks
    fast-disks:
      hostDir: /mnt/fast-disks
      mountDir: /mnt/fast-disks
      blockCleanerCommand:
        - "/scripts/shred.sh"
        - "2"
      volumeMode: Filesystem
      fsType: ext4
```

Additionally, a StorageClass object (`storageclasses.storage.k8s.io`) is also
created for each storage class:

```bash
$ kubectl get storageclasses.storage.k8s.io
NAME            PROVISIONER                    RECLAIMPOLICY
fast-disks      kubernetes.io/no-provisioner   Delete
local-storage   kubernetes.io/no-provisioner   Delete
```

The default StorageClass is `local-storage` on `/mnt/disks`;
the rest of this documentation will use that path as an example.
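
For illustration, a claim against that default class looks like any other PVC (the claim name and size here are assumed):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

Note that with `kubernetes.io/no-provisioner` classes the claim may stay `Pending` until a Pod consumes it, depending on the class's volume binding mode.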

## Examples to create local storage volumes

1. Using tmpfs

   ```bash
   for vol in vol1 vol2 vol3; do
     mkdir /mnt/disks/$vol
     mount -t tmpfs -o size=5G $vol /mnt/disks/$vol
   done
   ```

   The tmpfs method is not recommended for production because the mounts are not
   persistent and data will be deleted on reboot.

1. Mount physical disks

   ```bash
   mkdir /mnt/disks/ssd1
   mount /dev/vdb1 /mnt/disks/ssd1
   ```

   Physical disks are recommended for production environments because they offer
   complete isolation in terms of I/O and capacity.

1. Mount unpartitioned physical devices

   ```bash
   for disk in /dev/sdc /dev/sdd /dev/sde; do
     ln -s $disk /mnt/disks
   done
   ```

   This saves the time of pre-creating filesystems. Note that your storage class must have
   `volume_mode` set to `"Filesystem"` and `fs_type` defined. If either is not set, the
   disk will be added as a raw block device.

1. PersistentVolumes with `volumeMode="Block"`

   Just like above, you can create PersistentVolumes with volumeMode `Block`
   by creating a symbolic link under the discovery directory to the block device on
   the node, if you set `volume_mode` to `"Block"`. This will create a volume
   presented to a Pod as a block device, without any filesystem on it.

1. File-backed sparsefile method

   ```bash
   truncate /mnt/disks/disk5 --size 2G
   mkfs.ext4 /mnt/disks/disk5
   mkdir /mnt/disks/vol5
   mount /mnt/disks/disk5 /mnt/disks/vol5
   ```

   If you have a development environment and only one disk, this is the best way
   to limit the quota of persistent volumes.

1. Simple directories

   In a development environment, using `mount --bind` also works, but there is no capacity
   management.

## Usage notes

Make sure any mounts persist across reboots via `/etc/fstab` or with systemd mounts (for
Flatcar Container Linux or Fedora CoreOS). Pods with persistent volume claims will not be
able to start if the mounts become unavailable.
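
For example, persisting the physical-disk mount from above with an `/etc/fstab` entry (the device path is assumed stable; prefer a UUID in practice):

```bash
# Append an fstab entry so the mount survives reboots
echo '/dev/vdb1 /mnt/disks/ssd1 ext4 defaults 0 2' >> /etc/fstab
mount -a   # verify the new entry mounts cleanly
```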

## Further reading

Refer to the upstream docs here: <https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner>

ansible/kubespray/docs/kubernetes-apps/rbd_provisioner.md

# RBD Volume Provisioner for Kubernetes 1.5+

`rbd-provisioner` is an out-of-tree dynamic provisioner for Kubernetes 1.5+.
You can use it to quickly & easily deploy Ceph RBD storage that works almost
anywhere.

It works just like the in-tree dynamic provisioner. For more information on how
dynamic provisioning works, see [the docs](http://kubernetes.io/docs/user-guide/persistent-volumes/)
or [this blog post](http://blog.kubernetes.io/2016/10/dynamic-provisioning-and-storage-in-kubernetes.html).

## Development

Compile the provisioner

```console
make
```

Make the container image and push it to the registry

```console
make push
```

## Test instruction

* Start Kubernetes local cluster

See [Kubernetes](https://kubernetes.io/).

* Create a Ceph admin secret

```bash
ceph auth get client.admin 2>&1 | grep "key = " | awk '{print $3}' | xargs echo -n > /tmp/secret
kubectl create secret generic ceph-admin-secret --from-file=/tmp/secret --namespace=kube-system
```

* Create a Ceph pool and a user secret

```bash
ceph osd pool create kube 8 8
ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube'
ceph auth get-key client.kube > /tmp/secret
kubectl create secret generic ceph-secret --from-file=/tmp/secret --namespace=kube-system
```

* Start RBD provisioner

The following example uses `rbd-provisioner-1` as the identity for the instance and assumes kubeconfig is at `/root/.kube`. The identity should remain the same if the provisioner restarts. If there are multiple provisioners, each should have a different identity.

```bash
docker run -ti -v /root/.kube:/kube -v /var/run/kubernetes:/var/run/kubernetes --privileged --net=host quay.io/external_storage/rbd-provisioner /usr/local/bin/rbd-provisioner -master=http://127.0.0.1:8080 -kubeconfig=/kube/config -id=rbd-provisioner-1
```

Alternatively, deploy it in Kubernetes; see [deployment](deploy/README.md).

* Create an RBD Storage Class

Replace the Ceph monitor's IP in [examples/class.yaml](examples/class.yaml) with your own, then create the storage class:

```bash
kubectl create -f examples/class.yaml
```
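
For reference, a minimal sketch of an RBD class wired to the pool and secrets created above (the monitor address is a placeholder; field names follow the external-storage RBD example, not necessarily the file's verbatim contents):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 172.24.0.1:6789      # replace with your Ceph monitor address
  pool: kube
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  userId: kube
  userSecretName: ceph-secret
```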

* Create a claim

```bash
kubectl create -f examples/claim.yaml
```

* Create a Pod using the claim

```bash
kubectl create -f examples/test-pod.yaml
```

## Acknowledgements

* This provisioner is extracted from [Kubernetes core](https://github.com/kubernetes/kubernetes) with some modifications for this project.

ansible/kubespray/docs/kubernetes-apps/registry.md

# Private Docker Registry in Kubernetes

Kubernetes offers an optional private Docker registry addon, which you can turn
on when you bring up a cluster or install later. This gives you a place to
store truly private Docker images for your cluster.

## How it works

The private registry runs as a `Pod` in your cluster. It does not currently
support SSL or authentication, which triggers Docker's "insecure registry"
logic. To work around this, we run a proxy on each node in the cluster,
exposing a port onto the node (via a hostPort), which Docker accepts as
"secure", since it is accessed by `localhost`.

## Turning it on

Some cluster installs (e.g. GCE) support this as a cluster-birth flag. The
`ENABLE_CLUSTER_REGISTRY` variable in `cluster/gce/config-default.sh` governs
whether the registry is run or not. To set this flag, you can specify
`KUBE_ENABLE_CLUSTER_REGISTRY=true` when running `kube-up.sh`. If your cluster
does not include this flag, the following steps should work. Note that some of
this is cloud-provider specific, so you may have to customize it a bit.
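
For example, the flag can be passed straight through the environment (the `./cluster/kube-up.sh` invocation path is an assumption based on the standard Kubernetes source layout):

```bash
# Bring up a GCE cluster with the registry addon enabled
KUBE_ENABLE_CLUSTER_REGISTRY=true ./cluster/kube-up.sh
```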

### Make some storage

The primary job of the registry is to store data. To do that we have to decide
where to store it. For cloud environments that have networked storage, we can
use Kubernetes's `PersistentVolume` abstraction. The following template is
expanded by `salt` in the GCE cluster turnup, but can easily be adapted to
other situations:

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: kube-system-kube-registry-pv
spec:
{% if pillar.get('cluster_registry_disk_type', '') == 'gce' %}
  capacity:
    storage: {{ pillar['cluster_registry_disk_size'] }}
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: "{{ pillar['cluster_registry_disk_name'] }}"
    fsType: "ext4"
{% endif %}
```

If, for example, you wanted to use NFS you would just need to change the
`gcePersistentDisk` block to `nfs`. See
[here](https://kubernetes.io/docs/concepts/storage/volumes/) for more details on volumes.
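
A sketch of that substitution, replacing the `gcePersistentDisk` block under `spec` (server name and export path are placeholders):

```yaml
  nfs:
    server: nfs.example.com      # your NFS server
    path: /exports/registry      # exported directory backing the volume
```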

Note that in any case, the storage (in this case the GCE PersistentDisk) must be
created independently - this is not something Kubernetes manages for you (yet).

### I don't want or don't have persistent storage

If you are running in a place that doesn't have networked storage, or if you
just want to kick the tires on this without committing to it, you can easily
adapt the `ReplicationController` specification below to use a simple
`emptyDir` volume instead of a `persistentVolumeClaim`.
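
That swap touches only the volumes section of the specification:

```yaml
      volumes:
        - name: image-store
          emptyDir: {}      # ephemeral; images are lost when the Pod moves or restarts
```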

## Claim the storage

Now that the Kubernetes cluster knows that some storage exists, you can put a
claim on that storage. As with the `PersistentVolume` above, you can start
with the `salt` template:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: kube-registry-pvc
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ pillar['cluster_registry_disk_size'] }}
```

This tells Kubernetes that you want to use storage, and the `PersistentVolume`
you created before will be bound to this claim (unless you have other
`PersistentVolumes`, in which case those might get bound instead). This claim
gives you the right to use this storage until you release the claim.

## Run the registry

Now we can run a Docker registry:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-registry-v0
  namespace: kube-system
  labels:
    k8s-app: registry
    version: v0
spec:
  replicas: 1
  selector:
    k8s-app: registry
    version: v0
  template:
    metadata:
      labels:
        k8s-app: registry
        version: v0
    spec:
      containers:
        - name: registry
          image: registry:2
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
          env:
            - name: REGISTRY_HTTP_ADDR
              value: :5000
            - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
              value: /var/lib/registry
          volumeMounts:
            - name: image-store
              mountPath: /var/lib/registry
          ports:
            - containerPort: 5000
              name: registry
              protocol: TCP
      volumes:
        - name: image-store
          persistentVolumeClaim:
            claimName: kube-registry-pvc
```

*Note:* if you have set multiple replicas, make sure your CSI driver supports the `ReadWriteMany` accessMode.

## Expose the registry in the cluster

Now that we have a registry `Pod` running, we can expose it as a Service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-registry
  namespace: kube-system
  labels:
    k8s-app: registry
    kubernetes.io/name: "KubeRegistry"
spec:
  selector:
    k8s-app: registry
  ports:
    - name: registry
      port: 5000
      protocol: TCP
```

## Expose the registry on each node

Now that we have a running `Service`, we need to expose it onto each Kubernetes
`Node` so that Docker will see it as `localhost`. We can load a `Pod` on every
node by creating the following DaemonSet:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-registry-proxy
  namespace: kube-system
  labels:
    k8s-app: kube-registry-proxy
    version: v0.4
spec:
  selector:            # required by apps/v1
    matchLabels:
      k8s-app: kube-registry-proxy
  template:
    metadata:
      labels:
        k8s-app: kube-registry-proxy
        kubernetes.io/name: "kube-registry-proxy"
        version: v0.4
    spec:
      containers:
        - name: kube-registry-proxy
          image: gcr.io/google_containers/kube-registry-proxy:0.4
          resources:
            limits:
              cpu: 100m
              memory: 50Mi
          env:
            - name: REGISTRY_HOST
              value: kube-registry.kube-system.svc.cluster.local
            - name: REGISTRY_PORT
              value: "5000"
          ports:
            - name: registry
              containerPort: 80
              hostPort: 5000
```

When modifying replication-controller, service and daemon-set definitions, take
care to ensure *unique* identifiers for the rc-svc couple and the daemon-set.
Failing to do so will register the localhost proxy daemon-sets with the
upstream service. As a result they will then try to proxy themselves, which
will, for obvious reasons, not work.

This ensures that port 5000 on each node is directed to the registry `Service`.
You should be able to verify that it is running by hitting port 5000 with a web
browser and getting a 404 error:

```ShellSession
$ curl localhost:5000
404 page not found
```

## Using the registry

To use an image hosted by this registry, simply say this in your `Pod`'s
`spec.containers[].image` field:

```yaml
image: localhost:5000/user/container
```

Before you can use the registry, you have to be able to get images into it,
though. If you are building an image on your Kubernetes `Node`, you can spell
out `localhost:5000` when you build and push. More likely, though, you are
building locally and want to push to your cluster.

You can use `kubectl` to set up a port-forward from your local node to a
running Pod:

```ShellSession
$ POD=$(kubectl get pods --namespace kube-system -l k8s-app=registry \
    -o template --template '{{range .items}}{{.metadata.name}} {{.status.phase}}{{"\n"}}{{end}}' \
    | grep Running | head -1 | cut -f1 -d' ')

$ kubectl port-forward --namespace kube-system $POD 5000:5000 &
```

Now you can build and push images on your local computer as
`localhost:5000/yourname/container` and those images will be available inside
your Kubernetes cluster with the same name.
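
For example, with the port-forward from above still running (the image name is illustrative):

```bash
docker build -t localhost:5000/yourname/container .
docker push localhost:5000/yourname/container
```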