Starting our Kubernetes cluster
Written by dada / 1 November 2018 / 4 comments
Configure the master
Initialization
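Before running kubeadm init, keep in mind that its preflight checks refuse to proceed while swap is enabled. A minimal sketch for disabling it, assuming a Debian-like host with a swap entry in /etc/fstab:

```shell
# kubeadm's preflight checks fail while swap is active
sudo swapoff -a
# make it permanent by commenting out any swap line in /etc/fstab
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```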
kubeadm init --pod-network-cidr=10.244.0.0/16
root@k8smaster:/home/dada# kubeadm init --pod-network-cidr=10.244.0.0/16
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[..]
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
Configure the user who will be allowed to play with k8s
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
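Once the kubeconfig is in place, a quick sanity check confirms that kubectl can talk to the new API server:

```shell
# prints the API server and CoreDNS endpoints
kubectl cluster-info
# the master should be listed, possibly NotReady until a pod network is installed
kubectl get nodes
```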
Configure the cluster network
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
dada@k8smaster:~$ kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
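The network Flannel announces must match the --pod-network-cidr passed to kubeadm init. You can verify it in the ConfigMap the manifest just created:

```shell
# should show "Network": "10.244.0.0/16", matching --pod-network-cidr
kubectl -n kube-system get configmap kube-flannel-cfg -o yaml | grep Network
```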
kubectl get pods --all-namespaces -o wide
dada@k8smaster:~$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
kube-system coredns-576cbf47c7-84x8w 1/1 Running 0 11m 10.244.0.8 k8smaster <none>
kube-system coredns-576cbf47c7-v88p4 1/1 Running 0 11m 10.244.0.9 k8smaster <none>
kube-system etcd-k8smaster 1/1 Running 0 79s 192.168.0.19 k8smaster <none>
kube-system kube-apiserver-k8smaster 1/1 Running 0 79s 192.168.0.19 k8smaster <none>
kube-system kube-controller-manager-k8smaster 1/1 Running 0 79s 192.168.0.19 k8smaster <none>
kube-system kube-flannel-ds-amd64-vzrx8 1/1 Running 0 90s 192.168.0.19 k8smaster <none>
kube-system kube-proxy-nn7p2 1/1 Running 0 11m 192.168.0.19 k8smaster <none>
kube-system kube-scheduler-k8smaster 1/1 Running 0 78s 192.168.0.19 k8smaster <none>
Add a node to the cluster
dada@k8smaster:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster Ready master 23m v1.12.2
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.0.19:6443 --token wdjnql.rm60fa90l0o9qv49 --discovery-token-ca-cert-hash sha256:ede807fb6f732c00acb8d40a891c436aedd3ed88f915135df252c033d55b2e10
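Note that this token expires after 24 hours by default. If you add a node later, you can generate a fresh, ready-to-paste join command on the master:

```shell
# run on the master: prints a complete "kubeadm join ..." line
# with a new token and the CA cert hash
kubeadm token create --print-join-command
```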
root@k8snode1:/home/dada# kubeadm join 192.168.0.19:6443 --token wdjnql.rm60fa90l0o9qv49 --discovery-token-ca-cert-hash sha256:ede807fb6f732c00acb8d40a891c436aedd3ed88f915135df252c033d55b2e10
[preflight] running pre-flight checks
[discovery] Trying to connect to API Server "192.168.0.19:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.19:6443"
[discovery] Requesting info from "https://192.168.0.19:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.19:6443"
[discovery] Successfully established connection with API Server "192.168.0.19:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8snode1" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
Back on the master, run the get nodes command again:
dada@k8smaster:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster Ready master 26m v1.12.2
k8snode1 Ready <none> 3m20s v1.12.2
Now there are two of you!
dada@k8smaster:~$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
kube-system coredns-576cbf47c7-84x8w 1/1 Running 0 29m 10.244.0.8 k8smaster <none>
kube-system coredns-576cbf47c7-v88p4 1/1 Running 0 29m 10.244.0.9 k8smaster <none>
kube-system etcd-k8smaster 1/1 Running 0 19m 192.168.0.19 k8smaster <none>
kube-system kube-apiserver-k8smaster 1/1 Running 0 19m 192.168.0.19 k8smaster <none>
kube-system kube-controller-manager-k8smaster 1/1 Running 0 19m 192.168.0.19 k8smaster <none>
kube-system kube-flannel-ds-amd64-6qm8n 1/1 Running 0 6m57s 192.168.0.30 k8snode1 <none>
kube-system kube-flannel-ds-amd64-vzrx8 1/1 Running 0 19m 192.168.0.19 k8smaster <none>
kube-system kube-proxy-nn7p2 1/1 Running 0 29m 192.168.0.19 k8smaster <none>
kube-system kube-proxy-phfww 1/1 Running 0 6m57s 192.168.0.30 k8snode1 <none>
kube-system kube-scheduler-k8smaster 1/1 Running 0 19m 192.168.0.19 k8smaster <none>
dada@k8smaster:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster Ready master 41m v1.12.2
k8snode1 Ready <none> 18m v1.12.2
k8snode2 Ready <none> 6s v1.12.
Well done!
Not working as expected?
A very quick rundown of the commands you can use when things get stuck:
- kubeadm reset -f : removes the entire cluster configuration from the node it is run on, letting you start over from scratch.
- kubectl delete node <nodename> : removes the given node from the cluster, while keeping its own configuration.
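For example, to cleanly remove a node and re-initialize it (node name taken from this article; the drain flags are the v1.12-era ones):

```shell
# on the master: evacuate the pods, then remove the node object
kubectl drain k8snode1 --ignore-daemonsets --delete-local-data
kubectl delete node k8snode1

# on the node itself: wipe all kubeadm state to start from zero
kubeadm reset -f
```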
What's next?
Display the k8s dashboard, because some color and a real interface are nicer.