Getting Started with Kubernetes

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

This technical document describes the configuration used to deploy Kubernetes.

Todo

  • Install docker.io
  • Install Kubernetes

Kubernetes Installation

Kubernetes needs the Docker runtime on each node, so install Docker first

$ sudo apt-get install docker.io
$ docker --version

Start the Docker service on both nodes

$ sudo /etc/init.d/docker start
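
On systemd-based Ubuntu releases you can start Docker through systemctl instead, and enable it so it comes back after a reboot (an alternative to the init script above, assuming systemd is the init system):

$ sudo systemctl enable docker
$ sudo systemctl start docker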

Add the Kubernetes signing key on both nodes

$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Add the Xenial Kubernetes repository on both nodes

$ sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
$ sudo apt-get update
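
If adding or fetching the HTTPS repository fails on a minimal image, a few prerequisite packages may be missing (an optional step, not part of the original instructions):

$ sudo apt-get install -y apt-transport-https ca-certificates curl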

Install Kubeadm

$ sudo apt-get install kubeadm
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
java-common libck-connector0:i386 libedit2:i386 libgssapi-krb5-2:i386
libk5crypto3:i386 libkeyutils1:i386 libkrb5-3:i386 libkrb5support0:i386
libsctp1 libssl1.0.0:i386 libwrap0:i386 lksctp-tools python-support
tzdata-java zlib1g-dev
Use 'apt-get autoremove' to remove them.
The following extra packages will be installed:
conntrack cri-tools ebtables kubectl kubelet kubernetes-cni socat
The following NEW packages will be installed:
conntrack cri-tools ebtables kubeadm kubectl kubelet kubernetes-cni socat
0 upgraded, 8 newly installed, 0 to remove and 553 not upgraded.
Need to get 54,4 MB of archives.
After this operation, 292 MB of additional disk space will be used.
Do you want to continue? [Y/n]

You will be prompted with a Y/n option to proceed with the installation. Enter Y and press Enter to continue; kubeadm and its dependencies will then be installed on your system.
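
Optionally, hold kubelet, kubeadm, and kubectl at the installed version so unattended upgrades do not move the cluster to a newer release unexpectedly (a common precaution, not part of the original steps):

$ sudo apt-mark hold kubelet kubeadm kubectl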

Check the kubeadm version

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:15:39Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

Kubernetes Deployment

Disable swap (if enabled) on both nodes

$ sudo swapoff -a
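
Note that swapoff -a only disables swap until the next reboot. To make the change permanent you can comment out the swap entry in /etc/fstab, for example with sed (a minimal sketch, assuming the swap line contains the string " swap "):

$ sudo sed -i '/ swap / s/^/#/' /etc/fstab   # comment out any fstab line mentioning swap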

Give each node a unique hostname and list it in /etc/hosts

$ sudo cat /etc/hosts
127.0.0.1 localhost
10.10.0.201 kubernetes-node1
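
If a node still has its default hostname, you can set the unique name shown above with hostnamectl (an optional step on systemd systems, not part of the original instructions):

$ sudo hostnamectl set-hostname kubernetes-node1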

Log in as root and initialize the Kubernetes control-plane node

root@kubernetes-node1:~# sudo kubeadm init --apiserver-advertise-address=10.10.0.201 --pod-network-cidr=10.12.0.0/16
[init] Using Kubernetes version: v1.16.2
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes-node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.0.201]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-node1 localhost] and IPs [10.10.0.201 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-node1 localhost] and IPs [10.10.0.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.501819 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubernetes-node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubernetes-node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: uptt4h.d20mzq0m7q93z3wq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.0.201:6443 --token uptt4h.d20mzq0m7q93z3wq \
    --discovery-token-ca-cert-hash sha256:c4dfbc0c5bed41ba26fbaa5612260636b20cc0bd04d8fef1951eabc175b68fd5

If you need to reset the initialization and start over, run the command below

root@kubernetes-node1:~# sudo kubeadm reset -f

If kubeadm warned that Docker is using the "cgroupfs" cgroup driver, add the cgroup-driver environment line Environment="cgroup-driver=systemd/cgroup-driver=cgroupfs" to the kubelet systemd drop-in, as shown below

ubuntu@kubernetes-node1:~$ sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="cgroup-driver=systemd/cgroup-driver=cgroupfs"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
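
After editing the drop-in file, reload systemd and restart the kubelet so the new environment takes effect (a follow-up step, assuming the kubelet is managed by systemd as shown above):

$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet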

Switch to a regular user and follow the instructions printed by kubeadm init

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
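
With the kubeconfig in place, kubectl now works as a regular user. As a quick sanity check you can list the nodes; the node typically reports NotReady until a pod network add-on is installed in the next steps:

$ kubectl get nodes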

After the initialization finishes, check the pods in all namespaces

ubuntu@kubernetes-node1:~$ sudo kubectl get pod --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-5644d7b6d9-qc7nz                   0/1     Pending   0          6m53s
kube-system   coredns-5644d7b6d9-tw2pp                   0/1     Pending   0          6m53s
kube-system   etcd-kubernetes-node1                      1/1     Running   0          5m52s
kube-system   kube-apiserver-kubernetes-node1            1/1     Running   0          6m12s
kube-system   kube-controller-manager-kubernetes-node1   1/1     Running   0          5m55s
kube-system   kube-proxy-9zxw6                           1/1     Running   0          6m53s
kube-system   kube-scheduler-kubernetes-node1            1/1     Running   0          6m14s

If the coredns pods are stuck in Pending, describing one of them looks like this

ubuntu@kubernetes-node1:~$ kubectl describe pods coredns-5644d7b6d9-qc7nz -n kube-system
Name:                 coredns-5644d7b6d9-qc7nz
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 <none>
Labels:               k8s-app=kube-dns
                      pod-template-hash=5644d7b6d9
Annotations:          <none>
Status:               Pending
IP:
IPs:                  <none>
Controlled By:        ReplicaSet/coredns-5644d7b6d9
Containers:
  coredns:
    Image:       k8s.gcr.io/coredns:1.6.2
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-t8bds (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-t8bds:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-t8bds
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  beta.kubernetes.io/os=linux
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

The coredns pods stay Pending because no pod network (CNI) add-on is installed yet. To enable pod networking, install Weave Net

ubuntu@kubernetes-node1:~$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
ubuntu@kubernetes-node1:~$ sudo kubectl get pod --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-5644d7b6d9-qc7nz                   0/1     Running   0          21m
kube-system   coredns-5644d7b6d9-tw2pp                   0/1     Running   0          21m
kube-system   etcd-kubernetes-node1                      1/1     Running   0          20m
kube-system   kube-apiserver-kubernetes-node1            1/1     Running   0          20m
kube-system   kube-controller-manager-kubernetes-node1   1/1     Running   0          20m
kube-system   kube-proxy-9zxw6                           1/1     Running   0          21m
kube-system   kube-scheduler-kubernetes-node1            1/1     Running   0          20m
kube-system   weave-net-8489k                            2/2     Running   0          21s

The coredns pods are now in the Running state.
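
As a final smoke test, you can deploy a sample workload and confirm it gets scheduled now that pod networking is in place (nginx here is just an example image, not part of the original document):

$ kubectl create deployment nginx --image=nginx
$ kubectl get pods -o wide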
