Setting Up a k8s Test Environment on Ubuntu 18

Installing a local k8s test environment

Server environment

Node         OS           IP
k8s-master   ubuntu18.04  10.10.2.10
k8s-node-01  ubuntu18.04  10.10.2.11
k8s-node-02  ubuntu18.04  10.10.2.12
ntfs         ubuntu18.04  10.10.2.21

Install Docker

$ sudo apt install docker.io
$ sudo systemctl daemon-reload
$ sudo systemctl start docker
$ sudo systemctl enable docker
  • Non-root users need to be added to the docker group
oni@k8s-master:~$ sudo gpasswd -a $USER docker
Adding user oni to group docker
oni@k8s-master:~$ newgrp docker

kubeadm, kubelet, and kubectl

# First, turn off system swap
$ sudo swapoff -a  # does not persist across reboots
# Comment out or delete the swap entry in fstab to disable it permanently
$ sudo vim /etc/fstab

$ sudo apt install -y apt-transport-https curl
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ sudo su
$ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
$ apt update
$ apt install -y kubelet kubeadm kubectl
$ apt-mark hold kubelet kubeadm kubectl
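The fstab edit can also be scripted instead of done in vim. A minimal sketch, operating on a temporary copy with hypothetical sample entries rather than the real /etc/fstab:

```shell
# Comment out any not-yet-commented fstab entry whose type is swap,
# so swap stays disabled after a reboot. Sample entries are hypothetical.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
UUID=1234-abcd / ext4 errors=remount-ro 0 1
/swapfile none swap sw 0 0
EOF

# Prefix '#' to every active line that has "swap" as a field.
result=$(sed -E 's/^([^#].*[[:space:]]swap[[:space:]].*)$/# \1/' "$fstab")
echo "$result"
rm -f "$fstab"
```

On a live system the same sed expression can be run with -i against /etc/fstab, after backing the file up.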

k8s-master

The official k8s images cannot be pulled directly because of the firewall; you can either run your own registry mirror or use a third-party mirror hosted in China.
#!/bin/bash
# This is a test environment, so the images are pulled from a third-party mirror

images=(kube-apiserver:v1.14.1 kube-controller-manager:v1.14.1 kube-scheduler:v1.14.1 kube-proxy:v1.14.1 pause:3.1 etcd:3.3.10)

for image in "${images[@]}" ; do
  docker pull mirrorgooglecontainers/$image
  docker tag mirrorgooglecontainers/$image k8s.gcr.io/$image
  docker rmi mirrorgooglecontainers/$image
done

docker pull coredns/coredns:1.3.1
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi coredns/coredns:1.3.1
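Before running the mirror script for real, it can help to do a dry run that only prints the docker commands, so the list can be reviewed first. A sketch using the same image list and mirror repo as above:

```shell
# Dry run of the mirror script: print the docker commands instead of running them.
MIRROR=mirrorgooglecontainers   # third-party mirror used in this post
images="kube-apiserver:v1.14.1 kube-controller-manager:v1.14.1 kube-scheduler:v1.14.1 kube-proxy:v1.14.1 pause:3.1 etcd:3.3.10"

cmds=$(for image in $images; do
  echo "docker pull $MIRROR/$image"
  echo "docker tag $MIRROR/$image k8s.gcr.io/$image"
  echo "docker rmi $MIRROR/$image"
done)
echo "$cmds"
```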

# Initialize the cluster
oni@k8s-master:~$ sudo kubeadm init --pod-network-cidr=172.168.10.0/24
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.2.10:6443 --token vf9okh.uwp98yrdzxowa980 \
--discovery-token-ca-cert-hash sha256:2d4da35dc030056d7a02d9634d74477d8c20698dab6b20e22fe0c7fec4298f33

Create the kube config as instructed in the output

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Save the command kubeadm join 10.10.2.10:6443 --token vf9okh.uwp98yrdzxowa980 --discovery-token-ca-cert-hash sha256:2d4da35dc030056d7a02d9634d74477d8c20698dab6b20e22fe0c7fec4298f33; worker nodes use it to connect to the master.
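The token and CA cert hash can be extracted from the saved join command with plain shell. A sketch using the values above (if the token has expired, its default lifetime being 24 hours, a fresh join command can be printed on the master with kubeadm token create --print-join-command):

```shell
# Extract the token and discovery hash from a saved kubeadm join command.
join_cmd='kubeadm join 10.10.2.10:6443 --token vf9okh.uwp98yrdzxowa980 --discovery-token-ca-cert-hash sha256:2d4da35dc030056d7a02d9634d74477d8c20698dab6b20e22fe0c7fec4298f33'

token=$(echo "$join_cmd" | sed -E 's/.*--token ([^ ]+).*/\1/')
hash=$(echo "$join_cmd" | sed -E 's/.*--discovery-token-ca-cert-hash ([^ ]+).*/\1/')
echo "token=$token"
echo "hash=$hash"
```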

oni@k8s-master:~/yaml/install$ kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 10m v1.14.1

The master node shows NotReady because no pod network add-on has been deployed yet

# Install flannel
oni@k8s-master:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

==Note==: the --pod-network-cidr value passed to kubeadm init (here, sudo kubeadm init --pod-network-cidr=172.168.10.0/24) must match the Network configured in flannel's kube-flannel.yml.
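One way to catch a mismatch before running kubeadm init is to compare the two values in the shell. A sketch, where the JSON fragment stands in for the net-conf.json section of a locally downloaded kube-flannel.yml that has already been edited to this CIDR (the stock file defaults to 10.244.0.0/16):

```shell
# Sample net-conf.json fragment; on a real setup, extract it from kube-flannel.yml.
cat > /tmp/kube-flannel-netconf.json <<'EOF'
{
  "Network": "172.168.10.0/24",
  "Backend": { "Type": "vxlan" }
}
EOF

pod_cidr="172.168.10.0/24"   # the value given to kubeadm init
flannel_cidr=$(sed -nE 's/.*"Network": "([^"]+)".*/\1/p' /tmp/kube-flannel-netconf.json)

if [ "$flannel_cidr" = "$pod_cidr" ]; then
  echo "CIDRs match: $flannel_cidr"
else
  echo "MISMATCH: flannel=$flannel_cidr kubeadm=$pod_cidr"
fi
```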

# flannel installation complete
oni@k8s-master:~$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-64vhd 1/1 Running 0 40m
coredns-fb8b8dccf-94tll 1/1 Running 0 40m
etcd-k8s-master 1/1 Running 1 40m
kube-apiserver-k8s-master 1/1 Running 1 39m
kube-controller-manager-k8s-master 1/1 Running 1 40m
kube-flannel-ds-amd64-wkvcq 1/1 Running 0 27m
kube-proxy-znkfb 1/1 Running 1 40m
kube-scheduler-k8s-master 1/1 Running 1 40m
oni@k8s-master:~$ kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 41m v1.14.1

Configure ipvs

oni@k8s-master:~$ kubectl edit configmaps kube-proxy -n kube-system

apiVersion: v1
data:
  config.conf: |-
    ...
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs" # set mode to ipvs
# Check whether the ipvs setting took effect: the kube-proxy log should say "Using ipvs Proxier"
# (kube-proxy only reads this config at startup; if it was already running, delete its pods so they restart with the new mode)
oni@k8s-master:~$ kubectl logs kube-proxy-znkfb -n kube-system
W0428 09:18:02.244497 1 node.go:113] Failed to retrieve node info: Get https://10.10.2.10:6443/api/v1/nodes/k8s-master: dial tcp 10.10.2.10:6443: connect: connection refused
I0428 09:18:02.244591 1 server_others.go:177] Using ipvs Proxier.
W0428 09:18:02.245157 1 proxier.go:366] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
W0428 09:18:02.245173 1 proxier.go:381] IPVS scheduler not specified, use rr by default
I0428 09:18:02.245352 1 server.go:555] Version: v1.14.1
I0428 09:18:02.257368 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0428 09:18:02.258516 1 config.go:102] Starting endpoints config controller
I0428 09:18:02.258542 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0428 09:18:02.258576 1 config.go:202] Starting service config controller
I0428 09:18:02.258589 1 controller_utils.go:1027] Waiting for caches to sync for service config controller

Install ipvsadm to manage ipvs (apt install ipvsadm); view the virtual server table with ipvsadm -ln.

oni@k8s-master:~$ sudo ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
-> 10.10.2.10:6443 Masq 1 2 0
TCP 10.96.0.10:53 rr
-> 172.168.10.5:53 Masq 1 0 0
-> 172.168.10.6:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 172.168.10.5:9153 Masq 1 0 0
-> 172.168.10.6:9153 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 172.168.10.5:53 Masq 1 0 0
-> 172.168.10.6:53 Masq 1 0 0
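Reading the table: each TCP/UDP line is a service VIP with its scheduler (rr = round-robin), and the -> lines beneath it are the pod backends. A small sketch that counts the DNS backends from a sample of the output above (on a live system, feed it sudo ipvsadm -ln instead of the heredoc):

```shell
# Count the pod backends behind the DNS service VIP, using a sample
# of the ipvsadm output shown above.
ipvs_out=$(cat <<'EOF'
TCP 10.96.0.10:53 rr
-> 172.168.10.5:53 Masq 1 0 0
-> 172.168.10.6:53 Masq 1 0 0
EOF
)
backends=$(echo "$ipvs_out" | grep -c '^->')
echo "backends: $backends"
```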

Worker node setup

oni@k8s-node-01:~$ sudo kubeadm join 10.10.2.10:6443 --token vf9okh.uwp98yrdzxowa980 \
--discovery-token-ca-cert-hash sha256:2d4da35dc030056d7a02d9634d74477d8c20698dab6b20e22fe0c7fec4298f33
oni@k8s-node-02:~$ sudo kubeadm join 10.10.2.10:6443 --token vf9okh.uwp98yrdzxowa980 \
--discovery-token-ca-cert-hash sha256:2d4da35dc030056d7a02d9634d74477d8c20698dab6b20e22fe0c7fec4298f33

Checking the nodes from the master

oni@k8s-master:~$ kubectl get node 
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 75m v1.14.1
k8s-node-01 NotReady <none> 37s v1.14.1
k8s-node-02 NotReady <none> 14s v1.14.1
oni@k8s-master:~$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-fb8b8dccf-k7j74 1/1 Running 0 18m 172.168.10.8 k8s-master <none> <none>
kube-system coredns-fb8b8dccf-lxf6v 1/1 Running 0 18m 172.168.10.9 k8s-master <none> <none>
kube-system etcd-k8s-master 1/1 Running 5 76m 10.10.2.10 k8s-master <none> <none>
kube-system kube-apiserver-k8s-master 1/1 Running 4 76m 10.10.2.10 k8s-master <none> <none>
kube-system kube-controller-manager-k8s-master 1/1 Running 4 76m 10.10.2.10 k8s-master <none> <none>
kube-system kube-flannel-ds-amd64-5l2mm 0/1 Init:0/1 0 2m27s 10.10.2.11 k8s-node-01 <none> <none>
kube-system kube-flannel-ds-amd64-8fqzz 0/1 Init:0/1 0 2m4s 10.10.2.12 k8s-node-02 <none> <none>
kube-system kube-flannel-ds-amd64-wkvcq 1/1 Running 0 64m 10.10.2.10 k8s-master <none> <none>
kube-system kube-proxy-2hbcf 0/1 ContainerCreating 0 2m27s 10.10.2.11 k8s-node-01 <none> <none>
kube-system kube-proxy-st2tc 0/1 ContainerCreating 0 2m4s 10.10.2.12 k8s-node-02 <none> <none>
kube-system kube-proxy-znkfb 1/1 Running 0 77m 10.10.2.10 k8s-master <none> <none>
kube-system kube-scheduler-k8s-master 1/1 Running 4 76m 10.10.2.10 k8s-master <none> <none>

Pods shown as ContainerCreating / Init:0/1 will start automatically once their containers finish being created, and will move to Running.
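A readiness check can be scripted by filtering the STATUS column. A dry-run sketch against the sample output above (on a live cluster, replace the heredoc with the output of kubectl get node --no-headers):

```shell
# Count nodes that are not yet Ready, from a sample of the output above.
nodes=$(cat <<'EOF'
k8s-master Ready master 75m v1.14.1
k8s-node-01 NotReady <none> 37s v1.14.1
k8s-node-02 NotReady <none> 14s v1.14.1
EOF
)

not_ready=$(echo "$nodes" | awk '$2 == "NotReady"' | wc -l)
echo "nodes not ready: $not_ready"
```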

Installation complete
