Our company recently took on a project to build a Kubernetes cluster on a customer's intranet. Because the environment has no Internet access, the cluster has to be installed completely offline. This article walks through an offline deployment with kubeadm.
I. Deployment Environment
1. System Information
CentOS 7.1, 64-bit, 2 machines
[root@localhost ~]# uname -a
Linux k8s-master 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 11:36:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)
Master: 7.7.0.23          Node1: 7.7.0.24
2. Set the Hostname on Both Machines
On the master:
hostnamectl --static set-hostname k8s-master
On the node:
hostnamectl --static set-hostname k8s-node-1
3. Initialize the System Environment
Add the hosts entries on both machines by running:
echo -e '7.7.0.23 k8s-master\n7.7.0.23 etcd\n7.7.0.23 registry\n7.7.0.24 k8s-node-1' >> /etc/hosts
Disable the firewall on both machines:
systemctl stop firewalld.service
systemctl disable firewalld.service
Disable SELinux on both hosts.
Temporarily: setenforce 0
Permanently (takes effect after a reboot):
sed -i "s@SELINUX=enforcing@SELINUX=disabled@" /etc/selinux/config
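To confirm the change, getenforce reports the current SELinux mode:
getenforce   # Permissive after setenforce 0, Disabled after the config edit plus a reboot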
Synchronize the clocks on the hosts (requires root):
Example: date -s "2018-06-20 01:01:01"
If the clocks are out of sync, the node will fail to join the cluster with an error like:
[discovery] Failed to request cluster info, will try again: [Get https://7.7.0.23:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]
II. Deploy the Master
1. Install Docker
Upload the docker-offline.tar.gz archive, extract it, and run the install script:
[root@k8s-master ~]# tar xvf docker-offline.tar.gz
[root@k8s-master ~]# cd docker-offline
[root@k8s-master docker-offline]# ./docker-install.sh
[root@k8s-master docker-offline]# cat docker-install.sh
#!/bin/bash
basedir=`pwd`
mkdir /etc/yum.repos.d/bak && mv /etc/yum.repos.d/Cent* /etc/yum.repos.d/bak
cp $basedir/docker.repo /etc/yum.repos.d/
sed -i "s@baseurl=file://@baseurl=file://$basedir/dockerRpm@" /etc/yum.repos.d/docker.repo
yum clean all && yum makecache fast
yum install docker-ce -y
mv /etc/yum.repos.d/bak/* /etc/yum.repos.d/
rm -rf /etc/yum.repos.d/bak
A quick note on how the local docker yum repository used by this script is built; see the sketch below.
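The docker.repo in the archive points at a plain directory of rpm packages. A minimal sketch of how such a local repository can be prepared on a machine that still has Internet access (the repo id and paths are illustrative; in a fully offline setup createrepo itself also has to come from a local rpm):
# download docker-ce and all of its dependencies into dockerRpm/ (needs yum-utils)
yumdownloader --resolve --destdir=./dockerRpm docker-ce
# generate the repodata/ metadata so yum can treat the directory as a repository
createrepo ./dockerRpm
# docker.repo shipped alongside the script; the install script's sed fills in the baseurl path
cat docker.repo
[docker-ce-local]
name=Docker CE local repository
baseurl=file://
enabled=1
gpgcheck=0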
[root@k8s-master docker-offline]# systemctl start docker && systemctl enable docker
[root@k8s-master docker-offline]# docker version
Client:
 Version:      17.03.2-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue Jun 27 02:21:36 2017
 OS/Arch:      linux/amd64
Server:
 Version:      17.03.2-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue Jun 27 02:21:36 2017
 OS/Arch:      linux/amd64
 Experimental: false
2. Set Kernel Parameters
(mainly to avoid routing problems on RHEL/CentOS 7):
[root@k8s-master docker-offline]# echo -e "net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1" >> /etc/sysctl.conf
[root@k8s-master docker-offline]# sysctl -p
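If sysctl -p complains that the net.bridge.* keys do not exist, the bridge netfilter code has not been loaded yet; depending on the kernel build it lives in a separate br_netfilter module or inside the bridge module itself (the module names below are the usual ones, not verified against every 3.10 build):
modprobe br_netfilter 2>/dev/null || modprobe bridge   # load whichever variant this kernel provides
sysctl -p                                              # re-apply /etc/sysctl.conf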
3. Upload the Cluster Images
Images and packages: https://pan.baidu.com/s/1MeRXs4Gk65xE-RSnHcRgVw (password: hqco)
[root@k8s-master ]# tar xvf k8s_images.tar.gz
[root@k8s-master ]# cd k8s_images/docker_images
[root@k8s-master docker_images]# for i in `ll | awk '{print $9}'`;do docker load < $i;done
When this finishes, the images are ready:
[root@k8s-master]# docker images
Install the Kubernetes packages:
[root@k8s-master docker_images]# cd ../
[root@k8s-master k8s_images]# rpm -ivh socat-1.7.3.2-2.el7.x86_64.rpm
[root@k8s-master k8s_images]# rpm -ivh kubernetes-cni-0.6.0-0.x86_64.rpm \
kubelet-1.9.0-0.x86_64.rpm \
kubectl-1.9.0-0.x86_64.rpm \
kubeadm-1.9.0-0.x86_64.rpm
[root@k8s-master k8s_images]# rpm -qa | grep kube
kubelet-1.9.0-0.x86_64
kubectl-1.9.0-0.x86_64
kubernetes-cni-0.6.0-0.x86_64
kubeadm-1.9.0-0.x86_64
By default kubelet and docker use different cgroup drivers: this docker build defaults to cgroupfs, while the kubelet drop-in defaults to systemd, so the two have to be made consistent. In addition, deploying k8s 1.9 on these VMs requires the swap partition to be turned off.
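Before editing the kubelet drop-in below, you can confirm which cgroup driver docker is actually using (assuming docker is already running):
docker info 2>/dev/null | grep -i 'cgroup driver'   # expected to print: Cgroup Driver: cgroupfs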
[root@k8s-master k8s_images]# swapoff -a
[root@k8s-master k8s_images]# sed -i 's@Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"@Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"@' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[root@k8s-master k8s_images]# systemctl daemon-reload
Start kubelet and enable it on boot:
[root@k8s-master k8s_images]# systemctl start kubelet && systemctl enable kubelet
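At this point kubelet has nothing to run yet; with a kubeadm setup it is normal for it to keep restarting until kubeadm init (next step) writes its configuration. You can watch its state with:
systemctl status kubelet -l   # expect activating / auto-restart until the cluster is initialized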
4. Initialize the Cluster
Kubernetes supports several network plugins such as flannel, weave and calico. Here we use flannel, which requires the --pod-network-cidr flag. 10.244.0.0/16 is the default network configured in kube-flannel.yml; it can be customized, but if you change it, --pod-network-cidr and kube-flannel.yml must stay consistent (see the check below).
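In the stock kube-flannel.yml the pod network is defined in the net-conf.json entry of the kube-flannel-cfg ConfigMap; a quick way to check the value your copy of the manifest actually carries (exact formatting may differ between flannel versions):
grep -n '"Network"' kube-flannel.yml   # expect something like: "Network": "10.244.0.0/16"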
So that kubectl can talk to the apiserver, add an environment variable:
[root@k8s-master k8s_images]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@k8s-master k8s_images]# source ~/.bash_profile
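As an alternative to the KUBECONFIG variable, you can give the current user its own copy of the admin config once kubeadm init (next step) has generated /etc/kubernetes/admin.conf; this is the approach kubeadm's own output suggests:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config    # file exists only after kubeadm init
chown $(id -u):$(id -g) $HOME/.kube/config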
Initialize the cluster:
[root@k8s-master k8s_images]# kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16
Save this line from the output: kubeadm join --token 84a65f.0fdac91a5852510c 7.7.0.23:6443 --discovery-token-ca-cert-hash sha256:0d78812defc7fb554ad7a7c9bfad194cccb82817a69c9be554d776f976ed772d
It is needed later when the node joins the cluster.
The token expires after 24 hours; after that a new one has to be generated.
To list or regenerate the token, on the master:
kubeadm token list
or: kubeadm token create
To get the sha256 hash of the CA certificate, on the master:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
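With a freshly created token and the hash printed by the openssl command, the join command for a new node is assembled in the same form as the one recorded above (the angle-bracket values are placeholders, not real values):
kubeadm join --token <new-token> 7.7.0.23:6443 \
  --discovery-token-ca-cert-hash sha256:<hash-from-openssl>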
Check that Kubernetes was installed successfully:
[root@k8s-master k8s_images]# kubectl version
(If the initialization fails and has to be repeated, run kubeadm reset first.)
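Another quick health check of the control-plane components:
kubectl get cs   # short for componentstatuses; scheduler, controller-manager and etcd should report Healthy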
5. Deploy the flannel Network Plugin on the Master
[root@k8s-master k8s_images]# kubectl create -f kube-flannel.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
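After a short wait the flannel and kube-dns pods in the kube-system namespace should reach Running, and the master node should turn Ready once networking is up; you can watch this with:
kubectl get pods -n kube-system -o wide   # kube-flannel-ds-*, kube-dns-*, kube-proxy-*, ...
kubectl get nodes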
III. Deploy the Node
1. Install Docker
Upload docker-offline.tar.gz and install docker:
bash ./docker-install.sh
Start docker and set the kernel parameters, as on the master:
[root@k8s-node-1 docker-offline]# systemctl start docker && systemctl enable docker
[root@k8s-node-1 docker-offline]# echo -e "net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1" >> /etc/sysctl.conf
[root@k8s-node-1 docker-offline]# sysctl -p
2. Install Kubernetes
Upload k8s_images.tar.gz and install the Kubernetes packages:
[root@k8s-node-1 ]# tar xvf k8s_images.tar.gz
[root@k8s-node-1 ]# cd k8s_images/docker_images
[root@k8s-node-1 docker_images]# for i in `ll | awk '{print $9}'`;do docker load < $i;done
[root@k8s-node-1 k8s_images]# rpm -ivh socat-1.7.3.2-2.el7.x86_64.rpm
[root@k8s-node-1 k8s_images]# rpm -ivh kubernetes-cni-0.6.0-0.x86_64.rpm \
kubelet-1.9.0-0.x86_64.rpm \
kubectl-1.9.0-0.x86_64.rpm \
kubeadm-1.9.0-0.x86_64.rpm
3. Configure kubelet
[root@k8s-node-1 k8s_images]# swapoff -a
[root@k8s-node-1 k8s_images]# sed -i 's@Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"@Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"@' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[root@k8s-node-1 k8s_images]# systemctl daemon-reload
[root@k8s-node-1 k8s_images]# systemctl start kubelet && systemctl enable kubelet
IV. Join the Node to the Cluster
Join the node to the cluster:
[root@k8s-node-1 k8s_images]# kubeadm join --token 84a65f.0fdac91a5852510c 7.7.0.23:6443 --discovery-token-ca-cert-hash sha256:0d78812defc7fb554ad7a7c9bfad194cccb82817a69c9be554d776f976ed772d (this is the command recorded when the master was initialized)
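The node itself has no kubectl config, but you can still confirm that it joined by checking what the master has scheduled onto it (the grep pattern below is a rough expectation of the container names, not an exact listing):
docker ps --format '{{.Names}}' | grep -E 'kube-proxy|flannel'   # kube-proxy and flannel containers should appear
systemctl status kubelet -l                                      # kubelet should now be active (running)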
V. Check the Cluster Status
On the master, check the node status:
[root@k8s-master]# kubectl get nodes
On the master, check that the cluster's pods are running:
[root@k8s-master]# kubectl get pods --all-namespaces