1. Preparation
1.1 Two ways to deploy a Kubernetes cluster for production
kubeadm
Kubeadm is a Kubernetes deployment tool that provides kubeadm init and kubeadm join for quickly standing up a cluster.
Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
Binary packages
Download the release binaries from GitHub and deploy each component by hand to assemble the cluster.
Kubeadm lowers the barrier to entry, but it hides many details, which makes problems hard to troubleshoot. If you want more control, deploying from binary packages is recommended: it is more work up front, but you learn a lot about how the pieces fit together along the way, which also pays off in later maintenance.
1.2 Installation requirements
Before starting, the machines for the cluster must satisfy the following:
One or more machines running CentOS 7.x x86_64
Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB of disk or more
Network connectivity between all machines in the cluster
Outbound internet access for pulling images; if the servers cannot reach the internet, download the images in advance and load them onto the nodes
Swap disabled
1.3 Environment
Software:
Software | Version |
---|---|
OS | CentOS7.5_x64 |
Docker | 19-ce |
Kubernetes | 1.20.4 |
Overall server plan:
Role | IP | Components |
---|---|---|
k8s-master1 | 10.21.20.6 | kube-apiserver,kube-controller-manager,kube-scheduler,etcd |
k8s-master2 | 10.171.0.10 | kube-apiserver,kube-controller-manager,kube-scheduler |
k8s-master3 | 10.171.0.11 | kube-apiserver,kube-controller-manager,kube-scheduler |
k8s-node1 | 10.170.0.10 | kubelet,kube-proxy,docker,etcd |
k8s-node2 | 10.170.0.16 | kubelet,kube-proxy,docker,etcd |
node3 | 10.171.0.12 | kubelet,kube-proxy,docker |
node4 | 10.171.0.13 | kubelet,kube-proxy,docker |
node5 | 10.171.0.14 | kubelet,kube-proxy,docker |
node6 | 10.171.0.15 | kubelet,kube-proxy,docker |
node7 | 10.171.0.16 | kubelet,kube-proxy,docker |
node8 | 10.171.0.17 | kubelet,kube-proxy,docker |
node9 | 10.171.0.18 | kubelet,kube-proxy,docker |
node10 | 10.171.0.19 | kubelet,kube-proxy,docker |
node11 | 10.171.0.20 | kubelet,kube-proxy,docker |
node12 | 10.171.0.21 | kubelet,kube-proxy,docker |
node13 | 10.171.0.22 | kubelet,kube-proxy,docker |
node14 | 10.171.0.23 | kubelet,kube-proxy,docker |
node15 | 10.171.0.24 | kubelet,kube-proxy,docker |
node16 | 10.171.0.25 | kubelet,kube-proxy,docker |
node17 | 10.171.0.26 | kubelet,kube-proxy,docker |
node18 | 10.171.0.27 | kubelet,kube-proxy,docker |
node19 | 10.171.0.28 | kubelet,kube-proxy,docker |
node20 | 10.171.0.29 | kubelet,kube-proxy,docker |
node21 | 10.171.0.30 | kubelet,kube-proxy,docker |
Load Balancer (Master) | 10.171.0.31, 10.171.0.9 (VIP) | Nginx L4 |
Load Balancer (Backup) | 10.171.0.32 | Nginx L4 |
In this example I use a cloud provider's load balancer; the keepalived+nginx approach is also written up in this doc for reference.
Note: some readers' machines are too low-spec to run this many VMs, so this HA cluster is built in two phases: first a single-master architecture (10.21.20.6 / 10.170.0.10 / 10.170.0.16), then an expansion to the multi-master architecture planned above, which doubles as practice for the master scale-out procedure.
Single-master architecture diagram:
Single-master server plan:
Role | IP | Components |
---|---|---|
k8s-master1 | 10.21.20.6 | kube-apiserver,kube-controller-manager,kube-scheduler,etcd |
k8s-node1 | 10.170.0.10 | kubelet,kube-proxy,docker,etcd |
k8s-node2 | 10.170.0.16 | kubelet,kube-proxy,docker,etcd |
1.4 OS initialization
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
setenforce 0 # temporary
# Disable swap
swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
# Set the hostname according to the plan
hostnamectl set-hostname <hostname>
# Add hosts entries on the master
cat >> /etc/hosts << EOF
10.21.20.6 k8s-master1
10.170.0.10 k8s-node1
10.170.0.16 k8s-node2
EOF
# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # apply
# Time sync
yum install ntpdate -y
ntpdate ntp.aliyun.com
# Load the ipvs kernel modules
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
lsmod | grep ip_vs
lsmod | grep nf_conntrack_ipv4
yum install -y ipvsadm
[root@k8s-master1 ~]# lsmod | grep ip_vs
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs 145497 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 139264 2 ip_vs,nf_conntrack_ipv4
libcrc32c 12644 2 ip_vs,nf_conntrack
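Note that modprobe does not persist across reboots. To have the modules loaded automatically at boot, you can additionally drop a modules-load file (systemd-modules-load reads /etc/modules-load.d/ by default on CentOS 7):
cat > /etc/modules-load.d/ipvs.conf << EOF
# ipvs modules required by kube-proxy in ipvs mode
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF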
2. Deploying the Etcd Cluster
Etcd is a distributed key-value store that Kubernetes uses for all of its state, so an Etcd database must be prepared first. To avoid a single point of failure, deploy it as a cluster: a 3-node cluster, as used here, tolerates 1 machine failure; a 5-node cluster tolerates 2.
Node | IP |
---|---|
etcd-1 | 10.21.20.6 |
etcd-2 | 10.170.0.10 |
etcd-3 | 10.170.0.16 |
Note: to save machines, etcd shares hosts with the K8s nodes here. It can also be deployed outside the k8s cluster, as long as the apiserver can reach it.
2.1 Prepare the cfssl certificate tool
cfssl is an open-source certificate management tool that generates certificates from JSON files and is more convenient to use than openssl.
Run this on any one server; the master node is used here.
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl*
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
2.2 Generate Etcd certificates
1. Self-signed certificate authority (CA)
Create a working directory:
mkdir -p /opt/TLS/{etcd,k8s}
cd /opt/TLS/etcd
Create the CA CSR file:
cat > ca-csr.json << EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "ShangHai",
"L": "ShangHai",
"O": "k8s",
"OU": "system"
}
],
"ca": {
"expiry": "87600h"
}
}
EOF
Notes:
CN (Common Name): kube-apiserver extracts this field from a certificate as the requesting user name; browsers use it to verify whether a site is legitimate.
O (Organization): kube-apiserver extracts this field as the group the requesting user belongs to.
Generate the CA certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Configure the CA signing policy:
cat > ca-config.json << EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
EOF
Create the etcd CSR file:
cat > etcd-csr.json << EOF
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"10.21.20.6",
"10.170.0.10",
"10.170.0.16",
"10.171.0.10",
"10.171.0.11"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [{
"C": "CN",
"ST": "ShangHai",
"L": "ShangHai",
"O": "k8s",
"OU": "system"
}]
}
EOF
Note: the hosts field above must contain the cluster-internal IP of every etcd node, without exception! To simplify later scale-out, you can list a few spare IPs in advance.
Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
[root@k8s-master1 etcd]# ls etcd*.pem
etcd-key.pem etcd.pem
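To double-check which IPs were actually baked into the certificate's SAN list, decode it with cfssl-certinfo:
cfssl-certinfo -cert etcd.pem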
2.3 Download the binaries from GitHub
Download: https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
2.4 Deploy the Etcd cluster
The following is done on node 1; to keep things simple, all files generated on node 1 will be copied to nodes 2 and 3 afterwards.
1. Create working directories and unpack the binaries
mkdir /opt/etcd/{bin,cfg,ssl} -p
tar xvf etcd-v3.4.13-linux-amd64.tar.gz
mv etcd-v3.4.13-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
2. Create the etcd config file
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.21.20.6:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.21.20.6:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.21.20.6:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.21.20.6:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://10.21.20.6:2380,etcd2=https://10.170.0.10:2380,etcd3=https://10.170.0.16:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
- ETCD_NAME: node name, unique within the cluster
- ETCD_DATA_DIR: data directory
- ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) traffic
- ETCD_LISTEN_CLIENT_URLS: listen address for client traffic
- ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
- ETCD_ADVERTISE_CLIENT_URLS: advertised client address
- ETCD_INITIAL_CLUSTER: addresses of all cluster members
- ETCD_INITIAL_CLUSTER_TOKEN: cluster token
- ETCD_INITIAL_CLUSTER_STATE: state when joining; new for a fresh cluster, existing to join one that already exists
3. Manage etcd with systemd
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \\
--cert-file=/opt/etcd/ssl/etcd.pem \\
--key-file=/opt/etcd/ssl/etcd-key.pem \\
--peer-cert-file=/opt/etcd/ssl/etcd.pem \\
--peer-key-file=/opt/etcd/ssl/etcd-key.pem \\
--trusted-ca-file=/opt/etcd/ssl/ca.pem \\
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \\
--peer-client-cert-auth \\
--logger=zap \\
--client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
4. Copy the certificates generated earlier
Copy them into the paths referenced by the config file:
cp /opt/TLS/etcd/ca*pem /opt/TLS/etcd/etcd*pem /opt/etcd/ssl/
ll /opt/etcd/ssl/
total 16
-rw------- 1 root root 1675 Mar 17 15:19 ca-key.pem
-rw-r--r-- 1 root root 1363 Mar 17 15:19 ca.pem
-rw------- 1 root root 1675 Mar 17 15:19 etcd-key.pem
-rw-r--r-- 1 root root 1456 Mar 17 15:19 etcd.pem
5. Start and enable at boot (note: on the first node, systemctl start will block until another member joins; that is expected for a brand-new cluster)
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
6. Copy all files generated on node 1 to nodes 2 and 3
for i in k8s-node1 k8s-node2;do rsync -vaz /opt/etcd root@$i:/opt/;done
for i in k8s-node1 k8s-node2;do rsync -vaz /usr/lib/systemd/system/etcd.service root@$i:/usr/lib/systemd/system/;done
Then, on nodes 2 and 3, edit etcd.conf and change the node name and the local server IP:
#[Member]
ETCD_NAME="etcd1" # change this: etcd2 on node 2, etcd3 on node 3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.21.20.6:2380" # change to this server's IP
ETCD_LISTEN_CLIENT_URLS="https://10.21.20.6:2379,http://127.0.0.1:2379" # change to this server's IP
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.21.20.6:2380" # change to this server's IP
ETCD_ADVERTISE_CLIENT_URLS="https://10.21.20.6:2379" # change to this server's IP
ETCD_INITIAL_CLUSTER="etcd1=https://10.21.20.6:2380,etcd2=https://10.170.0.10:2380,etcd3=https://10.170.0.16:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Finally, start etcd and enable it at boot, same as on etcd1.
7. Check cluster health
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://10.21.20.6:2379,https://10.170.0.10:2379,https://10.170.0.16:2379" endpoint health
https://10.21.20.6:2379 is healthy: successfully committed proposal: took = 16.043031ms
https://10.170.0.10:2379 is healthy: successfully committed proposal: took = 16.703072ms
https://10.170.0.16:2379 is healthy: successfully committed proposal: took = 19.600341ms
If you see output like the above, the cluster was deployed successfully. If something fails, check the logs first (/var/log/messages or journalctl -u etcd), and also verify the permissions on /var/lib/etcd/default.etcd.
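Beyond endpoint health, etcdctl member list is useful for confirming that all three members registered (same TLS flags as above):
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://10.21.20.6:2379" member list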
4. Deploying the Master Node
If you run into problems while following along, or spot a mistake in this doc, feel free to contact me on WeChat: guilin_20
4.1 Generate the kube-apiserver certificate
Create the CSR file:
cat > kube-apiserver-csr.json << EOF
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"172.135.0.1",
"10.21.20.6",
"10.170.0.10",
"10.170.0.16",
"10.171.0.9",
"10.171.0.10",
"10.171.0.11",
"10.171.0.12",
"10.171.0.13",
"10.171.0.14",
"10.171.0.15",
"10.171.0.16",
"10.171.0.17",
"10.171.0.18",
"10.171.0.19",
"10.171.0.20",
"10.171.0.21",
"10.171.0.22",
"10.171.0.23",
"10.171.0.24",
"10.171.0.25",
"10.171.0.26",
"10.171.0.27",
"10.171.0.28",
"10.171.0.29",
"10.171.0.30",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "ShangHai",
"L": "ShangHai",
"O": "k8s",
"OU": "system"
}
]
}
EOF
Notes:
If the hosts field is non-empty, it must list every IP or domain name authorized to use the certificate.
Because this certificate will be used by the whole kubernetes master cluster, include every master node IP, plus the first IP of the Service network (generally the first IP of the service-cluster-ip-range passed to kube-apiserver, e.g. 172.135.0.1).
4.2 Generate the certificate and token file
Enable the TLS Bootstrapping mechanism
TLS bootstrapping: once the apiserver enables TLS authentication, each node's kubelet and kube-proxy must present valid CA-signed client certificates to talk to kube-apiserver. Issuing those client certificates by hand is a lot of work when there are many nodes, and it complicates cluster scale-out. To streamline this, Kubernetes introduced TLS bootstrapping to issue client certificates automatically: kubelet starts as a low-privilege user and requests a certificate from the apiserver, which signs it dynamically. This approach is strongly recommended on nodes; it is currently used mainly for kubelet, while kube-proxy still uses a certificate that we issue centrally.
TLS bootstrapping workflow:
Generate the apiserver certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
Create the token file referenced in the apiserver config:
cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
Format: token,username,UID,group
Copy the token file into place:
cp token.csv /opt/kubernetes/cfg/
ll kube-apiserver*pem token.csv
-rw------- 1 root root 1679 Mar 17 20:18 kube-apiserver-key.pem
-rw-r--r-- 1 root root 1818 Mar 17 20:18 kube-apiserver.pem
-rw-r--r-- 1 root root 84 Mar 17 20:18 token.csv
Note: the IPs in the certificate's hosts field must cover every Master/LB/VIP IP, without exception! Listing a few spare IPs makes later scale-out easier.
4.3 Download the binaries from GitHub
Download: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#server-binaries
https://dl.k8s.io/v1.20.4/kubernetes-server-linux-amd64.tar.gz
Note: the page lists many packages; downloading the server package alone is enough, as it contains the binaries for both the Master and the Worker Node.
4.4 Unpack the binaries
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/
4.5 Deploy kube-apiserver
1. Create the config file
cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--v=4"
EOF
Note: of the two backslashes above, the first is an escape character and the second the line-continuation character; the escape is needed so the heredoc (EOF) writes a literal line continuation into the file.
--service-account-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \ # required on 1.20 and above
--service-account-issuer=https://kubernetes.default.svc.cluster.local \ # required on 1.20 and above
- --logtostderr: log to stderr (enable logging)
- --v: log verbosity
- --log-dir: log directory
- --etcd-servers: etcd cluster endpoints
- --bind-address: listen address
- --secure-port: HTTPS secure port
- --advertise-address: advertised cluster address
- --allow-privileged: allow privileged containers
- --service-cluster-ip-range: virtual IP range for Services
- --enable-admission-plugins: admission-control plugins
- --authorization-mode: authorization modes; enables RBAC authorization and node self-management
- --enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
- --token-auth-file: bootstrap token file
- --service-node-port-range: default port range for NodePort Services
- --kubelet-client-xxx: client certificate for the apiserver to reach kubelets
- --tls-xxx-file: apiserver HTTPS certificates
- --etcd-xxxfile: certificates for connecting to the etcd cluster
- --audit-log-xxx: audit log settings
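The rest of the flag list is hidden in the original post. For reference, here is a sketch of a complete kube-apiserver.conf consistent with the flag notes above and with values used elsewhere in this guide; the service CIDR 172.135.0.0/16 (inferred from the service IP 172.135.0.1 and clusterDNS 172.135.0.2), the port range, and the file paths are assumptions, not the author's exact file:
cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
# Sketch: values inferred from this guide's plan; adjust to your environment.
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://10.21.20.6:2379,https://10.170.0.10:2379,https://10.170.0.16:2379 \\
--bind-address=10.21.20.6 \\
--secure-port=6443 \\
--advertise-address=10.21.20.6 \\
--allow-privileged=true \\
--service-cluster-ip-range=172.135.0.0/16 \\
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/kube-apiserver.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/kube-apiserver-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/kube-apiserver.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/kube-apiserver-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/etcd.pem \\
--etcd-keyfile=/opt/etcd/ssl/etcd-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF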
2. Copy the certificates generated earlier
Copy them into the paths referenced by the config file:
cp /opt/TLS/etcd/ca*pem /opt/TLS/etcd/kube-apiserver*pem /opt/kubernetes/ssl/
3. Manage kube-apiserver with systemd
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
4. Start and enable at boot
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
4.6 Deploy kubectl
4.6.1 Create the CSR file
cat > admin-csr.json << EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "ShangHai",
"L": "ShangHai",
"O": "system:masters",
"OU": "system"
}
]
}
EOF
Notes:
kube-apiserver uses RBAC to authorize requests from clients (kubelet, kube-proxy, Pods);
kube-apiserver predefines some RoleBindings for RBAC, e.g. cluster-admin binds the group system:masters to the role cluster-admin, which grants permission to call all kube-apiserver APIs;
O sets this certificate's group to system:masters: because the certificate is CA-signed, authentication succeeds, and because the group is the pre-authorized system:masters, the client is granted access to all APIs;
Note:
this admin certificate is used later to generate the administrator's kubeconfig file. RBAC is the recommended way to control roles and permissions in Kubernetes, which takes the certificate's CN field as the User and the O field as the Group;
"O": "system:masters" must be exactly system:masters, otherwise kubectl create clusterrolebinding will fail later.
4.6.2 Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
cp admin*.pem /opt/kubernetes/ssl/
4.6.3 Create the kubeconfig file
kubeconfig is kubectl's configuration file; it contains everything needed to reach the apiserver: the apiserver address, the CA certificate, and the client's own certificate.
Set cluster parameters:
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.21.20.6:6443 --kubeconfig=kube.config # with HA, point this at the VIP
Set client authentication parameters:
kubectl config set-credentials admin --client-certificate=/opt/kubernetes/ssl/admin.pem --client-key=/opt/kubernetes/ssl/admin-key.pem --embed-certs=true --kubeconfig=kube.config
Set context parameters:
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
Set the default context:
kubectl config use-context kubernetes --kubeconfig=kube.config
mkdir ~/.kube
cp kube.config ~/.kube/config
Grant the kubernetes certificate user access to the kubelet API:
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
4.6.4 Check cluster component status
With the steps above complete, kubectl can talk to kube-apiserver:
kubectl cluster-info
kubectl get componentstatuses
kubectl get all --all-namespaces
4.7 Deploy kube-controller-manager
4.7.1 Create the CSR file
The certificate must list the IPs of all masters:
cat > kube-controller-manager-csr.json << EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"hosts": [
"127.0.0.1",
"10.21.20.6",
"10.171.0.10",
"10.171.0.11"
],
"names": [
{
"C": "CN",
"ST": "ShangHai",
"L": "ShangHai",
"O": "system:kube-controller-manager",
"OU": "system"
}
]
}
EOF
Notes:
the hosts list contains every kube-controller-manager node IP;
CN is system:kube-controller-manager and O is system:kube-controller-manager; Kubernetes' built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.
4.7.2 Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
ls kube-controller-manager*.pem
cp kube-controller-manager*.pem /opt/kubernetes/ssl/
4.7.3 Create the kube-controller-manager kubeconfig
Set cluster parameters:
kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://10.21.20.6:6443 --kubeconfig=kube-controller-manager.kubeconfig
Set client authentication parameters:
kubectl config set-credentials system:kube-controller-manager --client-certificate=/opt/kubernetes/ssl/kube-controller-manager.pem --client-key=/opt/kubernetes/ssl/kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
Set context parameters:
kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Set the default context:
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
cp kube-controller-manager.kubeconfig /opt/kubernetes/cfg/
4.7.4 Create the config file
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \\
--port=10252 \\
您暂时无权查看此隐藏内容!
--log-dir=/opt/kubernetes/logs/ \\
--v=2"
EOF
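As in 4.5, most of the flags are hidden in the original post. A sketch of a complete kube-controller-manager.conf under the same assumptions (cluster-cidr matches the clusterCIDR used for kube-proxy later; the signing duration, feature gate, and controller list follow common binary-deployment practice and are assumptions, not the author's exact file):
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
# Sketch: values inferred from this guide's plan; adjust to your environment.
KUBE_CONTROLLER_MANAGER_OPTS="--port=10252 \\
--secure-port=10257 \\
--bind-address=127.0.0.1 \\
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--service-cluster-ip-range=172.135.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h \\
--allocate-node-cidrs=true \\
--cluster-cidr=172.244.0.0/16 \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--leader-elect=true \\
--feature-gates=RotateKubeletServerCertificate=true \\
--controllers=*,bootstrapsigner,tokencleaner \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/opt/kubernetes/logs/ \\
--v=2"
EOF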
4.7.5 Create the service unit
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
4.7.6 Start the service
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
4.8 Deploy kube-scheduler
4.8.1 Create the CSR file
cat > kube-scheduler-csr.json << EOF
{
"CN": "system:kube-scheduler",
"hosts": [
"127.0.0.1",
"10.21.20.6",
"10.171.0.10",
"10.171.0.11"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "ShangHai",
"L": "ShangHai",
"O": "system:kube-scheduler",
"OU": "system"
}
]
}
EOF
Notes:
the hosts list contains every kube-scheduler node IP;
CN is system:kube-scheduler and O is system:kube-scheduler; Kubernetes' built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.
4.8.2 Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
ls kube-scheduler*.pem
cp kube-scheduler*.pem /opt/kubernetes/ssl/
4.8.3 Create the kube-scheduler kubeconfig
Set cluster parameters:
kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://10.21.20.6:6443 --kubeconfig=kube-scheduler.kubeconfig
Set client authentication parameters:
kubectl config set-credentials system:kube-scheduler --client-certificate=/opt/kubernetes/ssl/kube-scheduler.pem --client-key=/opt/kubernetes/ssl/kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
Set context parameters:
kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Set the default context:
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
4.8.4 Create the config file
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \\
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \\
--leader-elect=true \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/opt/kubernetes/logs/ \\
--v=2"
EOF
4.8.5 Create the service unit
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Copy the kubeconfig into the config directory:
cp kube-scheduler.kubeconfig /opt/kubernetes/cfg/
4.8.6 Start the service
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler
4.9 Deploy kubelet
Perform the following on master1.
4.9.1 Create kubelet-bootstrap.kubeconfig
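The step hidden in the original presumably loads the bootstrap token generated in section 4.2 into a shell variable, since the commands below reference ${BOOTSTRAP_TOKEN}. One way to do it, assuming the token.csv path used earlier:
# first field of token.csv is the token itself (format: token,username,UID,group)
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /opt/kubernetes/cfg/token.csv)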
Set cluster parameters:
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.21.20.6:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
Set client authentication parameters:
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
Set context parameters:
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
Set the default context:
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
Create the role binding:
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
4.9.2 Create the config file
cat > /opt/kubernetes/cfg/kubelet.json << EOF
{
"kind": "KubeletConfiguration",
"apiVersion": "kubelet.config.k8s.io/v1beta1",
"authentication": {
"x509": {
"clientCAFile": "/opt/kubernetes/ssl/ca.pem"
},
"webhook": {
"enabled": true,
"cacheTTL": "2m0s"
},
"anonymous": {
"enabled": false
}
},
"authorization": {
"mode": "Webhook",
"webhook": {
"cacheAuthorizedTTL": "5m0s",
"cacheUnauthorizedTTL": "30s"
}
},
"address": "10.21.20.6",
"port": 10250,
"readOnlyPort": 10255,
"cgroupDriver": "systemd",
"hairpinMode": "promiscuous-bridge",
"serializeImagePulls": false,
"featureGates": {
"RotateKubeletClientCertificate": true,
"RotateKubeletServerCertificate": true
},
"clusterDomain": "cluster.local.",
"clusterDNS": ["172.135.0.2"]
}
EOF
Create the kubelet flags file:
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--bootstrap-kubeconfig=/opt/kubernetes/cfg/kubelet-bootstrap.kubeconfig \\
--cert-dir=/opt/kubernetes/ssl \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.json \\
--network-plugin=cni \\
--pod-infra-container-image=guilin2014/pause:3.2 \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2"
EOF
4.9.3 Create the service unit
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
Notes:
--hostname-override: display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: empty path; it will be generated automatically and is later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image of the container that manages the Pod network
Copy the files into place:
cp kubelet-bootstrap.kubeconfig /opt/kubernetes/cfg/
cp kubelet.json /opt/kubernetes/cfg/
cp kubelet.service /usr/lib/systemd/system/
If the master node is not going to run a kubelet, the steps above need not be executed there.
Note: change address in the kubelet.json config file to each node's own IP.
Start the service
On each worker node:
mkdir /var/lib/kubelet
cp kubelet /opt/kubernetes/bin/
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
Once the kubelet service is confirmed running on the workers, go to the master and approve the bootstrap requests. The following command shows the CSR requests submitted by each worker node:
kubectl get csr
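Approve each request with kubectl certificate approve <csr-name> (an example appears at the end of this doc). If every pending CSR is expected, they can also be approved in one go:
kubectl get csr -o name | xargs kubectl certificate approve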
4.10 Deploy kube-proxy
Create the CSR file:
cat > kube-proxy-csr.json << EOF
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "ShangHai",
"L": "ShangHai",
"O": "k8s",
"OU": "system"
}
]
}
EOF
Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*.pem
Create the kubeconfig file:
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.21.20.6:6443 --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
cp kube-proxy.kubeconfig /opt/kubernetes/cfg/
Create the kube-proxy config file:
cat > /opt/kubernetes/cfg/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 10.21.20.6
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
clusterCIDR: 172.244.0.0/16
healthzBindAddress: 10.21.20.6:10256
kind: KubeProxyConfiguration
metricsBindAddress: 10.21.20.6:10249
mode: "ipvs"
EOF
Create the flags file:
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--config=/opt/kubernetes/cfg/kube-proxy.yaml \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/opt/kubernetes/logs \\
--v=2"
EOF
Create the service unit:
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
Copy the files into place:
cp kube-proxy*pem /opt/kubernetes/ssl/
cp kube-proxy /opt/kubernetes/bin/
Note: change the address fields in kube-proxy.yaml to each node's actual IP.
Start the service:
mkdir -p /var/lib/kube-proxy
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
systemctl status kube-proxy
4.11 Deploy the network component (Calico)
wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml
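The step hidden in the original at this point is presumably the usual manifest edit: align Calico's IP pool with the cluster's Pod CIDR, which is 172.244.0.0/16 in this plan (it must match clusterCIDR in kube-proxy.yaml). In calico.yaml, uncomment and set:
# assumption: matches clusterCIDR in kube-proxy.yaml
- name: CALICO_IPV4POOL_CIDR
  value: "172.244.0.0/16"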
kubectl apply -f calico.yaml
Once the Calico pods are running, check again; every node should now show Ready:
kubectl get pods -A
kubectl get nodes
4.12 Deploy CoreDNS
Download the CoreDNS yaml template: https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed
Edit the yaml:
kubernetes cluster.local in-addr.arpa ip6.arpa
forward . /etc/resolv.conf
Set the Service clusterIP to 172.135.0.2 (the clusterDNS value in the kubelet config)
Remove `STUBDOMAINS` (around line 73)
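The downloaded file is a template (coredns.yaml.sed) with placeholders, and the edits above amount to substituting them. A sketch of the substitution, assuming the placeholder names used by the deployment repo at the time (verify them against your copy before running):
# substitute the template placeholders, then apply the result
sed -e 's/CLUSTER_DOMAIN/cluster.local/' \
    -e 's?REVERSE_CIDRS?in-addr.arpa ip6.arpa?' \
    -e 's?UPSTREAMNAMESERVER?/etc/resolv.conf?' \
    -e 's/STUBDOMAINS//' \
    -e 's/CLUSTER_DNS_IP/172.135.0.2/' \
    coredns.yaml.sed > coredns.yaml
kubectl apply -f coredns.yaml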
4.13 Add worker nodes
Prepare the environment on the new node:
mkdir /opt/kubernetes/{bin,cfg,logs,ssl} -p
mkdir -p /var/lib/kubelet
mkdir -p /var/lib/kube-proxy
yum install conntrack -y
yum install nfs-utils -y
echo 'fs.inotify.max_user_instances=8192' >> /etc/sysctl.conf && sysctl -p
Prepare the base images
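The step hidden here presumably pre-pulls the base images the node needs, at minimum the pause image referenced by --pod-infra-container-image in kubelet.conf:
docker pull guilin2014/pause:3.2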
Copy the files from the master to the new worker node:
cd /opt/kubernetes/
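The hidden copy step can be reproduced with rsync, following the same pattern used for etcd earlier; run on the master, where <new-node> is a placeholder for the new node's hostname or IP:
rsync -vaz /opt/kubernetes/{bin,cfg,ssl} root@<new-node>:/opt/kubernetes/
rsync -vaz /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@<new-node>:/usr/lib/systemd/system/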
Adjust the config:
Change the IP in kubelet.json to this node's own IP (and likewise the addresses in kube-proxy.yaml).
On the new node, delete the kubelet client certificates that were copied over (they belong to the old node; kubelet will request new ones):
rm -f /opt/kubernetes/ssl/kubelet-client-*
systemctl daemon-reload
systemctl start kube-proxy.service
systemctl enable kube-proxy.service
systemctl start kubelet.service
systemctl enable kubelet.service
Approve the certificate signing request:
kubectl get csr
kubectl certificate approve node-csr-2WWgVVtvXru5lK8US9nW_JrmGTOaRC3FDr_U9y2Nfyc
Still learning….
Keep going!