
Deploying a ZooKeeper Cluster on Kubernetes

The ZooKeeper cluster is deployed on Kubernetes as a StatefulSet, using the manifest below:

zookeeper-headless.yaml

apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None

---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  selector:
    app: zk
  ports:
  - port: 2181
    name: client

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zk-config
data:
  ensemble: "zk-0;zk-1;zk-2"
  replicas: "3"
  jvm.heap: "512M"
  tick: "2000"
  init: "10"
  sync: "5"
  client.cnxns: "60"
  snap.retain: "3"
  purge.interval: "1"

---
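# Note: PodDisruptionBudget graduated to policy/v1 in Kubernetes 1.21; policy/v1beta1 was removed in 1.25.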
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  minAvailable: 2

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: zk
        image: registry.cn-hangzhou.aliyuncs.com/xianlu/k8szk:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        resources:
          requests:
            cpu: "500m"
            memory: "512Mi"
        env:
        - name: ZK_ENSEMBLE
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: ensemble
        - name: ZK_REPLICAS
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: replicas
        - name: ZK_HEAP_SIZE
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: jvm.heap
        - name: ZK_TICK_TIME
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: tick
        - name: ZK_INIT_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: init
        - name: ZK_SYNC_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: sync
        - name: ZK_MAX_CLIENT_CNXNS
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: client.cnxns
        - name: ZK_SNAP_RETAIN_COUNT
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: snap.retain
        - name: ZK_PURGE_INTERVAL
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: purge.interval
        - name: ZK_CLIENT_PORT
          value: "2181"
        - name: ZK_SERVER_PORT
          value: "2888"
        - name: ZK_ELECTION_PORT
          value: "3888"
        command:
        - sh
        - -c
        - zkGenConfig.sh && zkServer.sh start-foreground
        readinessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        volumeMounts:
        - name: data
          mountPath: /var/lib/zookeeper
      volumes:
      - name: data
        emptyDir: {}
      securityContext:
        runAsUser: 1000
        fsGroup: 1000

#  volumeClaimTemplates:
#  - metadata:
#      name: data
#    spec:
#      accessModes: [ "ReadWriteOnce" ]
#      storageClassName: "gluster-heketi-2"
#      resources:
#        requests:
#          storage: 2Gi
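
The manifest above backs /var/lib/zookeeper with an emptyDir volume, so the ZooKeeper data and transaction log are lost whenever a pod is deleted or rescheduled to another node. For durable storage, remove the emptyDir volume from the pod template and enable the commented volumeClaimTemplates section instead, so that each replica gets its own PersistentVolumeClaim. A minimal sketch, assuming a StorageClass named gluster-heketi-2 exists in your cluster (taken from the comment above; substitute your own):

  volumeClaimTemplates:
  - metadata:
      name: data                              # must match the volumeMounts name in the container
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "gluster-heketi-2"    # replace with a StorageClass available in your cluster
      resources:
        requests:
          storage: 2Gi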

PodDisruptionBudget:
Kubernetes can create a PodDisruptionBudget (PDB) object for each application. A PDB limits the number of pods of a replicated application that may be down at the same time due to voluntary disruptions (for example, node drains or planned upgrades).

A PodDisruptionBudget is configured with one of two parameters:

minAvailable: the minimum number of available pods, i.e. the minimum number of the application's pods that must remain in the Running state, given either as an absolute count or as a percentage of the total number of pods.

maxUnavailable: the maximum number of unavailable pods, i.e. the maximum number of the application's pods that may be out of service, given either as an absolute count or as a percentage of the total number of pods.

Note that minAvailable and maxUnavailable cannot be configured together; only one of them may be set.
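
For a three-member ensemble, minAvailable: 2 (as in the manifest above) allows at most one pod to be voluntarily disrupted at any time. The same budget could be expressed with maxUnavailable instead; a sketch, not part of the file above:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1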

Deploy:

kubectl apply -f zookeeper-headless.yaml
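
Because the StatefulSet uses podManagementPolicy: OrderedReady, the pods are created one at a time, each waiting for the previous one to become Ready. You can watch this from a second terminal, for example:

kubectl get pods -w -l app=zk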

kubectl get svc

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP             25d
zk-cs        ClusterIP   10.107.243.77   <none>        2181/TCP            29s
zk-hs        ClusterIP   None            <none>        2888/TCP,3888/TCP   29s

kubectl get pod

NAME   READY   STATUS    RESTARTS   AGE
zk-0   1/1     Running   0          92s
zk-1   1/1     Running   0          73s
zk-2   1/1     Running   0          48s
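
Each replica also gets a stable DNS name of the form <pod>.<headless-service>.<namespace>.svc.cluster.local through the zk-hs headless service; these are the names used in the server.N entries of zoo.cfg below. Assuming the image ships the standard hostname utility, you can confirm them with:

for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done

This should print zk-0.zk-hs.default.svc.cluster.local, zk-1.zk-hs.default.svc.cluster.local and zk-2.zk-hs.default.svc.cluster.local.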

Check the ZooKeeper configuration:

kubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg

#This file was autogenerated by k8szk DO NOT EDIT
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/log
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=60
minSessionTimeout= 4000
maxSessionTimeout= 40000
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888
server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888
server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888

Check the cluster status:

kubectl exec zk-0 -- zkServer.sh status

ZooKeeper JMX enabled by default
Using config: /usr/bin/../etc/zookeeper/zoo.cfg
Mode: follower

kubectl exec zk-1 -- zkServer.sh status

ZooKeeper JMX enabled by default
Using config: /usr/bin/../etc/zookeeper/zoo.cfg
Mode: leader

kubectl exec zk-2 -- zkServer.sh status

ZooKeeper JMX enabled by default
Using config: /usr/bin/../etc/zookeeper/zoo.cfg
Mode: follower

As you can see, zk-1 is the cluster leader, while zk-0 and zk-2 are followers.

Each replica's myid is its pod ordinal plus one, which you can confirm by reading the myid file on each server:

kubectl exec zk-0 -- cat /var/lib/zookeeper/data/myid
1

kubectl exec zk-1 -- cat /var/lib/zookeeper/data/myid
2

kubectl exec zk-2 -- cat /var/lib/zookeeper/data/myid
3
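
To check fault tolerance (a quick sketch, not required for the deployment), you can delete the current leader and let the StatefulSet recreate it; the remaining two members keep a quorum and elect a new leader while zk-1 comes back and rejoins as a follower:

kubectl delete pod zk-1
kubectl get pod -w -l app=zk          # wait until zk-1 is Running and Ready again
for i in 0 1 2; do kubectl exec zk-$i -- zkServer.sh status; done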

Test the cluster:

kubectl exec zk-0 -- zkCli.sh create /hello world

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
Created /hello

kubectl exec zk-1 -- zkCli.sh get /hello

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
world
cZxid = 0x100000002
ctime = Sat Jun 06 07:11:23 UTC 2020
mZxid = 0x100000002
mtime = Sat Jun 06 07:11:23 UTC 2020
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0

kubectl exec zk-2 -- zkCli.sh get /hello

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
world
cZxid = 0x100000002
ctime = Sat Jun 06 07:11:23 UTC 2020
mZxid = 0x100000002
mtime = Sat Jun 06 07:11:23 UTC 2020
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0

The data created on zk-0 is visible on every server in the cluster.
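
From inside the cluster, applications would normally connect through the zk-cs client Service rather than to an individual pod. As a quick check (assuming the default namespace, where the Service name zk-cs resolves directly):

kubectl exec zk-0 -- zkCli.sh -server zk-cs:2181 get /hello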
