A Rook tutorial: quickly orchestrating Ceph on Kubernetes
Installation
```
git clone https://github.com/rook/rook
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f operator.yaml
```
Check that the operator came up successfully:
```
[root@dev-86-201 ~]# kubectl get pod -n rook-ceph-system
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-agent-5z6p7                 1/1     Running   0          88m
rook-ceph-agent-6rj7l                 1/1     Running   0          88m
rook-ceph-agent-8qfpj                 1/1     Running   0          88m
rook-ceph-agent-xbhzh                 1/1     Running   0          88m
rook-ceph-operator-67f4b8f67d-tsnf2   1/1     Running   0          88m
rook-discover-5wghx                   1/1     Running   0          88m
rook-discover-lhwvf                   1/1     Running   0          88m
rook-discover-nl5m2                   1/1     Running   0          88m
rook-discover-qmbx7                   1/1     Running   0          88m
```
Then create the Ceph cluster:
```
kubectl create -f cluster.yaml
```
Check the Ceph cluster:
```
[root@dev-86-201 ~]# kubectl get pod -n rook-ceph
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-mgr-a-8649f78d9b-jklbv   1/1     Running   0          64m
rook-ceph-mon-a-5d7fcfb6ff-2wq9l   1/1     Running   0          81m
rook-ceph-mon-b-7cfcd567d8-lkqff   1/1     Running   0          80m
rook-ceph-mon-d-65cd79df44-66rgz   1/1     Running   0          79m
rook-ceph-osd-0-56bd7545bd-5k9xk   1/1     Running   0          63m
rook-ceph-osd-1-77f56cd549-7rm4l   1/1     Running   0          63m
rook-ceph-osd-2-6cf58ddb6f-wkwp6   1/1     Running   0          63m
rook-ceph-osd-3-6f8b78c647-8xjzv   1/1     Running   0          63m
```
Parameter notes (cluster.yaml):
```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # For the latest ceph images, see https://hub.docker.com/r/ceph/ceph/tags
    image: ceph/ceph:v13.2.2-20181023
  dataDirHostPath: /var/lib/rook   # data directory on the host
  mon:
    count: 3
    allowMultiplePerNode: true
  dashboard:
    enabled: true
  storage:
    useAllNodes: true
    useAllDevices: false
    config:
      databaseSizeMB: "1024"
      journalSizeMB: "1024"
```
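A note on the mon settings: `allowMultiplePerNode: true` lets several monitors land on the same node, which is convenient on a small test cluster but defeats the redundancy that running three monitors is supposed to provide. For a production cluster you would typically keep the mons on separate nodes; a sketch of that variant of the same section (my suggestion, not from the original setup):

```yaml
  mon:
    count: 3
    allowMultiplePerNode: false   # one mon per node, so a single node failure cannot break quorum
```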
Access the Ceph dashboard:
```
[root@dev-86-201 ~]# kubectl get svc -n rook-ceph
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
rook-ceph-mgr             ClusterIP   10.98.183.33     <none>        9283/TCP         66m
rook-ceph-mgr-dashboard   NodePort    10.103.84.48     <none>        8443:31631/TCP   66m   # change this service to NodePort
rook-ceph-mon-a           ClusterIP   10.99.71.227     <none>        6790/TCP         83m
rook-ceph-mon-b           ClusterIP   10.110.245.119   <none>        6790/TCP         82m
rook-ceph-mon-d           ClusterIP   10.101.79.159    <none>        6790/TCP         81m
```
Then visit https://10.1.86.201:31631 (node IP plus the NodePort).
The management account is `admin`; get its login password with:
```
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o yaml | grep "password:" | awk '{print $2}' | base64 --decode
```
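The pipeline above extracts the `password:` field from the Secret and base64-decodes it, since Kubernetes stores Secret values base64-encoded. As a quick illustration of the decode step with a made-up value (the real password comes only from the kubectl command above):

```shell
# Hypothetical encoded value for illustration; the real one lives in
# the rook-ceph-dashboard-password Secret.
encoded="cGFzc3dvcmQxMjM="
echo "$encoded" | base64 --decode   # prints: password123
```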
Usage
Create a pool
```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool   # the operator watches for this and creates the pool; it will also show up in the dashboard
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block   # a StorageClass; reference it in a PVC to get dynamically provisioned PVs
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool
  # The value of "clusterNamespace" MUST be the same as the one in which your rook cluster exists
  clusterNamespace: rook-ceph
  # Specify the filesystem type of the volume. If not specified, it will use `ext4`.
  fstype: xfs
# Optional, default reclaimPolicy is "Delete". Other options are: "Retain", "Recycle" as documented in https://kubernetes.io/docs/concepts/storage/storage-classes/
reclaimPolicy: Retain
```
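With the pool and StorageClass in place, any PVC that names `rook-ceph-block` gets an RBD-backed volume provisioned automatically. A minimal sketch (the `test-pvc` name is made up for illustration):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc                      # hypothetical name, for illustration only
spec:
  storageClassName: rook-ceph-block   # the StorageClass defined above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Apply it with `kubectl create -f`, and a matching PV should appear and bind within a few seconds.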
Create a PVC
In the cluster/examples/kubernetes directory, the project ships a WordPress example that you can run directly:
```
kubectl create -f mysql.yaml
kubectl create -f wordpress.yaml
```
Check the PVs and PVCs:
```
[root@dev-86-201 ~]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-a910f8c2-1ee9-11e9-84fc-becbfc415cde   20Gi       RWO            rook-ceph-block   144m
wp-pv-claim      Bound    pvc-af2dfbd4-1ee9-11e9-84fc-becbfc415cde   20Gi       RWO            rook-ceph-block   144m
[root@dev-86-201 ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS      REASON   AGE
pvc-a910f8c2-1ee9-11e9-84fc-becbfc415cde   20Gi       RWO            Retain           Bound    default/mysql-pv-claim   rook-ceph-block            145m
pvc-af2dfbd4-1ee9-11e9-84fc-becbfc415cde   20Gi       RWO            Retain           Bound    default/wp-pv-claim      rook-ceph-block            145m
```
Look at the YAML (mysql.yaml):
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  storageClassName: rook-ceph-block   # reference the StorageClass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi                   # request a 20Gi volume
...
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim   # reference the PVC defined above
```
It's that simple.
To access WordPress, change its Service to NodePort; the example ships with type LoadBalancer:
```
kubectl edit svc wordpress

[root@dev-86-201 kubernetes]# kubectl get svc
NAME        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
wordpress   NodePort   10.109.30.99   <none>        80:30130/TCP   148m
```
Summary
Distributed storage plays a critical role in container clusters. A core idea of running a container cluster is to treat the cluster as a single whole; if you still care about individual hosts, for example scheduling to a specific node or mounting a specific node's directory, you will never fully unlock the power of the cloud. Once compute and storage are separated, workloads can truly float between nodes, which is a huge boon for cluster maintenance.
For example, when machines age out of warranty and need to be decommissioned, a cloud-native architecture with no single point of failure lets you simply drain the node and take it offline, without caring what runs on it; both stateful and stateless workloads recover automatically. The biggest remaining challenge is probably the performance of distributed storage. Where performance requirements are not demanding, I strongly recommend this compute-storage-separated architecture.
For discussion, join QQ group: 98488045