Ceph Learning Notes 2: Using a Ceph Storage Backend with Kolla-Ansible
Environment
- For Kolla-Ansible usage, see "Deploying OpenStack Pike on a Single CentOS 7 Node with Kolla-Ansible".
- For deploying the Ceph services, see "Ceph Learning Notes 1: Multi-Node Deployment of the Mimic Release".
Configuring Ceph
- Log in as the osdev user:
$ ssh osdev@osdev01
$ cd /opt/ceph/deploy/
Creating the pools
Creating the image pool
- Used to store Glance images:
$ ceph osd pool create images 32 32
pool 'images' created
Creating the volume pools
- Used to store Cinder volumes:
$ ceph osd pool create volumes 32 32
pool 'volumes' created
- Used to store Cinder volume backups:
$ ceph osd pool create backups 32 32
pool 'backups' created
Creating the VM pool
- Used to store virtual machine system disks:
$ ceph osd pool create vms 32 32
pool 'vms' created
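- The two 32s passed to each create command are the pool's pg_num and pgp_num. Since Luminous, Ceph also expects every pool to be tagged with the application that uses it and warns otherwise, so it is worth marking these four pools for RBD; a small sketch using the pool names above:
$ for pool in images volumes backups vms; do ceph osd pool application enable ${pool} rbd; done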
Listing the pools
$ ceph osd lspools
1 .rgw.root
2 default.rgw.control
3 default.rgw.meta
4 default.rgw.log
6 rbd
8 images
9 volumes
10 backups
11 vms
Creating the users
Listing users
- List all users:
$ ceph auth list installed auth entries: mds.osdev01 key: AQCabn5b18tHExAAkZ6Aq3IQ4/aqYEBBey5O3Q== caps: [mds] allow caps: [mon] allow profile mds caps: [osd] allow rwx mds.osdev02 key: AQCbbn5bcq4yJRAAUfhoqPNfyp2m/ORu/7vHBA== caps: [mds] allow caps: [mon] allow profile mds caps: [osd] allow rwx mds.osdev03 key: AQCcbn5bTAIdORAApGu9NJvC3AmS+L3EWXLMdw== caps: [mds] allow caps: [mon] allow profile mds caps: [osd] allow rwx osd.0 key: AQCyJH5bG2ZBHRAAsDaLHcoOxv/mLCHwITA7JQ== caps: [mgr] allow profile osd caps: [mon] allow profile osd caps: [osd] allow * osd.1 key: AQDTJH5bjvQ8HxAA4cyLttvZwiqFq1srFoSXWg== caps: [mgr] allow profile osd caps: [mon] allow profile osd caps: [osd] allow * osd.2 key: AQD9JH5bbPi6IRAA7DbwaCh5JBaa6RfWPoe9VQ== caps: [mgr] allow profile osd caps: [mon] allow profile osd caps: [osd] allow * client.admin key: AQA1In5bZkxwGBAA9bBLE5/NKstK1CRMzfGgKQ== caps: [mds] allow * caps: [mgr] allow * caps: [mon] allow * caps: [osd] allow * client.bootstrap-mds key: AQA1In5boIRwGBAAgj5OccvTGYkuB+btlgL0BQ== caps: [mon] allow profile bootstrap-mds client.bootstrap-mgr key: AQA1In5bS6pwGBAA379v3LXJrdURLmA1gnTaLQ== caps: [mon] allow profile bootstrap-mgr client.bootstrap-osd key: AQA1In5bnMpwGBAAXohUfa4rGS0Rd2weMl4dPg== caps: [mon] allow profile bootstrap-osd client.bootstrap-rbd key: AQA1In5buelwGBAANQSalrSzH3yslSc4rYPu1g== caps: [mon] allow profile bootstrap-rbd client.bootstrap-rgw key: AQA1In5b0ghxGBAAIGK3WmBSkKZMnSEfvnEQow== caps: [mon] allow profile bootstrap-rgw client.rgw.osdev01 key: AQDZbn5b6aChEBAAzRuX4UWlxyws+aX1i+D26Q== caps: [mon] allow rw caps: [osd] allow rwx client.rgw.osdev02 key: AQDabn5bypCDJBAAt18L5ppG5lEg6NkGQLYs5w== caps: [mon] allow rw caps: [osd] allow rwx client.rgw.osdev03 key: AQDbbn5bbEVNNBAArX+/AKQu9q3hCRn/05Ya3A== caps: [mon] allow rw caps: [osd] allow rwx mgr.osdev01 key: AQDPIn5beqPTORAAEzcX3fMCCclLR2RiPyvugw== caps: [mds] allow * caps: [mon] allow profile mgr caps: [osd] allow * mgr.osdev02 key: AQDRIn5bLRVqDxAA/yWXO8pX6fQynJNyCcoNww== caps: [mds] allow * caps: [mon] allow profile mgr caps: [osd] allow * mgr.osdev03 key: AQDSIn5bGyrhHxAAvtAEOveovRxmdDlF45i2Cg== caps: [mds] allow * caps: [mon] allow profile mgr caps: [osd] allow *
- Show a specific user:
$ ceph auth get client.admin
exported keyring for client.admin
[client.admin]
    key = AQA1In5bZkxwGBAA9bBLE5/NKstK1CRMzfGgKQ==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
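- When only the secret itself is needed rather than the whole keyring, ceph auth get-key prints just the key, for example for the admin user shown above:
$ ceph auth get-key client.admin
AQA1In5bZkxwGBAA9bBLE5/NKstK1CRMzfGgKQ==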
Creating the Glance user
- Create the glance user and grant it access to the images pool:
$ ceph auth get-or-create client.glance
[client.glance]
    key = AQBQq4NboVHdGxAAlfK2WJkiZMolluATpvOviQ==
$ ceph auth caps client.glance mon 'allow r' osd 'allow rwx pool=images'
updated caps for client.glance
- View the glance user and save its keyring file:
$ ceph auth get client.glance
exported keyring for client.glance
[client.glance]
    key = AQBQq4NboVHdGxAAlfK2WJkiZMolluATpvOviQ==
    caps mon = "allow r"
    caps osd = "allow rwx pool=images"
$ ceph auth get client.glance -o /opt/ceph/deploy/ceph.client.glance.keyring
exported keyring for client.glance
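- The two-step sequence above (get-or-create, then caps) can also be collapsed into a single call, since ceph auth get-or-create accepts the capabilities and an output file directly; a sketch of the equivalent one-liner:
$ ceph auth get-or-create client.glance mon 'allow r' osd 'allow rwx pool=images' -o /opt/ceph/deploy/ceph.client.glance.keyring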
Creating the Cinder users
- Create the cinder-volume user and grant it access to the volumes pool:
$ ceph auth get-or-create client.cinder-volume
[client.cinder-volume]
    key = AQBKt4NbqROVIxAACnH+pVv141+wOpgWj14RjA==
$ ceph auth caps client.cinder-volume mon 'allow r' osd 'allow rwx pool=volumes'
updated caps for client.cinder-volume
- View the cinder-volume user and save its keyring file:
$ ceph auth get client.cinder-volume
exported keyring for client.cinder-volume
[client.cinder-volume]
    key = AQBKt4NbqROVIxAACnH+pVv141+wOpgWj14RjA==
    caps mon = "allow r"
    caps osd = "allow rwx pool=volumes"
$ ceph auth get client.cinder-volume -o /opt/ceph/deploy/ceph.client.cinder-volume.keyring
exported keyring for client.cinder-volume
- Create the cinder-backup user and grant it access to the volumes and backups pools:
$ ceph auth get-or-create client.cinder-backup
[client.cinder-backup]
    key = AQBit4NbN0rvLRAAYoa4SBM0qvwY8kPo5Md0og==
$ ceph auth caps client.cinder-backup mon 'allow r' osd 'allow rwx pool=volumes, allow rwx pool=backups'
updated caps for client.cinder-backup
- View the cinder-backup user and save its keyring file:
$ ceph auth get client.cinder-backup
exported keyring for client.cinder-backup
[client.cinder-backup]
    key = AQBit4NbN0rvLRAAYoa4SBM0qvwY8kPo5Md0og==
    caps mon = "allow r"
    caps osd = "allow rwx pool=volumes, allow rwx pool=backups"
$ ceph auth get client.cinder-backup -o /opt/ceph/deploy/ceph.client.cinder-backup.keyring
exported keyring for client.cinder-backup
Creating the Nova user
- Create the nova user and grant it access to the vms pool:
$ ceph auth get-or-create client.nova
[client.nova]
    key = AQD7tINb4A58GRAA7CsAM9EAwFwtIpTdQFGO7A==
$ ceph auth caps client.nova mon 'allow r' osd 'allow rwx pool=vms'
updated caps for client.nova
- View the nova user and save its keyring file:
$ ceph auth get client.nova
exported keyring for client.nova
[client.nova]
    key = AQD7tINb4A58GRAA7CsAM9EAwFwtIpTdQFGO7A==
    caps mon = "allow r"
    caps osd = "allow rwx pool=vms"
$ ceph auth get client.nova -o /opt/ceph/deploy/ceph.client.nova.keyring
exported keyring for client.nova
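- The 'allow rwx pool=...' capabilities above use the plain style; Ceph Luminous and Mimic also ship an rbd capability profile, which the upstream Ceph/OpenStack guides prefer because it bundles the extra permissions RBD clients may need. A hedged alternative for the nova user (the same pattern applies to the glance and cinder users):
$ ceph auth caps client.nova mon 'profile rbd' osd 'profile rbd pool=vms'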
Configuring Kolla-Ansible
- Log in to the osdev01 deployment node as root and set up the environment variables:
$ ssh root@osdev01
$ export KOLLA_ROOT=/opt/kolla
$ cd ${KOLLA_ROOT}/myconfig
Global configuration
- Edit globals.yml and disable Kolla's own Ceph deployment (the external cluster built earlier is used instead):
enable_ceph: "no"
- Enable the Cinder service and turn on the Ceph backends for Glance, Cinder and Nova:
enable_cinder: "yes"
glance_backend_ceph: "yes"
cinder_backend_ceph: "yes"
nova_backend_ceph: "yes"
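- Depending on the Kolla-Ansible release there is also a global that selects the Cinder backup driver; the variable name below is taken from Kolla's defaults as I remember them, so treat it as an assumption and check your release's globals.yml before relying on it:
cinder_backup_driver: "ceph"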
Configuring Glance
- Configure Glance to use the glance user and the Ceph images pool:
$ mkdir -pv config/glance
mkdir: created directory "config/glance"
$ vi config/glance/glance-api.conf
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
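- Optionally, Glance can also expose the image's RBD location so that clients able to do so clone raw images copy-on-write instead of downloading them. This is an addition to the original setup and mainly helps with raw images (the cirros image used later is qcow2, so it would not benefit):
$ vi config/glance/glance-api.conf
[DEFAULT]
show_image_direct_url = True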
- Add the Ceph client configuration for Glance and the glance user's keyring file:
$ vi config/glance/ceph.conf
[global]
fsid = 383237bd-becf-49d5-9bd6-deb0bc35ab2a
mon_initial_members = osdev01, osdev02, osdev03
mon_host = 172.29.101.166,172.29.101.167,172.29.101.168
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
$ cp -v /opt/ceph/deploy/ceph.client.glance.keyring config/glance/ceph.client.glance.keyring
"/opt/ceph/deploy/ceph.client.glance.keyring" -> "config/glance/ceph.client.glance.keyring"
Configuring Cinder
- Configure the Cinder volume service to use the cinder-volume user with the volumes pool, and the Cinder backup service to use the cinder-backup user with the backups pool:
$ mkdir -pv config/cinder/
mkdir: created directory "config/cinder/"
$ vi config/cinder/cinder-volume.conf
[DEFAULT]
enabled_backends=rbd-1
[rbd-1]
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder-volume
backend_host=rbd:volumes
rbd_pool=volumes
volume_backend_name=rbd-1
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}
$ vi config/cinder/cinder-backup.conf
[DEFAULT]
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool=backups
backup_driver = cinder.backup.drivers.ceph
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
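- The rbd_secret_uuid line is a Jinja2 reference that Kolla-Ansible resolves from the generated passwords file when it renders the final cinder.conf, so the deployed service ends up with a concrete UUID. To see the value used by this deployment (paths as set up above):
$ grep cinder_rbd_secret_uuid ${KOLLA_ROOT}/myconfig/passwords.yml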
- Add the Ceph client configuration and keyring files for the Cinder volume and backup services:
$ cp config/glance/ceph.conf config/cinder/ceph.conf
$ mkdir -pv config/cinder/cinder-backup/ config/cinder/cinder-volume/
mkdir: created directory "config/cinder/cinder-backup/"
mkdir: created directory "config/cinder/cinder-volume/"
$ cp -v /opt/ceph/deploy/ceph.client.cinder-volume.keyring config/cinder/cinder-backup/ceph.client.cinder-volume.keyring
"/opt/ceph/deploy/ceph.client.cinder-volume.keyring" -> "config/cinder/cinder-backup/ceph.client.cinder-volume.keyring"
$ cp -v /opt/ceph/deploy/ceph.client.cinder-backup.keyring config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
"/opt/ceph/deploy/ceph.client.cinder-backup.keyring" -> "config/cinder/cinder-backup/ceph.client.cinder-backup.keyring"
$ cp -v /opt/ceph/deploy/ceph.client.cinder-volume.keyring config/cinder/cinder-volume/ceph.client.cinder-volume.keyring
"/opt/ceph/deploy/ceph.client.cinder-volume.keyring" -> "config/cinder/cinder-volume/ceph.client.cinder-volume.keyring"
Configuring Nova
- Configure Nova to use the nova user and the Ceph vms pool:
$ vi config/nova/nova-compute.conf
[libvirt]
images_rbd_pool=vms
images_type=rbd
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=nova
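- A tuning option often recommended together with RBD-backed Nova disks, though not part of the original walkthrough, is to let libvirt use writeback caching for network disks; librbd honours flush requests, which keeps this safe for the guest:
$ vi config/nova/nova-compute.conf
[libvirt]
disk_cachemodes = "network=writeback"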
- Add the Ceph client configuration for Nova and the nova user's keyring file; the cinder-volume keyring is also copied in as ceph.client.cinder.keyring so that nova-compute can attach Cinder volumes:
$ cp -v config/glance/ceph.conf config/nova/ceph.conf
"config/glance/ceph.conf" -> "config/nova/ceph.conf"
$ cp -v /opt/ceph/deploy/ceph.client.nova.keyring config/nova/ceph.client.nova.keyring
"/opt/ceph/deploy/ceph.client.nova.keyring" -> "config/nova/ceph.client.nova.keyring"
$ cp -v /opt/ceph/deploy/ceph.client.cinder-volume.keyring config/nova/ceph.client.cinder.keyring
"/opt/ceph/deploy/ceph.client.cinder-volume.keyring" -> "config/nova/ceph.client.cinder.keyring"
Deployment and testing
Starting the deployment
- Edit the deployment wrapper script osdev.sh:
#!/bin/bash

set -uexv

usage() {
    echo -e "usage : \n$0 <action>"
    echo -e "\$1 action"
}

if [ $# -lt 1 ]; then
    usage
    exit 1
fi

# Run kolla-ansible against this deployment's config, passwords and inventory.
${KOLLA_ROOT}/kolla-ansible/tools/kolla-ansible \
    --configdir ${KOLLA_ROOT}/myconfig \
    --passwords ${KOLLA_ROOT}/myconfig/passwords.yml \
    --inventory ${KOLLA_ROOT}/myconfig/mynodes.conf \
    $1
- Make it executable:
$ chmod a+x osdev.sh
- Deploy the OpenStack cluster (the commented-out destroy action at the end tears everything down again):
$ ./osdev.sh bootstrap-servers
$ ./osdev.sh prechecks
$ ./osdev.sh pull
$ ./osdev.sh deploy
$ ./osdev.sh post-deploy
# ./osdev.sh "destroy --yes-i-really-really-mean-it"
- Check the overview of the deployed services:
$ openstack service list +----------------------------------+-------------+----------------+ | ID| Name| Type| +----------------------------------+-------------+----------------+ | 304c9c5073f14f4a97ca1c3cf5e1b49e | neutron| network| | 46de4440a5cf4a5697fa94b2d0424ba9 | heat| orchestration| | 60b46b491ce7403aaec0c064384dde49 | heat-cfn| cloudformation | | 7726ab5d41c5450d954f073f1a9aff28 | cinderv2| volumev2| | 7a4bd5fc12904cc7b5c3810412f98c57 | gnocchi| metric| | 7ae6f98018fb4d509e862e45ebf10145 | glance| image| | a0ec333149284c09ac0e157753205fd6 | nova| compute| | b15e90c382864723945b15c37d3317a6 | placement| placement| | b5eaa49c50d64316b583eb1c0c4f9ce2 | cinderv3| volumev3| | c6474640f5d9424da0ec51c70c1e6e01 | nova_legacy | compute_legacy | | db27eb8524be4db3be12b9dd0dab16b8 | keystone| identity| | edf5c8b894a74a69b65bb49d8e014fff | cinder| volume| +----------------------------------+-------------+----------------+ $ openstack volume service list +------------------+-------------------+------+---------+-------+----------------------------+ | Binary| Host| Zone | Status| State | Updated At| +------------------+-------------------+------+---------+-------+----------------------------+ | cinder-scheduler | osdev02| nova | enabled | up| 2018-08-27T11:33:27.000000 | | cinder-volume| rbd:volumes@rbd-1 | nova | enabled | up| 2018-08-27T11:33:18.000000 | | cinder-backup| osdev02| nova | enabled | up| 2018-08-27T11:33:17.000000 | +------------------+-------------------+------+---------+-------+----------------------------+
Initializing the environment
- Check the initial state of the RBD pools; all of them are empty:
$ rbd -p images ls
$ rbd -p volumes ls
$ rbd -p vms ls
- Source the environment variables and initialize the OpenStack environment:
$ . ${KOLLA_ROOT}/myconfig/admin-openrc.sh
$ ${KOLLA_ROOT}/myconfig/init-runonce
- Check the newly added image:
$ openstack image list +--------------------------------------+--------+--------+ | ID| Name| Status | +--------------------------------------+--------+--------+ | 293b25bb-30be-4839-b4e2-1dba3c43a56a | cirros | active | +--------------------------------------+--------+--------+ $ openstack image show 293b25bb-30be-4839-b4e2-1dba3c43a56a +------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field| Value| +------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+ | checksum| 443b7623e27ecf03dc9e01ee93f67afe| | container_format | bare| | created_at| 2018-08-27T11:25:29Z| | disk_format| qcow2| | file| /v2/images/293b25bb-30be-4839-b4e2-1dba3c43a56a/file| | id| 293b25bb-30be-4839-b4e2-1dba3c43a56a| | min_disk| 0| | min_ram| 0| | name| cirros| | owner| 68ada1726a864e2081a56be0a2dca3a0| | properties| locations='[{u'url': u'rbd://383237bd-becf-49d5-9bd6-deb0bc35ab2a/images/293b25bb-30be-4839-b4e2-1dba3c43a56a/snap', u'metadata': {}}]', os_type='linux' | | protected| False| | schema| /v2/schemas/image| | size| 12716032| | status| active| | tags|| | updated_at| 2018-08-27T11:25:30Z| | virtual_size| None| | visibility| public| +------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
- Check how the RBD pools changed: the image is stored in the images pool and has one snapshot:
$ rbd -p images ls
293b25bb-30be-4839-b4e2-1dba3c43a56a
$ rbd -p volumes ls
$ rbd -p vms ls
$ rbd -p images info 293b25bb-30be-4839-b4e2-1dba3c43a56a
rbd image '293b25bb-30be-4839-b4e2-1dba3c43a56a':
    size 12 MiB in 2 objects
    order 23 (8 MiB objects)
    id: 178f4008d95
    block_name_prefix: rbd_data.178f4008d95
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features:
    flags:
    create_timestamp: Mon Aug 27 19:25:29 2018
$ rbd -p images snap list 293b25bb-30be-4839-b4e2-1dba3c43a56a
SNAPID NAME SIZE   TIMESTAMP
     6 snap 12 MiB Mon Aug 27 19:25:30 2018
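- The snapshot named snap is created and protected by Glance's RBD store so that the image can act as a clone parent. Assuming current rbd CLI behaviour, the protection flag can be checked by querying the snapshot directly:
$ rbd info images/293b25bb-30be-4839-b4e2-1dba3c43a56a@snap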
Creating a virtual machine
- Create a virtual machine:
$ openstack server create --image cirros --flavor m1.tiny --key-name mykey --nic net-id=9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 demo1 +-------------------------------------+-----------------------------------------------+ | Field| Value| +-------------------------------------+-----------------------------------------------+ | OS-DCF:diskConfig| MANUAL| | OS-EXT-AZ:availability_zone|| | OS-EXT-SRV-ATTR:host| None| | OS-EXT-SRV-ATTR:hypervisor_hostname | None| | OS-EXT-SRV-ATTR:instance_name|| | OS-EXT-STS:power_state| NOSTATE| | OS-EXT-STS:task_state| scheduling| | OS-EXT-STS:vm_state| building| | OS-SRV-USG:launched_at| None| | OS-SRV-USG:terminated_at| None| | accessIPv4|| | accessIPv6|| | addresses|| | adminPass| 65cVBJ7S6yaD| | config_drive|| | created| 2018-08-27T11:29:03Z| | flavor| m1.tiny (1)| | hostId|| | id| 309f1364-4d58-413d-a865-dfc37ff04308| | image| cirros (293b25bb-30be-4839-b4e2-1dba3c43a56a) | | key_name| mykey| | name| demo1| | progress| 0| | project_id| 68ada1726a864e2081a56be0a2dca3a0| | properties|| | security_groups| name='default'| | status| BUILD| | updated| 2018-08-27T11:29:03Z| | user_id| c7111728fbbd4fd79bdd2b60e7d7cb42| | volumes_attached|| +-------------------------------------+-----------------------------------------------+ $ openstack server show 309f1364-4d58-413d-a865-dfc37ff04308 +-------------------------------------+----------------------------------------------------------+ | Field| Value| +-------------------------------------+----------------------------------------------------------+ | OS-DCF:diskConfig| MANUAL| | OS-EXT-AZ:availability_zone| nova| | OS-EXT-SRV-ATTR:host| osdev03| | OS-EXT-SRV-ATTR:hypervisor_hostname | osdev03| | OS-EXT-SRV-ATTR:instance_name| instance-00000001| | OS-EXT-STS:power_state| Running| | OS-EXT-STS:task_state| None| | OS-EXT-STS:vm_state| active| | OS-SRV-USG:launched_at| 2018-08-27T11:29:16.000000| | OS-SRV-USG:terminated_at| None| | accessIPv4|| | accessIPv6|| | addresses| demo-net=10.0.0.11| | config_drive|| | created| 2018-08-27T11:29:03Z| | flavor| m1.tiny (1)| | hostId| 4e345dd9f770f63f80d3eafe97c20d97746e890b2971a8398e26db86 | | id| 309f1364-4d58-413d-a865-dfc37ff04308| | image| cirros (293b25bb-30be-4839-b4e2-1dba3c43a56a)| | key_name| mykey| | name| demo1| | progress| 0| | project_id| 68ada1726a864e2081a56be0a2dca3a0| | properties|| | security_groups| name='default'| | status| ACTIVE| | updated| 2018-08-27T11:29:16Z| | user_id| c7111728fbbd4fd79bdd2b60e7d7cb42| | volumes_attached|| +-------------------------------------+----------------------------------------------------------+
- The virtual machine created a disk image in the vms pool:
$ rbd -p images ls
293b25bb-30be-4839-b4e2-1dba3c43a56a
$ rbd -p volumes ls
$ rbd -p backups ls
$ rbd -p vms ls
309f1364-4d58-413d-a865-dfc37ff04308_disk
- Log in to the node hosting the virtual machine. The VM's system disk is the image just created in the vms pool, and the qemu process arguments show that qemu accesses the RBD block device directly through Ceph's librbd library:
$ ssh osdev@osdev03 $ sudo docker exec -it nova_libvirt virsh list IdNameState ---------------------------------------------------- 1instance-00000001running $ sudo docker exec -it nova_libvirt virsh dumpxml 1 ... <disk type='network' device='disk'> <driver name='qemu' type='raw' cache='none'/> <auth username='nova'> <secret type='ceph' uuid='2ea5db42-c8f1-4601-927c-3c64426907aa'/> </auth> <source protocol='rbd' name='vms/309f1364-4d58-413d-a865-dfc37ff04308_disk'> <host name='172.29.101.166' port='6789'/> <host name='172.29.101.167' port='6789'/> <host name='172.29.101.168' port='6789'/> </source> <target dev='vda' bus='virtio'/> <alias name='virtio-disk0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk> ... $ ps -aux | grep qemu 4243626789094.60.0 1341144 171404 ?Sl19:290:08 /usr/libexec/qemu-kvm -name guest=instance-00000001,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-instance-00000001/master-key.aes -machine pc-i440fx-rhel7.4.0,accel=kvm,usb=off,dump-guest-core=off -cpu Skylake-Client-IBRS,ss=on,hypervisor=on,tsc_adjust=on,avx512f=on,avx512dq=on,clflushopt=on,clwb=on,avx512cd=on,avx512bw=on,avx512vl=on,pku=on,stibp=on,pdpe1gb=on -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 309f1364-4d58-413d-a865-dfc37ff04308 -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=17.0.2,serial=74bf926c-70b7-03df-b211-d21d6016081a,uuid=309f1364-4d58-413d-a865-dfc37ff04308,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-instance-00000001/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -object secret,id=virtio-disk0-secret0,data=zNy84nlNYigA4vjbuOxcGQa1/hh8w28i/WoJbO1Xsl4=,keyid=masterKey0,iv=OhX+FApyFyq2XLWq0ff/Ew==,format=base64 -drive file=rbd:vms/309f1364-4d58-413d-a865-dfc37ff04308_disk:id=nova:auth_supported=cephx\;none:mon_host=172.29.101.166\:6789\;172.29.101.167\:6789\;172.29.101.168\:6789,file.password-secret=virtio-disk0-secret0,format=raw,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=79,id=hostnet0,vhost=on,vhostfd=80 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:04:e8:e9,bus=pci.0,addr=0x3 -chardev pty,id=charserial0,logfile=/var/lib/nova/instances/309f1364-4d58-413d-a865-dfc37ff04308/console.log,logappend=off -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 172.29.101.168:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on $ ldd /usr/libexec/qemu-kvm | grep -e ceph -e rbd librbd.so.1 => /lib64/librbd.so.1 (0x00007fde38815000) libceph-common.so.0 => /usr/lib64/ceph/libceph-common.so.0 (0x00007fde28247000)
Creating a volume
- Create a volume:
$ openstack volume create --size 1 volume1 +---------------------+--------------------------------------+ | Field| Value| +---------------------+--------------------------------------+ | attachments| []| | availability_zone| nova| | bootable| false| | consistencygroup_id | None| | created_at| 2018-08-27T11:33:52.000000| | description| None| | encrypted| False| | id| 3ccca300-bee3-4b5a-b89b-32e6b8b806d9 | | migration_status| None| | multiattach| False| | name| volume1| | properties|| | replication_status| None| | size| 1| | snapshot_id| None| | source_volid| None| | status| creating| | type| None| | updated_at| None| | user_id| c7111728fbbd4fd79bdd2b60e7d7cb42| +---------------------+--------------------------------------+
- Check the pools again: the new volume is placed in the volumes pool:
$ rbd -p images ls
293b25bb-30be-4839-b4e2-1dba3c43a56a
$ rbd -p volumes ls
volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9
$ rbd -p backups ls
$ rbd -p vms ls
309f1364-4d58-413d-a865-dfc37ff04308_disk
Creating a backup
- Create a volume backup; it is stored in the backups pool:
$ openstack volume backup create 3ccca300-bee3-4b5a-b89b-32e6b8b806d9 +-------+--------------------------------------+ | Field | Value| +-------+--------------------------------------+ | id| f2321578-88d5-4337-b93c-798855b817ce | | name| None| +-------+--------------------------------------+ $ openstack volume backup list +--------------------------------------+------+-------------+-----------+------+ | ID| Name | Description | Status| Size | +--------------------------------------+------+-------------+-----------+------+ | f2321578-88d5-4337-b93c-798855b817ce | None | None| available |1 | +--------------------------------------+------+-------------+-----------+------+ $ openstack volume backup show f2321578-88d5-4337-b93c-798855b817ce +-----------------------+--------------------------------------+ | Field| Value| +-----------------------+--------------------------------------+ | availability_zone| nova| | container| backups| | created_at| 2018-08-27T11:39:40.000000| | data_timestamp| 2018-08-27T11:39:40.000000| | description| None| | fail_reason| None| | has_dependent_backups | False| | id| f2321578-88d5-4337-b93c-798855b817ce | | is_incremental| False| | name| None| | object_count| 0| | size| 1| | snapshot_id| None| | status| available| | updated_at| 2018-08-27T11:39:46.000000| | volume_id| 3ccca300-bee3-4b5a-b89b-32e6b8b806d9 | +-----------------------+--------------------------------------+ $ rbd -p backups ls volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base
- Create another backup; the backups pool itself does not gain a new image, only a new snapshot is added to the existing backup base image:
$ openstack volume backup create 3ccca300-bee3-4b5a-b89b-32e6b8b806d9
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | 07132063-9bdb-4391-addd-a791dae2cfea |
| name  | None                                 |
+-------+--------------------------------------+
$ rbd -p backups ls
volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base
$ rbd -p backups snap list volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base
SNAPID NAME                                                            SIZE  TIMESTAMP
     4 backup.f2321578-88d5-4337-b93c-798855b817ce.snap.1535369984.08  1 GiB Mon Aug 27 19:39:46 2018
     5 backup.07132063-9bdb-4391-addd-a791dae2cfea.snap.1535370126.76  1 GiB Mon Aug 27 19:42:08 2018
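- This is how the Ceph backup driver stores RBD backups: a single .backup.base image plus one snapshot per backup, with only the differences transferred between snapshots. When the source volume is itself in RBD, Cinder can also be asked for an explicitly incremental backup; a hedged sketch reusing the volume ID above:
$ openstack volume backup create --incremental 3ccca300-bee3-4b5a-b89b-32e6b8b806d9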
Attaching the volume
- Attach the new volume to the virtual machine created earlier:
$ openstack server add volume demo1 volume1 $ openstack volume show volume1 +--------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field| Value| +--------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | attachments| [{u'server_id': u'309f1364-4d58-413d-a865-dfc37ff04308', u'attachment_id': u'fb4d9ec0-8a33-4ed0-8845-09e6f17aac81', u'attached_at': u'2018-08-27T11:44:51.000000', u'host_name': u'osdev03', u'volume_id': u'3ccca300-bee3-4b5a-b89b-32e6b8b806d9', u'device': u'/dev/vdb', u'id': u'3ccca300-bee3-4b5a-b89b-32e6b8b806d9'}] | | availability_zone| nova| | bootable| false| | consistencygroup_id| None| | created_at| 2018-08-27T11:33:52.000000| | description| None| | encrypted| False| | id| 3ccca300-bee3-4b5a-b89b-32e6b8b806d9| | migration_status| None| | multiattach| False| | name| volume1| | os-vol-host-attr:host| rbd:volumes@rbd-1#rbd-1| | os-vol-mig-status-attr:migstat | None| | os-vol-mig-status-attr:name_id | None| | os-vol-tenant-attr:tenant_id| 68ada1726a864e2081a56be0a2dca3a0| | properties| attached_mode='rw'| | replication_status| None| | size| 1| | snapshot_id| None| | source_volid| None| | status| in-use| | type| None| | updated_at| 2018-08-27T11:44:52.000000| | user_id| c7111728fbbd4fd79bdd2b60e7d7cb42| +--------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
- On the node hosting the virtual machine, check how its libvirt domain changed: a second RBD disk has been added:
$ sudo docker exec -it nova_libvirt virsh dumpxml 1 ... <disk type='network' device='disk'> <driver name='qemu' type='raw' cache='none'/> <auth username='nova'> <secret type='ceph' uuid='2ea5db42-c8f1-4601-927c-3c64426907aa'/> </auth> <source protocol='rbd' name='vms/309f1364-4d58-413d-a865-dfc37ff04308_disk'> <host name='172.29.101.166' port='6789'/> <host name='172.29.101.167' port='6789'/> <host name='172.29.101.168' port='6789'/> </source> <target dev='vda' bus='virtio'/> <alias name='virtio-disk0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk> <disk type='network' device='disk'> <driver name='qemu' type='raw' cache='none' discard='unmap'/> <auth username='cinder-volume'> <secret type='ceph' uuid='3fa55f7c-b556-4095-9253-b908d5408ec8'/> </auth> <source protocol='rbd' name='volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9'> <host name='172.29.101.166' port='6789'/> <host name='172.29.101.167' port='6789'/> <host name='172.29.101.168' port='6789'/> </source> <target dev='vdb' bus='virtio'/> <serial>3ccca300-bee3-4b5a-b89b-32e6b8b806d9</serial> <alias name='virtio-disk1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </disk> ...
- Create a floating IP for the virtual machine and log in to it over SSH:
$ openstack console url show demo1 +-------+-------------------------------------------------------------------------------------+ | Field | Value| +-------+-------------------------------------------------------------------------------------+ | type| novnc| | url| http://172.29.101.167:6080/vnc_auto.html?token=9f835216-1c53-41ae-849a-44a85429a334 | +-------+-------------------------------------------------------------------------------------+ $ openstack floating ip create public1 +---------------------+--------------------------------------+ | Field| Value| +---------------------+--------------------------------------+ | created_at| 2018-08-27T11:49:02Z| | description|| | fixed_ip_address| None| | floating_ip_address | 192.168.162.52| | floating_network_id | ff69b3ff-c2c4-4474-a7ba-952fa99df919 | | id| 2aa86075-9c62-49f5-84ac-e7b6353c9591 | | name| 192.168.162.52| | port_id| None| | project_id| 68ada1726a864e2081a56be0a2dca3a0| | qos_policy_id| None| | revision_number| 0| | router_id| None| | status| DOWN| | subnet_id| None| | tags| []| | updated_at| 2018-08-27T11:49:02Z| +---------------------+--------------------------------------+ $ openstack server add floating ip demo1 192.168.162.52 $ openstack server list +--------------------------------------+-------+--------+------------------------------------+--------+---------+ | ID| Name| Status | Networks| Image| Flavor| +--------------------------------------+-------+--------+------------------------------------+--------+---------+ | 309f1364-4d58-413d-a865-dfc37ff04308 | demo1 | ACTIVE | demo-net=10.0.0.11, 192.168.162.52 | cirros | m1.tiny | +--------------------------------------+-------+--------+------------------------------------+--------+---------+ $ ssh root@osdev02 $ ip netns qrouter-65759e60-6e20-41cc-a79c-fc492232b127 (id: 1) qdhcp-9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 (id: 0) $ ip netns exec qrouter-65759e60-6e20-41cc-a79c-fc492232b127 ping 192.168.162.50 $ ip netns exec qdhcp-9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 ping 10.0.0.9 (使用者名稱"cirros",密碼"gocubsgo") $ ip netns exec qrouter-65759e60-6e20-41cc-a79c-fc492232b127 ssh [email protected] $ ip netns exec qdhcp-9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 ssh [email protected] $ sudo passwd root Changing password for root New password: Bad password: too weak Retype password: Password for root changed by root $ su - Password:
- Format the new disk, write a test file to it, and finally unmount it:
# lsblk NAMEMAJ:MIN RMSIZE RO TYPE MOUNTPOINT vda253:001G0 disk |-vda1253:10 1015M0 part / `-vda15 253:1508M0 part vdb253:1601G0 disk # mkfs.ext4 /dev/vdb mke2fs 1.42.12 (29-Aug-2014) Creating filesystem with 262144 4k blocks and 65536 inodes Filesystem UUID: ede8d366-bfbc-4b9a-9d3f-306104f410d7 Superblock backups stored on blocks: 32768, 98304, 163840, 229376 Allocating group tables: done Writing inode tables: done Creating journal (8192 blocks): done Writing superblocks and filesystem accounting information: done # mount /dev/vdb /mnt # df -h FilesystemSizeUsed Available Use% Mounted on /dev240.1M0240.1M0% /dev /dev/vda1978.9M23.9M914.1M3% / tmpfs244.2M0244.2M0% /dev/shm tmpfs244.2M92.0K244.1M0% /run /dev/vdb975.9M1.3M907.4M0% /mnt # echo "hello openstack, volume test." > /mnt/ceph_rbd_test # umount /mnt # df -h FilesystemSizeUsed Available Use% Mounted on /dev240.1M0240.1M0% /dev /dev/vda1978.9M23.9M914.1M3% / tmpfs244.2M0244.2M0% /dev/shm tmpfs244.2M92.0K244.1M0% /run
Detaching the volume
- Detach the volume and check the change inside the virtual machine:
$ openstack server remove volume demo1 volume1
# lsblk
NAME    MAJ:MIN RM  SIZE  RO TYPE MOUNTPOINT
vda     253:0    0  1G     0 disk
|-vda1  253:1    0  1015M  0 part /
`-vda15 253:15   0  8M     0 part
- Map and mount the RBD volume on the host, then check the file that was created inside the virtual machine earlier; the content is identical. The extra image features are disabled first because the kernel RBD client cannot map images that use them:
$ rbd showmapped
id pool image    snap device
0  rbd  rbd_test -    /dev/rbd0
$ rbd feature disable volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9 object-map fast-diff deep-flatten
$ rbd map volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9
/dev/rbd1
$ mkdir /mnt/volume1
$ mount /dev/rbd1 /mnt/volume1/
$ ls /mnt/volume1/
ceph_rbd_test  lost+found/
$ cat /mnt/volume1/ceph_rbd_test
hello openstack, volume test.
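- To leave the host clean after this check, unmount the filesystem and unmap the RBD device again:
$ umount /mnt/volume1
$ rbd unmap /dev/rbd1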
References
- External Ceph (Kolla-Ansible documentation)