OpenStack (Part 2) | 听云轩

OpenStack (Part 2)

Installation (Queens release; the next post covers installing with kolla-ansible)

Environment

192.168.40.150  computer01   CentOS 7, kernel 4.4.197
192.168.40.151  controller
192.168.40.152  cinder

1. Configure hostname mapping (add the corresponding host/IP entries to the /etc/hosts file on every node)

2. Configure time synchronization; the controller node acts as the NTP server

All nodes:
yum install chrony -y

controller:
vim /etc/chrony.conf
allow 192.168.0.0/16
systemctl restart chronyd

computer01 and cinder:
vim /etc/chrony.conf
server controller iburst

Comment out the other server lines, then restart chronyd.

Verify:
On the controller node:

(screenshot: chrony sources on the controller)

On the other nodes:

(screenshot: chrony sources on computer01/cinder)
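The two screenshots record chronyc output. A minimal check, run on any node once chrony is configured as above:

```shell
# List the NTP sources chrony is using. On the controller this should
# show its upstream servers; on computer01 and cinder the only source
# should be "controller", marked with ^* once synchronized.
chronyc sources
```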

3. Install the OpenStack packages (all nodes)

yum install centos-release-openstack-queens -y
yum upgrade
yum install python-openstackclient -y
yum install openstack-selinux -y

4. Install the database (controller)

16
yum install mariadb mariadb-server python2-PyMySQL -y

vim /etc/my.cnf.d/mariadb-server.cnf

datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
default-storage-engine = innodb
bind-address = 192.168.40.151    # the controller's IP
innodb_file_per_table=on
max_connections=4096
init-connect='SET NAMES utf8'
character-set-server=utf8

systemctl enable mariadb.service
systemctl start mariadb.service
mysql_secure_installation

5. Install RabbitMQ (controller)

yum install rabbitmq-server -y
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

Add a user and set its permissions (the password must match the one used in the service transport_url settings later on):
rabbitmqctl add_user openstack 123456
rabbitmqctl set_permissions -p / openstack ".*" ".*" ".*"

6. Install the caching service (controller)

yum install memcached python-memcached -y
Edit /etc/sysconfig/memcached

(screenshot: /etc/sysconfig/memcached)

Note: do not append controller to the OPTIONS line.

systemctl enable memcached.service
systemctl start memcached.service
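For reference, a sketch of what /etc/sysconfig/memcached looks like on CentOS 7 (the values are the packaged defaults; note that the official Queens guide instead appends ,controller to OPTIONS so other nodes can reach the cache):

```ini
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1"
```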

7. Install etcd (controller)

(screenshot: etcd installation and configuration)
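The screenshot follows the official Queens install guide; a sketch of those steps, substituting this deployment's controller IP (an assumption):

```shell
yum install etcd -y

# In /etc/etcd/etcd.conf, set the member and clustering options:
#   ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#   ETCD_LISTEN_PEER_URLS="http://192.168.40.151:2380"
#   ETCD_LISTEN_CLIENT_URLS="http://192.168.40.151:2379"
#   ETCD_NAME="controller"
#   ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.40.151:2380"
#   ETCD_ADVERTISE_CLIENT_URLS="http://192.168.40.151:2379"
#   ETCD_INITIAL_CLUSTER="controller=http://192.168.40.151:2380"
#   ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
#   ETCD_INITIAL_CLUSTER_STATE="new"

systemctl enable etcd
systemctl start etcd
```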

8. Install keystone (controller)

  • Create the keystone database and grant privileges

(screenshot: keystone database creation and grants)
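A sketch of what the screenshot shows, run in the MariaDB shell (mysql -u root -p); the 123456 password is an assumption, matching the passwords used elsewhere in this post:

```sql
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';
```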

  • Install and configure
yum install openstack-keystone httpd mod_wsgi -y

Edit the /etc/keystone/keystone.conf file:

(screenshot: keystone.conf database and token settings)
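The two settings the Queens guide changes in keystone.conf, sketched here assuming the same 123456 database password as the rest of this post:

```ini
[database]
connection = mysql+pymysql://keystone:123456@controller/keystone

[token]
provider = fernet
```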

  • Populate the keystone database
su -s /bin/sh -c "keystone-manage db_sync" keystone
  • Initialize the Fernet key repositories
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
  • Bootstrap the Identity service
keystone-manage bootstrap --bootstrap-password 123456 --bootstrap-admin-url http://controller:35357/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne

9. Configure the HTTP service

  • Edit the /etc/httpd/conf/httpd.conf file and set the ServerName parameter
ServerName controller
  • Link the /usr/share/keystone/wsgi-keystone.conf file
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
  • Start the service (if it fails to start, try disabling SELinux)
systemctl enable httpd.service
systemctl start httpd.service
  • Configure the admin account environment variables
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3

10. Create domains, projects, users, and roles

  • Create a domain:
openstack domain create --description "Domain" example


  • Create the service project
openstack project create --domain default   --description "Service Project" service


  • Create the demo project
openstack project create --domain default --description "Demo Project" demo


  • Create the demo user
openstack user create --domain default  --password-prompt demo


  • Create the user role
openstack role create user


  • Add the user role to the demo project and user
openstack role add --project demo --user demo user

11. Verify the configuration

  • Unset the environment variables
unset OS_AUTH_URL OS_PASSWORD
  • Request a token as admin
openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue


  • Request a token as demo
openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name demo --os-username demo token issue


  • Generate tokens via a script: to improve usability and efficiency, create a single, complete openrc file containing the common and per-user variables.

(screenshot: contents of the admin-openrc file)

source admin-openrc
openstack token issue
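A sketch of an admin-openrc file assembled from the variables exported earlier (OS_IMAGE_API_VERSION is an addition taken from the official guide):

```shell
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
```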

12. Install the glance service (controller)

  • Create the glance database and grant privileges

(screenshot: glance database creation and grants)
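A sketch of the database setup in the screenshot, assuming the 123456 password used in the glance connection strings below:

```sql
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '123456';
```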

  • Load the admin environment variables and create the service credentials
source admin-openrc
openstack user create --domain default --password-prompt glance


  • Add the admin role to the glance user in the service project
openstack role add --project service --user glance admin
  • Create the glance service entity
openstack service create --name glance  --description "OpenStack Image" image
  • Create the image service API endpoints
openstack endpoint create --region RegionOne image public http://controller:9292

openstack endpoint create --region RegionOne image internal http://controller:9292

openstack endpoint create --region RegionOne image admin http://controller:9292

13. Install and configure the glance components

yum install openstack-glance -y
Edit the /etc/glance/glance-api.conf file:
[database]  # configure the database connection
connection = mysql+pymysql://glance:123456@controller/glance

[keystone_authtoken]  # together with [paste_deploy], configures access to the keystone identity service
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456

[paste_deploy]
flavor = keystone

[glance_store]  # configure the image storage backend and path
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

Edit the /etc/glance/glance-registry.conf file:
[database]
connection = mysql+pymysql://glance:123456@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456

[paste_deploy]
flavor = keystone

Populate the image service database:
su -s /bin/sh -c "glance-manage db_sync" glance

Start the services:
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service

14. Verify:

  • Load the admin environment variables and download a test image:
source admin-openrc
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

Upload the image (if you get an error and port 9292 is not listening, it may be a log file permission problem):
openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public


View the image list:

(screenshot: openstack image list output)

15. Install and configure the Compute service (controller)

  • Create the nova_api, nova, and nova_cell0 databases

(screenshot: nova database creation and grants)
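A sketch of the three databases and their grants, with the 123456 passwords assumed from the nova.conf connection strings below:

```sql
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';
```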

  • Create the nova user
source admin-openrc
openstack user create --domain default --password-prompt nova


  • Add the admin role to the nova user in the service project
openstack role add --project service --user nova admin
  • Create the nova compute service entity
openstack service create --name nova --description "OpenStack Compute" compute


  • Create the Compute API service endpoints
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1

openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1

openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

  • Create the placement service user

(screenshot: creating the placement user and granting it the admin role)
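A sketch of what the screenshot shows, following the official guide:

```shell
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
```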

openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778

16. Install and configure the nova components

yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api

Edit the /etc/nova/nova.conf file:
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.40.151
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
connection = mysql+pymysql://nova:123456@controller/nova_api

[database]
connection = mysql+pymysql://nova:123456@controller/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456


Due to a packaging bug, you must add the following configuration to the /etc/httpd/conf.d/00-nova-placement-api.conf file:
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>

Restart httpd.

Populate the nova-api database:
su -s /bin/sh -c "nova-manage api_db sync" nova

Register the cell0 database:
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

Create the cell1 cell:
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

Populate the nova database:
su -s /bin/sh -c "nova-manage db sync" nova

Verify:
(screenshot: nova cell listing)

Restart the services:

systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
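The verification screenshot above corresponds to the cell listing command; a sketch:

```shell
# Confirm that cell0 and cell1 are registered correctly
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
```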

17. Install and configure compute (on the computer01 node)

yum install openstack-nova-compute
Edit the /etc/nova/nova.conf file:
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.40.150    # the compute node's management IP address
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver


[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456

[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 123456

[libvirt]
virt_type=kvm

Start the services:
systemctl enable openstack-nova-compute
systemctl enable libvirtd.service
systemctl start libvirtd.service
systemctl restart openstack-nova-compute.service
  • Confirm that the nova compute service components are running and registered. On the controller:
source admin-openrc
openstack compute service list --service nova-compute


  • Discover the compute node:
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova


18. Verify Compute service operation on the controller

  • List the service components

(screenshot: openstack compute service list output)

  • List the API endpoints registered in the Identity service

(screenshot: openstack catalog list output)

  • List images

(screenshot: openstack image list output)

  • Check the cells and placement API

(screenshot: nova-status upgrade check output)
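The commands behind the four verification screenshots above, per the official guide:

```shell
source admin-openrc

openstack compute service list   # scheduler, conductor, consoleauth, compute should all be "up"
openstack catalog list           # endpoints registered in keystone
openstack image list             # the cirros image uploaded earlier
nova-status upgrade check        # cells and placement API sanity check
```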

19. Install and configure the Networking components (controller)

  • Create the database and grant privileges

(screenshot: neutron database creation and grants)
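A sketch of the database setup shown above, assuming the 123456 password used in neutron.conf below:

```sql
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';
```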

  • Create the neutron user
openstack user create --domain default --password-prompt neutron


  • Add the admin role to the neutron user in the service project
openstack role add --project service --user neutron admin
  • Create the service entity
openstack service create --name neutron --description "OpenStack Networking" network


  • Create the networking service API endpoints
openstack endpoint create --region RegionOne network public http://controller:9696

openstack endpoint create --region RegionOne network internal http://controller:9696

openstack endpoint create --region RegionOne network admin http://controller:9696

20. Configure the networking components (controller)

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
Configure the server component; edit the /etc/neutron/neutron.conf file:
[DEFAULT]
auth_strategy = keystone
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:123456@controller
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:123456@controller/neutron

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

21. Configure the ML2 (layer-2) plug-in (controller)

Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file:
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[securitygroup]
enable_ipset = true

Configure the Linux bridge agent; edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file:
[linux_bridge]
physical_interface_mappings = provider:ens33

[vxlan]
enable_vxlan = false

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

(screenshot: bridge-nf-call sysctl values)

Make sure both values are set to 1.
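The screenshot shows the kernel parameters the Linux bridge agent depends on; a sketch of how to check them:

```shell
# Ensure bridged traffic is passed through iptables; both values must be 1
modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables
# If either is 0, set it persistently in /etc/sysctl.conf and run sysctl -p
```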

Configure the DHCP agent; edit the /etc/neutron/dhcp_agent.ini file:
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Configure the metadata agent; edit the /etc/neutron/metadata_agent.ini file:
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 123456

Configure the Compute service to use the Networking service; edit the /etc/nova/nova.conf file:
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456

Create a symlink for the plug-in configuration:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Populate the database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Restart the Compute API service:
systemctl restart openstack-nova-api.service

Enable and start the networking services:
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

22. Configure the Networking service on the compute node

yum install openstack-neutron-linuxbridge ebtables ipset

Configure the common components; edit the /etc/neutron/neutron.conf file:
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:123456@controller

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Configure the Linux bridge agent; edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file:
[linux_bridge]
physical_interface_mappings = provider:ens33

[vxlan]
enable_vxlan = false

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure the compute node to use the Networking service; edit the /etc/nova/nova.conf file:
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456

Restart the Compute service:
systemctl restart openstack-nova-compute.service

Enable and start the Linux bridge agent:
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service

23. Install the horizon components (controller)

yum install openstack-dashboard -y

Edit the /etc/openstack-dashboard/local_settings file:
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']

### Configure memcached session storage ###
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}

### Enable Identity API version 3 ###
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

### Enable support for domains ###
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

### Configure API versions ###
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'

OPENSTACK_NEUTRON_NETWORK = {
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
# When editing, reuse the file's existing closing "}"; a duplicated brace will prevent the web service from restarting

TIME_ZONE = "Asia/Shanghai"
Restart the web server and session storage:
systemctl restart httpd.service memcached.service

(screenshot: horizon dashboard login page)

If you get an error:

(screenshot: horizon error page and httpd log)

Below the WSGISocketPrefix run/wsgi line, add:
WSGIApplicationGroup %{GLOBAL}
then restart httpd.


24. Install and configure cinder

  • On the controller node, create the cinder database and grant privileges to the cinder user, as with the previous services
  • Load the admin environment variables and create the identity service credentials (password 123456):
source admin-openrc 
openstack user create --domain default --password-prompt cinder

(screenshot: creating the cinder user)
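The cinder database setup follows the same pattern as the earlier services; a sketch, with the 123456 password assumed:

```sql
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '123456';
```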

Add the admin role to the cinder user in the service project:
openstack role add --project service --user cinder admin

Create the cinder and cinderv2 service entities (ideally create v2 and v3):
openstack service create --name cinder --description "OpenStack block storage" volume

openstack service create --name cinderv2 --description "OpenStack block storage" volumev2

Create the API endpoints:
openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s

openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s

openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s

Install and configure the cinder components

yum install -y openstack-cinder

Edit the /etc/cinder/cinder.conf file:
Configure the database connection:

(screenshot: cinder.conf [database] section)

Configure RabbitMQ:
(screenshot: cinder.conf transport_url)
Configure the authentication service:
(screenshot: cinder.conf [keystone_authtoken] section)
Configure the node management IP address:
(screenshot: cinder.conf my_ip)
Configure the lock path:
(screenshot: cinder.conf lock_path)
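A sketch of the controller-side cinder.conf settings the screenshots cover, following the official Queens guide and assuming the 123456 passwords used throughout this post:

```ini
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
my_ip = 192.168.40.151

[database]
connection = mysql+pymysql://cinder:123456@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
```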
Populate the block storage database (ignore any deprecation messages in the output):

su -s /bin/sh -c "cinder-manage db sync" cinder

Edit nova.conf:
(screenshot: the [cinder] section of nova.conf)
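The change the screenshot makes to /etc/nova/nova.conf on the controller, per the official guide:

```ini
[cinder]
os_region_name = RegionOne
```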
Start the services:

systemctl restart openstack-nova-api
systemctl enable openstack-cinder-api openstack-cinder-scheduler
systemctl restart openstack-cinder-api openstack-cinder-scheduler

25. Install and configure the storage node (cinder)

  • Install and start lvm2
  • Create the physical volume and volume group
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
Cinder block storage volumes are normally accessed only by VM instances. However, the storage node's operating system manages the local hardware, including its disks. By default, the LVM volume-scanning tool scans every device under /dev; if projects use LVM inside their volumes, the scanner will detect those volumes and may try to cache them, which can cause problems both for the host operating system and for the project volumes. You must reconfigure LVM to scan only the devices that contain the cinder-volumes volume group.

Edit the /etc/lvm/lvm.conf file and make the following change:

(screenshot: the LVM device filter in lvm.conf)

Each element of the filter array begins with a (accept) or r (reject), followed by a regular expression matching device names. The filter must end with r/.*/ to reject all remaining devices. You can test the filter with vgs -vvvv.
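A sketch of the filter shown in the screenshot, matching this setup where /dev/sdb holds the cinder-volumes volume group:

```ini
# In the devices section of /etc/lvm/lvm.conf: accept only /dev/sdb and
# reject everything else. If the OS disk also uses LVM (e.g. /dev/sda),
# add "a/sda/" as well.
devices {
    filter = [ "a/sdb/", "r/.*/" ]
}
```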

  • Install the components

yum install openstack-cinder targetcli python-keystone -y

  • Edit the configuration file:
Configure the database:
(screenshot: cinder.conf [database] section)
Configure the message queue:
(screenshot: cinder.conf transport_url)
Configure authentication:
(screenshot: cinder.conf [keystone_authtoken] section)
Configure my_ip:
(screenshot: cinder.conf my_ip)
Configure LVM:
(screenshot: cinder.conf [lvm] section and enabled_backends)
Configure the image service location:
(screenshot: cinder.conf glance_api_servers)
Configure the lock path:
(screenshot: cinder.conf lock_path)
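A sketch of the storage-node cinder.conf settings the screenshots cover, per the official Queens guide, assuming the 123456 passwords and this node's 192.168.40.152 management IP:

```ini
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
my_ip = 192.168.40.152
enabled_backends = lvm
glance_api_servers = http://controller:9292

[database]
connection = mysql+pymysql://cinder:123456@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
```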
  • Enable and start the services:

systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service

  • Verify:
source admin-openrc

(screenshot: openstack volume service list output)

If you get the error:
ERROR: publicURL endpoint for volumev3 service not found

add the following to admin-openrc:

export OS_VOLUME_API_VERSION=2

Load the demo credentials and create a 1 GB volume:

source demo-openrc
cinder create --name test1 1

(screenshot: cinder create output)

Then view the created volume:

(screenshot: cinder list output)

------ End of post ------