[TOC]
各组件功能
OpenStack 通过 Nova 调用 KVM/XEN/VMware 等虚拟化技术创建虚拟机。OpenStack 本身是一个管理平台框架,支持多种虚拟化后端;Cinder 存储支持 GlusterFS、iSCSI、MFS 等存储技术。
服务名称 | 项目名称 | 详细说明 |
---|---|---|
dashboard | Horizon | 基于OpenStack API 接口使用 django 开发的 web 管理服务 |
compute | Nova | 通过虚拟化技术提供虚拟机计算资源池 |
networking | Neutron | 实现了虚拟机的网络资源管理,即虚拟机网络 |
object storage | Swift | 对象存储,适用于一次写入多次读取。如:图片、ISO镜像 |
block storage | Cinder | 块存储,提供存储资源池,保存虚拟机的磁盘镜像等信息 |
identity service | Keystone | 提供账户登录安全认证 |
image service | Glance | 提供虚拟镜像的注册和存储管理 |
telemetry | Ceilometer | 提供监控和数据采集,计量服务 |
orchestration | Heat | 编排服务,实现资源的自动化部署
database service | Trove | 提供数据库应用服务 |
版本说明
本手册以 ocata 版本为例。
Alpha:内部测试版。
Dev:软件开发过程中的开发版代号,相比于 Beta 版,Dev 版本出现得更早。
Beta:测试版,这个阶段的版本一般会加入新的功能。
RC(Release Candidate):发行候选版本,RC 版不会再加入新功能,主要着重于除错。
GA(General Availability):正式发布的版本。
安装准备
1. 查看 OpenStack yum 版本
yum list centos-release-openstack*
2. 安装 yum 源(负载服务、数据库、memcache、rabbitMQ服务器除外)
yum install -y centos-release-openstack-ocata.noarch
yum install -y https://rdoproject.org/repos/rdo-release.rpm
3. 各服务器安装 OpenStack 客户端、SElinux管理包
yum install -y python-openstackclient
yum install -y openstack-selinux
4. 安装数据库
openstack 各组件都要使用数据库保存数据,组件之间则通过 API 相互调用。
yum install -y mariadb python2-PyMySQL #用于控制端连接数据库
yum install -y mariadb-server #安装数据库
5. 配置数据库
## vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.10.204 #指定监听地址
default-storage-engine = innodb #默认引擎
innodb_file_per_table = on #开启每个表都有独立表空间
max_connections = 4096 #最大连接数
collation-server = utf8_general_ci #不区分大小写排序
character-set-server = utf8 #设置编码
配置 /etc/my.cnf
[mysqld]
socket=/var/lib/mysql/mysql.sock
user=mysql
symbolic-links=0
datadir=/data/mysql
innodb_file_per_table=1
#skip-grant-tables
relay-log=/data/mysql
server-id=10
log-error=/data/mysql-log/mysql_error.txt
log-bin=/data/mysql-binlog/master-log
#general_log=ON
#general_log_file=/data/general_mysql.log
long_query_time=5
slow_query_log=1
slow_query_log_file=/data/mysql-log/slow_mysql.txt
max_connections=10000
bind-address=192.168.10.204
[client]
port=3306
socket=/var/lib/mysql/mysql.sock
[mysqld_safe]
log-error=/data/mysql-log/mysqld-safe.log
pid-file=/var/lib/mysql/mysql.pid
6. 创建数据目录并授权
mkdir -pv /data/{mysql,mysql-log,mysql-binlog}
chown -R mysql.mysql /data/
7. 启动 MariaDB ,并验证
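原文此步未列出具体命令,下面是一个示意性的启动与验证过程(假设使用 systemd 管理的 mariadb 服务):
systemctl enable mariadb
systemctl start mariadb
ss -tnl | grep 3306 #确认监听在 192.168.10.204:3306
mysql -e "SHOW DATABASES;" #能正常列出库即说明服务可用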
8. 安装 keepalived
wget http://www.keepalived.org/software/keepalived-1.3.6.tar.gz
tar xf keepalived-1.3.6.tar.gz
cd keepalived-1.3.6
yum install libnfnetlink-devel libnfnetlink ipvsadm libnl libnl-devel libnl3 libnl3-devel lm_sensors-libs net-snmp-agent-libs net-snmp-libs openssh-server openssh-clients openssl openssl-devel tree sudo psmisc lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute
./configure --prefix=/usr/local/keepalived --disable-fwmark && make && make install
cp /usr/local/src/keepalived-1.3.6/keepalived/etc/init.d/keepalived.rh.init /etc/sysconfig/keepalived.sysconfig
cp /usr/local/src/keepalived-1.3.6/keepalived/keepalived.service /usr/lib/systemd/system/
cp /usr/local/src/keepalived-1.3.6/bin/keepalived /usr/sbin/
9. 准备 keepalived 配置文件
master服务器:vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 1
priority 100
advert_int 1
unicast_src_ip 192.168.10.204
unicast_peer {
192.168.10.205
}
authentication {
auth_type PASS
auth_pass 123456
}
virtual_ipaddress {
192.168.10.100/24 dev eth0 label eth0:0
}
}
backup服务器:vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 1
priority 50
advert_int 1
unicast_src_ip 192.168.10.205
unicast_peer {
192.168.10.204
}
authentication {
auth_type PASS
auth_pass 123456
}
virtual_ipaddress {
192.168.10.100/24 dev eth0 label eth0:0
}
}
10. 启动并验证keepalived
systemctl enable keepalived
systemctl start keepalived
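启动后可在 master 服务器上确认 VIP 是否已经绑定(示意):
systemctl status keepalived
ip addr show eth0 #master 上应能看到 192.168.10.100 绑定在 eth0:0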
11. 安装 haproxy
wget http://www.haproxy.org/download/1.7/src/haproxy-1.7.9.tar.gz
tar xvf haproxy-1.7.9.tar.gz
cd haproxy-1.7.9
make TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 PREFIX=/usr/local/haproxy
make install PREFIX=/usr/local/haproxy
cp ./haproxy-systemd-wrapper /usr/sbin/haproxy-systemd-wrapper
cp ./haproxy /usr/sbin/haproxy
12. 准备haproxy启动脚本 vim /usr/lib/systemd/system/haproxy.service
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target
[Service]
EnvironmentFile=/etc/sysconfig/haproxy
ExecStart=/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid $OPTIONS
ExecReload=/bin/kill -USR2 $MAINPID
[Install]
WantedBy=multi-user.target
13. 准备系统配置文件 vim /etc/sysconfig/haproxy
OPTIONS=""
14. 修改主配置文件 mkdir /var/lib/haproxy;mkdir /etc/haproxy;vim /etc/haproxy/haproxy.cfg
global
maxconn 100000
uid 99
daemon
nbproc 1
log 127.0.0.1 local0 info
chroot /usr/local/haproxy
stats socket /var/lib/haproxy/haproxy.socket mode 600 level admin
defaults
option redispatch #当 serverId 对应的服务器挂掉后,强制定向到其他健康的服务器
option abortonclose #当服务器负载很高的时候,自动结束掉当前队列处理比较久的链接
option http-keep-alive
option forwardfor
maxconn 100000
mode http
timeout connect 10s #连接到一台服务器的最长等待时间
timeout client 20s #连接客户端发送数据最长等待时间
timeout server 30s #服务器回应客户端发送数据最长等待时间
timeout check 5s #对后端服务器的检测超时时间
listen stats
mode http
bind 0.0.0.0:9999
stats enable
log global
stats uri /haproxy-status
stats auth haadmin:33445566
frontend test
bind 192.168.10.100:80
mode http
default_backend test_http_nodes
backend test_http_nodes
mode http
balance source
server 127.0.0.1 127.0.0.1:80 check inter 2000 fall 3 rise 5
15. 各负载服务器配置内核参数 vim /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
16. 启用 haproxy
sysctl -p
systemctl start haproxy
systemctl enable haproxy
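启动后可做简单验证(示意):
haproxy -f /etc/haproxy/haproxy.cfg -c #检查配置文件语法是否正确
ss -tnl | grep 9999 #确认状态页端口已监听,浏览器访问 http://192.168.10.100:9999/haproxy-status(账号见 stats auth 配置)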
17. 安装 rabbitMQ
yum install -y rabbitmq-server
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
18. 添加 rabbitMQ 客户端用户并设置密码
rabbitmqctl add_user openstack 123456
19. 赋予 openstack 用户读写权限
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
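授权后可验证用户及权限是否生效(示意):
rabbitmqctl list_users
rabbitmqctl list_permissions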
20. 打开 rabbitMQ 的 web 插件
rabbitmq-plugins enable rabbitmq_management
rabbitmq-plugins list #查看插件
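插件启用后,管理界面默认监听 15672 端口,可按如下方式验证(示意;guest 账户默认仅允许本机登录,如需用自建用户登录管理界面,一般还需用 rabbitmqctl set_user_tags 为其设置 administrator 标签):
ss -tnl | grep 15672
#浏览器访问 http://服务器IP:15672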
21. 安装 memcached
用于缓存 openstack 各服务的身份认证令牌信息。
yum install -y memcached
yum install -y python-memcached #openstack 安装 python 模块
22. 编辑memcached配置文件 vim /etc/sysconfig/memcached
PORT="11212" #避免和haproxy监听的11211冲突
USER="memcached"
MAXCONN="1024"
CACHESIZE="512"
OPTIONS="-l 192.168.10.205"
23. 启动memcached
systemctl enable memcached
systemctl start memcached
部署认证服务 keystone
keystone 主要涉及以下几个概念:
User:使用 openstack 的用户。
Tenant:租户、用户组,在一个租户中可以有多个用户,这些用户可以根据权限的划分,使用租户中的资源。
Role:角色,用于分配操作的权限。角色可以被指定给用户,使得该用户获得角色对应的操作权限。
Token:一串比特值或字符串,用来作为访问资源的令牌。Token 中含有可访问资源的范围和有效时间。
1. keystone 数据库配置
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' identified by 'keystone';
FLUSH PRIVILEGES;
2. 配置 haproxy 代理
###openstack-mysql#######
frontend openstack_mysql
bind 192.168.10.100:3306
mode tcp
default_backend openstack_mysql_node
backend openstack_mysql_node
mode tcp
balance source
server 192.168.10.204 192.168.10.204:3306 check inter 2000 fall 3 rise 5
###openstack-memcached########
frontend openstack_memcached
bind 192.168.10.100:11211
mode tcp
default_backend openstack_memcached_node
backend openstack_memcached_node
mode tcp
balance source
server 192.168.10.205 192.168.10.205:11212 check inter 2000 fall 3 rise 5
3. 安装 keystone
yum install -y openstack-keystone httpd mod_wsgi python-memcached
## openstack-keystone 是 keystone 服务
## mod_wsgi 是 python 的通用网关
4. 编辑 keystone 配置文件
openssl rand -hex 10 ## 生成临时 token
vim /etc/keystone/keystone.conf
admin_token = xxxxxxxxxxxxx #大概17行,改为上面生成的临时token
connection = mysql+pymysql://keystone:[email protected]/keystone #大概714行
provider = fernet #大概2833行
5. 初始化并验证数据库
su -s /bin/sh -c "keystone-manage db_sync" keystone
#验证是否已经有表
USE keystone;
SHOW tables;
6. 初始化证书并验证
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
7. 添加 keystone 的web配置
#vim /etc/httpd/conf/httpd.conf
ServerName 192.168.10.201:80
#软链接配置文件
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
#启动apache
systemctl start httpd
systemctl enable httpd
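启动后可确认 keystone 的两个端口已由 httpd 监听(示意):
ss -tnl | egrep '5000|35357'
systemctl status httpd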
8. 创建域、用户、项目和角色
- 通过admin的token设置环境变量
export OS_TOKEN=xxxxxxxxxxxxxxxxx
export OS_URL=http://192.168.10.201:35357/v3
export OS_IDENTITY_API_VERSION=3
- 创建默认域 一定要先设置环境变量,否则提示未认证
openstack domain create --description "Default Domain" default
- 创建一个 admin 的项目
openstack project create --domain default --description "Admin Project" admin
- 创建 admin 用户,并设置密码为 admin
openstack user create --domain default --password-prompt admin
- 创建 admin 角色
一个项目里面可以有多个角色,目前只能创建 /etc/keystone/policy.json 文件中定义好的角色。
openstack role create admin
- 给 admin 用户授权
将 admin 用户授予 admin 项目的 admin 角色。
openstack role add --project admin --user admin admin
9. 创建 demo 项目
该项目可用于演示或测试等。
openstack project create --domain default --description "Demo project" demo
openstack user create --domain default --password-prompt demo
openstack role create user
openstack role add --project demo --user demo user
10. 创建 service 项目
各服务之间与 keystone 进行访问和认证,service 用于给服务创建用户。
#创建service项目
openstack project create --domain default --description "Service Project" service
#创建glance用户
openstack user create --domain default --password-prompt glance
#对glance用户授权(添加到service项目,并授予admin角色)
openstack role add --project service --user glance admin
11. 创建 nova、neutron 用户
openstack user create --domain default --password-prompt nova
openstack role add --project service --user nova admin
openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin
12. 将 keystone 服务注册到 openstack
#创建一个keystone认证服务
openstack service list ## 查看当前的服务列表
openstack service create --name keystone --description "Openstack Identity" identity
#创建endpoint(如果出现错误,需要全部删除再重新注册。注册的IP地址写keepalived的VIP)
openstack endpoint create --region RegionOne identity public http://192.168.10.100:5000/v3 #公共端点
openstack endpoint create --region RegionOne identity internal http://192.168.10.100:5000/v3 #私有端点
openstack endpoint create --region RegionOne identity admin http://192.168.10.100:35357/v3 #管理端点
13. 配置haproxy,添加keystone代理 vim /etc/haproxy/haproxy.cfg
listen keystone-public-url
bind 192.168.10.100:5000
mode tcp
log global
balance source
server keystone1 192.168.10.201:5000 check inter 5000 rise 3 fall 3
listen keystone-admin-url
bind 192.168.10.100:35357
mode tcp
log global
balance source
server keystone1 192.168.10.201:35357 check inter 5000 rise 3 fall 3
14. 重启haproxy,并验证访问
systemctl restart haproxy
telnet 192.168.10.100 5000
15. 测试 keystone 是否可以做用户验证
验证 admin 用户,密码 admin
export OS_IDENTITY_API_VERSION=3
openstack --os-auth-url http://192.168.10.100:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
16. 设置环境变量的脚本
admin用户:vim admin-ocata.sh
#!/bin/bash
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://192.168.10.100:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Demo用户:vim demo-ocata.sh
#!/bin/bash
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://192.168.10.100:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
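脚本准备好后,可按如下方式验证环境变量是否生效(示意):
source admin-ocata.sh
openstack token issue #能正常签发 token 即说明认证配置无误
openstack user list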
部署镜像服务 Glance
glance 服务默认监听端口为 9292,需要先把镜像上传到 glance,查看、删除等操作都是通过 glance 进行管理。
glance 有两个主要的服务:
glance-api:接收镜像的删除、上传、读取等请求;
glance-registry:负责与 mysql 交互,用于存储或获取镜像的元数据(metadata),默认监听端口为 9191。
glance 数据库有两张表:
image:存放镜像格式、大小等信息;
image property:存放镜像的定制化信息。
image store 是一个存储的接口层,通过这个接口 glance 可以获取镜像。支持的存储有 Amazon 的 S3、openstack 本身的 swift,还有 ceph、glusterFS 等分布式存储。
glance 不需要配置消息队列,但是需要配置数据库和keystone。
1. 安装 glance
yum install -y openstack-glance
2. 创建数据库用户
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' identified by 'glance';
FLUSH PRIVILEGES;
3. 编辑 glance-api 配置文件 grep -n "^[a-zA-Z\[]" /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:[email protected]/glance
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images
[keystone_authtoken]
auth_uri = http://192.168.10.100:5000
auth_url = http://192.168.10.100:35357
memcached_servers = 192.168.10.100:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone
4. 编辑 glance-registry 配置文件 vim /etc/glance/glance-registry.conf
[database]
connection = mysql+pymysql://glance:[email protected]/glance
[keystone_authtoken]
auth_uri = http://192.168.10.100:5000
auth_url = http://192.168.10.100:35357
memcached_servers = 192.168.10.100:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone
5. 配置 haproxy 代理 glance vim /etc/haproxy/haproxy.cfg
listen glance-api
bind 192.168.10.100:9292
mode tcp
log global
balance source
server glance-api1 192.168.10.201:9292 check inter 5000 rise 3 fall 3
listen glance
bind 192.168.10.100:9191
mode tcp
log global
balance source
server glance1 192.168.10.201:9191 check inter 5000 rise 3 fall 3
6. 重启 haproxy
systemctl restart haproxy
7. 初始化 glance 数据库
su -s /bin/sh -c "glance-manage db_sync" glance
8. 启动 glance
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
9. 注册 glance 服务
#设置环境变量(脚本内容在上面)
source admin-ocata.sh
#创建glance服务
openstack service create --name glance --description "OpenStack Image" image
#创建共有endpoint
openstack endpoint create --region RegionOne image public http://192.168.10.100:9292
#创建私有endpoint
openstack endpoint create --region RegionOne image internal http://192.168.10.100:9292
#创建管理endp
openstack endpoint create --region RegionOne image admin http://192.168.10.100:9292
10. 验证glance服务
openstack endpoint list
11. 上传镜像,并验证
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
source admin-ocata.sh
openstack image create "cirros" --file /root/cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
#查看是否有镜像
glance image-list
openstack image list
#查看指定镜像信息
openstack image show cirros
部署 nova 控制节点
nova 是 openstack 最早的组件之一,分为控制节点和计算节点。
计算节点通过 nova-compute 服务创建虚拟机,底层通过 libvirt 调用 kvm;nova 各组件之间通过 rabbitMQ 消息队列通信,其组件和功能如下:
API:负责接收和响应外部请求;
Scheduler:负责调度虚拟机所在的物理机;
Conductor:计算节点访问数据库的中间件;
Consoleauth:用于控制台的授权认证;
Novncproxy:VNC 代理,用于显示虚拟机操作终端。
Nova-API的功能:
Nova-api 组件实现了 RESTful API 功能,负责接收和响应来自最终用户的计算 API 请求,并通过 message queue 将请求发送给其他服务组件;同时也兼容 EC2 API,可以使用 EC2 的管理工具对 nova 进行日常管理。
nova scheduler的功能:
决策虚拟机创建在哪个主机(计算节点)上,分为两个步骤:
过滤(filter):首先获取主机列表,根据过滤属性,选择符合条件的主机;
计算权值(weight):默认根据资源可用空间进行权重排序,然后选择权重大的主机。
1. 安装nova
yum install -y \
openstack-nova-api \
openstack-nova-conductor \
openstack-nova-console \
openstack-nova-novncproxy \
openstack-nova-scheduler \
openstack-nova-placement-api
2. 准备数据库
CREATE DATABASE nova_api;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova123';
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova123';
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova123';
FLUSH PRIVILEGES;
3. 创建 nova 服务,并注册
openstack service create --name nova --description "OpenStack Compute" compute
#公共endpoint
openstack endpoint create --region RegionOne compute public http://192.168.10.100:8774/v2.1
#私有endpoint
openstack endpoint create --region RegionOne compute internal http://192.168.10.100:8774/v2.1
#管理endpoint
openstack endpoint create --region RegionOne compute admin http://192.168.10.100:8774/v2.1
4. 创建 placement 用户并授权
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
5. 创建 placement API 服务并注册
openstack service create --name placement --description "Placement API" placement
#公共endpoint
openstack endpoint create --region RegionOne placement public http://192.168.10.100:8778
#私有endpoint
openstack endpoint create --region RegionOne placement internal http://192.168.10.100:8778
#管理endpoint
openstack endpoint create --region RegionOne placement admin http://192.168.10.100:8778
6. 编辑配置文件 vim /etc/nova/nova.conf
[DEFAULT]
use_neutron=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url = rabbit://openstack:[email protected]
rpc_backend=rabbit
[api]
auth_strategy=keystone
[api_database]
connection = mysql+pymysql://nova:[email protected]/nova_api
[database]
connection = mysql+pymysql://nova:[email protected]/nova
[glance]
api_servers=http://192.168.10.100:9292
[keystone_authtoken]
auth_uri = http://192.168.10.100:5000
auth_url = http://192.168.10.100:35357
memcached_servers = 192.168.10.100:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://192.168.10.100:35357/v3
username = placement
password = placement
[vnc]
enabled=true
vncserver_listen=192.168.10.201
vncserver_proxyclient_address=192.168.10.201
7. 配置 apache 允许访问 placement API vim /etc/httpd/conf.d/00-nova-placement-api.conf
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
order allow,deny
Allow from all
</IfVersion>
</Directory>
8. 重启httpd
systemctl restart httpd
9. 初始化数据库
#nova_api 数据库
su -s /bin/sh -c "nova-manage api_db sync" nova
#nova 数据库
su -s /bin/sh -c "nova-manage db sync" nova
#nova cell0 数据库
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
#nova cell1 数据库
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
10. 验证 nova cell0 和 nova cell1 是否正常注册
nova-manage cell_v2 list_cells
11. 启动 nova
systemctl enable \
openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
systemctl start \
openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
12. 查看日志有没有报错,以及rabbitMQ是否有连接
13. 验证nova控制端
nova service-list
部署 nova 计算节点
在计算节点主机部署
1. 安装nova计算节点
yum install -y openstack-nova-compute
2. 修改配置文件 vim /etc/nova/nova.conf
[DEFAULT]
use_neutron=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url = rabbit://openstack:[email protected]
[api]
auth_strategy=keystone
[glance]
api_servers=http://192.168.10.100:9292
[keystone_authtoken]
auth_uri = http://192.168.10.100:5000
auth_url = http://192.168.10.100:35357
memcached_servers = 192.168.10.100:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://192.168.10.100:35357/v3
username = placement
password = placement
[vnc]
enabled=true
vncserver_listen=192.168.10.201
vncserver_proxyclient_address=192.168.10.202
novncproxy_base_url=http://192.168.10.100:6080/vnc_auto.html
3. 确认主机是否支持硬件加速
egrep -c '(vmx|svm)' /proc/cpuinfo
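如果上述命令返回 0,说明 CPU 不支持硬件加速,可将 nova.conf 中 [libvirt] 段的 virt_type 改为 qemu(示意配置):
#vim /etc/nova/nova.conf
[libvirt]
virt_type = qemu #默认为 kvm,无硬件加速时改用 qemu 软件模拟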
4. 启动nova计算服务
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
5. 添加计算节点到cell数据库
source admin-ocata.sh
openstack hypervisor list
6. 主动发现计算节点
- 命令,手动发现
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
- 配置文件,定期自动发现
vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval=300
- 重启nova服务(重启脚本的示例见下):
bash nova-restart.sh
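文中引用的 nova-restart.sh 原文未给出内容,以下是按前文控制端已启用服务整理的示意脚本(脚本名与服务列表以实际环境为准):
#!/bin/bash
#nova-restart.sh:重启控制端 nova 相关服务
systemctl restart \
openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service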
7. 验证计算节点
nova host-list
nova service-list
nova image-list
openstack image list
#列出组件是否成功注册
openstack compute service list
#检查cells 和 placement API 是否正常工作
nova-status upgrade check
#列出keystone中的端点,验证连通性
openstack catalog list
部署网络服务 neutron
OpenStack中物理网络连接架构:
- 管理网络(management network)
- 数据网络(data network)
- 外部网络(external network)
- API网络
两种网络类型:
- Tenant network:tenant内部使用的网络
Flat:所有VMs在同一个网络中,不支持VLAN及其它网络隔离机制;
Local:所有的VMs位于本地Compute节点,且与external网络隔离;
VLAN:通过使用VLAN的IDs创建多个provider或tenant网络;
VxLAN和GRE:通过封装或隧道技术,实现多个网络间通信;
- provider network:不专属于某tenant,为各tenant提供通信承载的网络。
1. 准备数据库
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron123';
FLUSH PRIVILEGES;
2. 创建 neutron 服务,并注册
openstack service create --name neutron --description "OpenStack Networking" network
#公共
openstack endpoint create --region RegionOne network public http://192.168.10.100:9696
#私有
openstack endpoint create --region RegionOne network internal http://192.168.10.100:9696
#管理
openstack endpoint create --region RegionOne network admin http://192.168.10.100:9696
## 验证endpoint
openstack endpoint list
3. 配置haproxy负载 vim /etc/haproxy/haproxy.cfg
listen neutron
bind 192.168.10.100:9696
mode tcp
log global
balance source
server neutron-server 192.168.10.201:9696 check inter 5000 fall 3 rise 3
4. 重启 haproxy
systemctl restart haproxy
5. 安装 neutron
yum install -y \
openstack-neutron \
openstack-neutron-ml2 \
openstack-neutron-linuxbridge \
ebtables
6. 编辑 neutron 配置文件 vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:[email protected]
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
connection = mysql+pymysql://neutron:[email protected]/neutron
[keystone_authtoken]
auth_uri = http://192.168.10.100:5000
auth_url = http://192.168.10.100:35357
memcached_servers = 192.168.10.100:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[nova]
auth_url = http://192.168.10.100:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
7. 配置 ML2(Modular Layer 2)插件
ML2 插件使用 linuxbridge 机制来为实例创建 layer-2 虚拟网络基础设施。
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = internal
[securitygroup]
enable_ipset = true
8. 配置 linuxbridge 代理 vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = internal:eth0 #内部网络
[vxlan]
enable_vxlan = false
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
#驱动源码位置:/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_firewall.py
9. 配置 DHCP 代理 vim /etc/neutron/dhcp_agent.ini
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
10. 配置元数据代理 vim /etc/neutron/metadata_agent.ini
nova_metadata_ip = 192.168.10.100
metadata_proxy_shared_secret = 1234567
11. 配置 nova 调用 neutron vim /etc/nova/nova.conf
[neutron]
url = http://192.168.10.100:9696
auth_url = http://192.168.10.100:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = 1234567
12. 创建软链接
ln -sv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
13. 初始化数据库
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
14. 重启 nova API
systemctl restart openstack-nova-api.service
15. 配置 haproxy 代理 vim /etc/haproxy/haproxy.cfg
listen nova-api
bind 192.168.10.100:8775
mode tcp
log global
balance source
server nova-server 192.168.10.201:8775 check inter 5000 rise 3 fall 3
16. 启动 neutron
systemctl enable \
neutron-server.service \
neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service \
neutron-metadata-agent.service
systemctl start \
neutron-server.service \
neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service \
neutron-metadata-agent.service
17. 验证 neutron 控制端是否注册成功 此步骤要求各服务器时间必须一致
neutron agent-list
部署 neutron 计算节点
1. 安装
yum install -y openstack-neutron-linuxbridge ebtables ipset
2. 编辑配置文件 vim /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:[email protected]
[keystone_authtoken]
auth_uri = http://192.168.10.100:5000
auth_url = http://192.168.10.100:35357
memcached_servers = 192.168.10.100:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
3. 配置 linuxbridge 代理 vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = internal:eth0
[vxlan]
enable_vxlan = false
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
4. 配置 nova 使用网络 vim /etc/nova/nova.conf
[neutron]
url = http://192.168.10.100:9696
auth_url = http://192.168.10.100:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
5. 重启 nova 计算服务
systemctl restart openstack-nova-compute.service
6. 启动 neutron linuxbridge 服务
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
7. neutron 控制端验证计算节点是否注册成功
neutron agent-list
8. 验证 neutron server 进程是否正常运行
openstack extension list --network
部署控制台服务 horizon
horizon 基于 django 开发,通过 Apache 的 wsgi 模块进行 web 访问通信,Horizon 只需要更改配置文件连接到 keystone 即可。
1. 安装 horizon
yum install -y openstack-dashboard
2. 编辑配置文件 vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "192.168.10.100"
ALLOWED_HOSTS = ['*',]
#配置memcache会话保持
SESSION_ENGINE = 'django.contrib.sessions.backends.cache' #添加此行
CACHES = { #取消之前的注释
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '192.168.10.100:11211',
},
}
#配置 keystone v3 API 认证地址
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
#启用对多域的支持
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
#配置API版本
OPENSTACK_API_VERSIONS = {
## "data-processing" : 1.1,
"identity" : 3,
"image" : 2,
"volume" : 2,
## "compute" : 2,
}
#配置默认域
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
#配置web界面创建的用户默认权限
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
#单一扁平网络模式下,禁用第三层网络
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': False,
'enable_quotas': False,
'enable_ipv6': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False
}
#配置时区
TIME_ZONE = "Asia/Shanghai"
3. 重启web服务
systemctl restart httpd
4. 配置 haproxy 代理 vim /etc/haproxy/haproxy.cfg
listen horizon
bind 192.168.10.100:80
mode tcp
log global
balance source
server horizon-server 192.168.10.201:80 check inter 5000 rise 3 fall 3
5. 重启 haproxy
systemctl restart haproxy
6. 访问web界面:http://192.168.10.100/dashboard
创建虚拟机
1. 创建桥接网络
openstack network create --share --external --provider-physical-network internal --provider-network-type flat internal-net
## --share 在项目之间共享
## --external 外部网络
## 网络名称 internal 必须与以下两个配置文件中定义的 flat 网络一致:
## /etc/neutron/plugins/ml2/ml2_conf.ini(控制端专有)
## /etc/neutron/plugins/ml2/linuxbridge_agent.ini(控制端和计算节点共有)
2. 创建子网
openstack subnet create --network internal-net --allocation-pool start=192.168.10.101,end=192.168.10.150 --dns-nameserver 202.106.0.20 --gateway 192.168.10.2 --subnet-range 192.168.10.0/24 internal
3. 验证网络
openstack network list
openstack subnet list
neutron net-list
neutron subnet-list
4. 创建虚拟机类型
#测试 cirros 镜像
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
#实例名称 m1.nano
5. 实现免密登录
ssh-keygen -q -N ""
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
6. 验证 key
openstack keypair list
7. 添加默认安全组 ICMP 规则
openstack security group rule create --proto icmp default
8. 添加 SSH(22端口)访问规则
openstack security group rule create --proto tcp --dst-port 22 default
9. 最终验证
#列出虚拟机类型
openstack flavor list
#列出可用镜像
openstack image list
#列出可用网络
openstack network list
#列出可用安全组
openstack security group list
以上验证必须全部可用,才可以启动虚拟机!
10. 启动虚拟机
openstack server create --flavor m1.nano --image cirros --nic net-id=xxxxxxxxxxxx --security-group default --key-name mykey test-vm
#net-id 通过 openstack network list 查看
#test-vm 虚拟机名称
11. 查看虚拟机
openstack server list
12. 查看虚拟机访问地址
openstack console url show test-vm
快速添加计算节点
准备工作:yum仓库、防火墙、selinux、主机名、时间同步 等配置完毕。
1. 安装服务
yum install -y net-tools vim lrzsz tree screen lsof tcpdump
yum install -y centos-release-openstack-ocata.noarch
yum install -y https://rdoproject.org/repos/rdo-release.rpm
yum install -y \
python-openstackclient \
openstack-selinux \
openstack-neutron-linuxbridge \
ebtables \
ipset
2. 拷贝配置文件至新主机
/etc/neutron/neutron.conf
/etc/neutron/plugins/ml2/linuxbridge_agent.ini
/etc/nova/nova.conf
## 修改vncserver_proxyclient_address=192.168.10.203为新主机IP
3. 启动服务
systemctl enable openstack-nova-compute.service
systemctl start openstack-nova-compute.service
systemctl restart neutron-linuxbridge-agent libvirtd.service
4. 控制端验证 nova、neutron 注册
nova service-list
neutron agent-list
实现内外网结构
1. 控制节点配置
- /etc/neutron/plugins/ml2/linuxbridge_agent.ini 当前全部配置:
physical_interface_mappings = internal:eth0,external:eth1
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
enable_vxlan = false
- /etc/neutron/plugins/ml2/ml2_conf.ini 当前全部配置:
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
flat_networks = internal,external
enable_ipset = true
- 重启 neutron 服务
systemctl restart neutron-linuxbridge-agent
systemctl restart neutron-server
2. 计算节点配置
- /etc/neutron/plugins/ml2/linuxbridge_agent.ini 当前全部配置:
physical_interface_mappings = internal:eth0,external:eth1
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
enable_vxlan = false
- 重启 neutron 服务
systemctl restart neutron-linuxbridge-agent
3. 控制节点创建网络
neutron net-create --shared --provider:physical_network external --provider:network_type flat external-net
neutron subnet-create --name external-subnet --allocation-pool start=10.10.10.100,end=10.10.10.200 --dns-nameserver 114.114.114.114 external-net 10.10.10.0/24
4. 验证子网创建
neutron net-list
部署块存储 cinder
Openstack 从 Folsom 版本开始使用 Cinder 替代原来的 Nova-Volume 服务,为 Openstack 提供块存储服务。
Cinder 接口提供了一些标准功能,允许创建和附加块设备到虚拟机(如:创建卷、附加卷、删除卷等),还有更多高级的功能,支持扩展容量的能力,快照和创建虚拟机镜像克隆,主要涉及到的组件如下:
cinder-api:接受 API 请求,并将其路由到 "cinder-volume" 执行,即请求 cinder 要先请求此 API;
cinder-volume:与块存储服务和 cinder-scheduler 这样的进程直接交互,也可以与这些进程通过消息队列进行交互。cinder-volume 服务响应送到块存储服务的读写请求来维持状态;
cinder-scheduler:守护进程,选择最优存储提供节点来创建卷,与 "nova-scheduler" 组件类似;
cinder-backup:守护进程,将卷备份到各种类型的备份存储提供者;
消息队列:在块存储的进程之间路由信息。
1. 准备数据库
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder123';
FLUSH PRIVILEGES;
2. 控制端 cinder 服务注册
- 创建 cinder 用户并授权
source admin-ocata.sh
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
- 创建 cinder 服务
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
- 注册v2版本 endpoint
#公共
openstack endpoint create --region RegionOne volumev2 public http://192.168.10.100:8776/v2/%\(project_id\)s
#私有
openstack endpoint create --region RegionOne volumev2 internal http://192.168.10.100:8776/v2/%\(project_id\)s
#管理
openstack endpoint create --region RegionOne volumev2 admin http://192.168.10.100:8776/v2/%\(project_id\)s
- 注册v3版本 endpoint
#创建v3版本服务
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
#公共
openstack endpoint create --region RegionOne volumev3 public http://192.168.10.100:8776/v3/%\(project_id\)s
#私有
openstack endpoint create --region RegionOne volumev3 internal http://192.168.10.100:8776/v3/%\(project_id\)s
#管理
openstack endpoint create --region RegionOne volumev3 admin http://192.168.10.100:8776/v3/%\(project_id\)s
3. 配置 haproxy 代理 vim /etc/haproxy/haproxy.cfg
listen cinder
bind 192.168.10.100:8776
mode tcp
log global
balance source
server cinder-server 192.168.10.201:8776 check inter 5000 rise 3 fall 3
4. 控制端安装 cinder 组件
yum install -y openstack-cinder
5. 修改配置文件 vim /etc/cinder/cinder.conf
[DEFAULT]
my_ip = 192.168.10.201
auth_strategy = keystone
transport_url = rabbit://openstack:[email protected]
[database]
connection = mysql+pymysql://cinder:[email protected]/cinder
[keystone_authtoken]
auth_uri = http://192.168.10.100:5000
auth_url = http://192.168.10.100:35357
memcached_servers = 192.168.10.100:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
6. 创建库、表
su -s /bin/sh -c "cinder-manage db sync" cinder
7. 控制端重启 nova-api 服务
systemctl restart openstack-nova-api.service
8. 启动 cinder
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
9. 配置计算节点使用 cinder 存储
- 编辑配置文件
vim /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
- 重启 nova 服务
systemctl restart libvirtd.service openstack-nova-compute.service
- 验证 cinder 控制端
openstack volume service list
10. 配置存储节点
这里以存储节点使用lvm为例。
10.1 安装组件
yum install -y openstack-cinder targetcli python-keystone
10.2 编辑配置文件 vim /etc/cinder/cinder.conf
[DEFAULT]
my_ip = 192.168.10.205
glance_api_servers = http://192.168.10.100:9292
auth_strategy = keystone
enabled_backends = lvm
transport_url = rabbit://openstack:[email protected]
[database]
connection = mysql+pymysql://cinder:[email protected]/cinder
[keystone_authtoken]
auth_uri = http://192.168.10.100:5000
auth_url = http://192.168.10.100:35357
memcached_servers = 192.168.10.100:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name=Openstack-lvm
10.3 启动 cinder 服务
systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
10.4 控制端验证 cinder 注册
openstack volume service list
11. 使用 NFS 作为 Openstack 后端存储
11.1 安装 nfs 服务器
yum install nfs-utils rpcbind
mkdir /nfsdata/
echo '/nfsdata *(rw,no_root_squash)' >> /etc/exports
systemctl start nfs
systemctl enable nfs
11.2 编辑 cinder 主配置文件 vim /etc/cinder/cinder.conf
enabled_backends = nfs
[nfs]
volume_backend_name = openstack-NFS #定义名称,后面做关联的时候使用
volume_driver = cinder.volume.drivers.nfs.NfsDriver #驱动
nfs_shares_config = /etc/cinder/nfs_shares #定义 NFS 挂载的配置文件路径
nfs_mount_point_base = $state_path/mnt #定义 NFS 挂载点
11.3 创建 nfs 挂载配置文件
echo '192.168.10.205:/nfsdata' > /etc/cinder/nfs_shares
chown root.cinder /etc/cinder/nfs_shares
systemctl restart openstack-cinder-volume.service
11.4 验证 nfs
cinder service-list
11.5 创建磁盘类型并关联
否则在 Openstack 管理界面创建磁盘的时候,无法选择是使用 NFS 还是其他类型的存储。
#创建类型
cinder type-create lvm
cinder type-create nfs
#关联
source admin-ocata.sh
cinder type-key lvm set volume_backend_name=Openstack-lvm
cinder type-key nfs set volume_backend_name=openstack-NFS
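关联完成后,可用如下方式验证按类型创建卷是否落到对应后端(示意,卷名仅为举例):
openstack volume create --type nfs --size 1 test-nfs-vol
openstack volume create --type lvm --size 1 test-lvm-vol
openstack volume list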
实现 VPC 自定义网络
专有网络 VPC(Virtual Private Cloud)是一个互相隔离的网络环境,每个专有网络之间逻辑上彻底隔离,可以自行选择 IP 地址范围、划分网段、配置路由表和网关等,从而实现安全而轻松的资源访问和应用程序访问。
1. 安装相关软件
yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
2. 编辑配置文件 vim /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
transport_url = rabbit://openstack:[email protected]
[database]
connection = mysql+pymysql://neutron:[email protected]/neutron
[keystone_authtoken]
auth_uri = http://192.168.10.100:5000
auth_url = http://192.168.10.100:35357
memcached_servers = 192.168.10.100:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[nova]
auth_url = http://192.168.10.100:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
3. 配置 ML2 插件 vim /etc/neutron/plugins/ml2/ml2_conf.ini
[Default]
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = internal
[ml2_type_vxlan]
vni_ranges = 1:1000 #vxlan范围
[securitygroup]
enable_ipset = true
4. 配置 linuxbridge 代理 vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = internal:eth0,external:eth1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = true
local_ip = 192.168.10.201
l2_population = true
5. 配置三层路由代理 vim /etc/neutron/l3_agent.ini
interface_driver = linuxbridge
6. 配置DHCP代理 vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
7. 启动三层网络转发服务
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service
8. 配置元数据代理 vim /etc/neutron/metadata_agent.ini
nova_metadata_ip = 192.168.10.100
metadata_proxy_shared_secret = 123456
9. 配置 nova 使用网络 vim /etc/nova/nova.conf
[neutron]
url = http://192.168.10.100:9696
auth_url = http://192.168.10.100:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456
10. 重启控制端服务
systemctl enable \
openstack-nova-api.service \
neutron-server.service \
neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service \
neutron-metadata-agent.service \
neutron-l3-agent.service
reboot
11. 验证控制端
source admin-ocata.sh
openstack network agent list
12. 安装计算节点
yum install -y openstack-neutron-linuxbridge ebtables ipset
13. 编辑配置文件 vim /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:[email protected]
[keystone_authtoken]
auth_uri = http://192.168.10.100:5000
auth_url = http://192.168.10.100:35357
memcached_servers = 192.168.10.100:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
14. 配置 nova 使用 neutron vim /etc/nova/nova.conf
[neutron]
url = http://192.168.10.100:9696
auth_url = http://192.168.10.100:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
15. 配置 linuxbridge 代理 vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = internal:eth0,external:eth1
[vxlan]
enable_vxlan = true
local_ip = 192.168.10.202
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
16. 复制配置文件至其它计算节点
将 neutron.conf、nova.conf、linuxbridge_agent.ini 复制到其它计算节点。
- 在目标计算节点修改 local_ip 为本机IP:
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
local_ip = 192.168.10.206
- 在目标计算节点修改 vnc 代理监听地址:
vim /etc/nova/nova.conf
vncserver_proxyclient_address=192.168.10.206
17. 各计算节点重启服务
systemctl enable openstack-nova-compute.service
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl restart neutron-linuxbridge-agent.service
18. 验证
source admin-ocata.sh
## 验证 neutron 进程
openstack extension list --network
## 验证 neutron-agent
neutron agent-list
19. 创建自服务网络
- 查看当前网络:
openstack network list
- 创建自服务网络:
source admin-ocata.sh
openstack network create selfnetwork
- 创建自定义子网:
openstack subnet create --network selfnetwork --dns-nameserver 8.8.8.8 --gateway 172.16.1.1 --subnet-range 172.16.1.0/24 selfnetwork-net
- 创建路由器:
openstack router create selfrouter
- 添加内网子网到路由
neutron router-interface-add selfrouter selfnetwork-net
- 设置路由器网关
neutron router-gateway-set selfrouter internal-net
20. 配置 horizon 支持三层网络 vim /etc/openstack-dashboard/local_settings
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': True,
'enable_quotas': True,
'enable_ipv6': True,
'enable_distributed_router': True,
'enable_ha_router': True,
'enable_lb': True,
'enable_firewall': True,
'enable_vpn': True,
'enable_fip_topology_check': True,
...
21. 重启 httpd 服务
systemctl restart httpd
22. 验证
- 验证子网:
openstack network list
- 验证网络命名空间: 控制端有一个qrouter命名空间,每个节点有一个qdhcp命名空间
ip netns
- 列出路由器端口:
neutron router-port-list selfrouter
Openstack 镜像制作
做镜像就是在宿主机最小化安装系统并配置优化,之后将虚拟机关机,然后将虚拟机磁盘文件上传至 glance 即可。
1. 网络环境准备
- 安装网卡桥接工具:
yum install bridge-utils -y
- bond0 配置:
vim /etc/sysconfig/network-scripts/ifcfg-bond0
BOOTPROTO=static
NAME=bond0
DEVICE=bond0
ONBOOT=yes
BONDING_MASTER=yes
BONDING_OPTS="mode=1 miimon=100" #指定绑定类型为1及链路状态监测间隔时间
BRIDGE=br0 #桥接到br0
- br0 配置:
vim /etc/sysconfig/network-scripts/ifcfg-br0
TYPE=Bridge
BOOTPROTO=static
NAME=br0
DEVICE=br0
ONBOOT=yes
IPADDR=192.168.10.50
NETMASK=255.255.255.0
GATEWAY=192.168.10.2
DNS1=202.106.0.20
- bond1 配置:
vim /etc/sysconfig/network-scripts/ifcfg-bond1
BOOTPROTO=static
NAME=bond1
DEVICE=bond1
ONBOOT=yes
BONDING_MASTER=yes
BONDING_OPTS="mode=1 miimon=100"
BRIDGE=br1
- br1 配置:
vim /etc/sysconfig/network-scripts/ifcfg-br1
TYPE=Bridge
BOOTPROTO=static
NAME=br1
DEVICE=br1
ONBOOT=yes
IPADDR=192.168.20.50
NETMASK=255.255.255.0
2. 安装图形界面支持
yum groupinstall "GNOME Desktop" -y
3. 重启系统后,安装基础环境
yum install -y qemu-kvm qemu-kvm-tools libvirt virt-manager virt-install
4. 创建磁盘
qemu-img create -f qcow2 /var/lib/libvirt/images/CentOS-7-x86_64.qcow2 10G #使用 qcow2 格式,随使用量动态增长
5. 下载 ISO 镜像并安装
virt-install --virt-type kvm --name CentOS7-x86_64 --ram 1024 --cdrom=/opt/CentOS-7-x86_64-Minimal-1511.iso --disk path=/var/lib/libvirt/images/CentOS-7-x86_64.qcow2 --network bridge=br0 --graphics vnc,listen=0.0.0.0 --noautoconsole
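安装命令执行后,可先确认虚拟机状态及 VNC 端口,再用 VNC 客户端连接(示意):
virsh list --all
virsh vncdisplay CentOS7-x86_64 #如显示 :0 则对应 5900 端口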
6. 使用vnc连接虚拟机,并完成安装
- 安装完成后,给虚拟机新添加一块网卡,最终实现镜像虚拟机有两块网卡。
virt-manager #使用虚拟机管理器添加网卡
- 更改yum源:
yum install -y wget
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
- 安装常用命令:
yum install -y net-tools vim lrzsz tree screen lsof ntpdate telnet acpid
- 关闭防火墙及selinux:
systemctl disable NetworkManager
systemctl disable firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
- 更改主机名(略)
- 免秘钥登录(略)
- 修改网卡的 mtu ,否则 ssh 无法连接
vim /etc/rc.d/rc.local
ifconfig eth0 mtu 1450
ifconfig eth1 mtu 1450
7. 关机,复制镜像至控制端
cd /var/lib/libvirt/images/
scp CentOS-7-x86_64.qcow2 192.168.10.201:/root/
8. 上传镜像至 glance
source admin-ocata.sh
openstack image create "CentOS-7-x86_64-template" --file /root/CentOS-7-x86_64.qcow2 --disk-format qcow2 --container-format bare --public
9. 验证镜像
openstack image list
制作 WIN2008 R2 镜像
- 创建系统磁盘:
qemu-img create -f qcow2 /os/images/Windows-2008-r2-x86_64.qcow2 20G
- 安装
virt-install \
--virt-type kvm \
--name Windows-2008-R2-x86_64 \
--ram 1024 \
--cdrom=/os/iso/windows_server_2008_r2.iso \
--disk path=/os/images/Windows-2008-r2-x86_64.qcow2 \
--network bridge=br0 \
--graphics vnc,listen=0.0.0.0 \
--noautoconsole
- 安装并设置完成后,使用系统自带工具,重新封装虚拟机
c:\windows\system32\sysprep\sysprep.exe
- 封装完成后,将镜像拷贝至控制端:
scp /os/images/Windows-2008-r2-x86_64.qcow2 192.168.10.201:/root/
基于官方 GenericCloud 7.2.1511 镜像制作
- 下载官方镜像:
wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1511.qcow2.xz
xz -d CentOS-7-x86_64-GenericCloud-1511.qcow2.xz
mv CentOS-7-x86_64-GenericCloud-1511.qcow2 /os/images/
- 安装系统:
virt-install \
--virt-type kvm \
--name CentOS-GenericCloud-7.2-x86_64 \
--ram 1024 \
--cdrom=/os/iso/CentOS-7-x86_64-Minimal-1511.iso \
--disk path=/os/images/CentOS-7-x86_64-GenericCloud-1511.qcow2 \
--network bridge=br0 \
--graphics vnc,listen=0.0.0.0 \
--noautoconsole
- 重设密码:
yum install libguestfs-tools
virt-customize -a /os/images/CentOS-7-x86_64-GenericCloud-1511.qcow2 --root-password password:123456
- 更改yum源:
yum install -y wget
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
- 安装常用命令:
yum install -y net-tools vim lrzsz tree screen lsof ntpdate telnet tcpdump gcc gcc-c++ pcre pcre-devel zip zip-devel unzip openssl openssl-devel
- 关闭防火墙及selinux:
systemctl disable NetworkManager
systemctl disable firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
Openstack 企业应用案例
1. quota 相关配置
- 查看当前配额
neutron quota-show admin
- 查看 openstack 配置文件是否开启配额限制
- 可在 web 端修改配置,或者修改配置文件
- 控制节点:
#vim /etc/neutron/neutron.conf
[quotas]
quota_network = 10
quota_subnet = 10
quota_port = 5000
quota_driver = neutron.db.quota.driver.DbQuotaDriver
quota_router = 10
quota_floatingip = 1000
quota_security_group = 10
quota_security_group_rule = 100
#重启 neutron 服务
systemctl restart \
openstack-nova-api.service \
neutron-server.service \
neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service \
neutron-metadata-agent.service
- 计算节点:
#vim /etc/neutron/neutron.conf
[quotas]
quota_network = 10
quota_subnet = 10
quota_port = 5000
quota_driver = neutron.db.quota.driver.DbQuotaDriver
quota_router = 10
quota_floatingip = 1000
quota_security_group = 10
quota_security_group_rule = 100
#重启 neutron 服务
systemctl restart neutron-linuxbridge-agent.service
- 验证当前配额
neutron quota-show service
2. 修改实例IP
- 找出实例ID:
openstack port list | grep 192.168.10.103
- 在数据库中查找实例ID的条目:
USE neutron;
#查看网络端口ID
SELECT * FROM ports WHERE device_id="xxxxxxxxxxx";
#验证虚拟机IP地址和ID对应关系
SELECT * FROM ipallocations WHERE port_id="xxxxxxxxxxx";
- 修改数据库中的 ip_address 字段:
UPDATE ipallocations SET ip_address="192.168.10.104" WHERE port_id="xxxxxxxxxxx";
- 生效:
FLUSH PRIVILEGES;
- 在实例中修改IP:
vim /etc/sysconfig/network-scripts/ifcfg-eth0
IPADDR=192.168.10.104
3. keepalived+haproxy VIP 配置
两个实例的IP为:192.168.10.105、192.168.10.111,VIP为:192.168.10.160
- 将 VIP 关联至安全组:
neutron port-create --fixed-ip ip_address=192.168.10.160 --security-group <安全组ID或名称> <网络ID或名称>
- 列出各实例的portID:
openstack port list | grep 192.168.10.105
openstack port list | grep 192.168.10.111
- 将VIP关联到实例:
neutron port-update <105的portID> --allowed_address_pairs list=true type=dict ip_address=192.168.10.160
neutron port-update <111的portID> --allowed_address_pairs list=true type=dict ip_address=192.168.10.160
- keepalived 使用 VRRP 协议,需要在 openstack 安全组策略中单独进行开放,入口、出口规则放开 IP 协议号 112 即可。
- 配置内核参数
vim /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
启用:
sysctl -p
keepalived MASTER 配置
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 1
priority 100
advert_int 1
unicast_src_ip 192.168.10.105
unicast_peer {
192.168.10.111
}
authentication {
auth_type PASS
auth_pass 3344512
}
virtual_ipaddress {
192.168.10.160/24 dev eth0 label eth0:0
}
}
keepalived BACKUP 配置
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 1
priority 50
advert_int 1
unicast_src_ip 192.168.10.111
unicast_peer {
192.168.10.105
}
authentication {
auth_type PASS
auth_pass 3344512
}
virtual_ipaddress {
192.168.10.160/24 dev eth0 label eth0:0
}
}
haproxy 配置
global
maxconn 100000
uid 99
gid 99
daemon
nbproc 1
log 127.0.0.1 local0 info
defaults
option redispatch
option abortonclose
option http-keep-alive
option forwardfor
maxconn 100000
mode http
#=============
frontend web
bind 192.168.10.160:80
mode http
default_backend web_http_nodes
backend web_http_nodes
mode http
balance roundrobin
server web1 192.168.10.105:80 check inter 2000 fall 3 rise 5
server web2 192.168.10.111:80 check inter 2000 fall 3 rise 5
Openstack 相关优化
1. 配置虚拟机自启动
在控制端和计算节点的 /etc/nova/nova.conf 中进行如下配置:
resume_guests_state_on_host_boot=true
2. 配置CPU超限使用
默认为16,即允许开启16倍于物理CPU的虚拟CPU个数。
cpu_allocation_ratio=16
3. 配置内存超限使用
ram_allocation_ratio=1.5 #允许1.5倍于物理内存的虚拟内存
4. 配置磁盘超限使用
磁盘最好不要超限,否则可能导致数据丢失!
disk_allocation_ratio=1.0
5. 配置预留磁盘空间
reserved_host_disk_mb=20480
6. 配置预留内存
reserved_host_memory_mb=4096
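第 2~6 项参数均位于 nova.conf 的 [DEFAULT] 段,汇总示意如下(数值仅为示例):
#vim /etc/nova/nova.conf
[DEFAULT]
cpu_allocation_ratio=16
ram_allocation_ratio=1.5
disk_allocation_ratio=1.0
reserved_host_disk_mb=20480
reserved_host_memory_mb=4096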
7. 配置虚拟机类型动态调整
在有些时候,创建完的虚拟机,因为业务需要变更内存、cpu、磁盘,因此需要配置允许后期类型调整。
- 修改 nova.conf 配置:
allow_resize_to_same_host=true
baremetal_enabled_filters=RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ExactRamFilter,ExactDiskFilter,ExactCoreFilter
- 在各计算节点添加nova用户,并配置SSH免密钥认证,确保各个计算节点可以互相登录。
- 在web中调整实例大小。磁盘只能增大,CPU和MEMORY可以增加或减小。
Openstack 快速部署工具
- fuel
- devstack