Debian 12 bookworm

OpenStack Zed : Use Cinder Storage (NFS)    2023/06/29

 
It's possible to attach virtual storage provided by Cinder when an Instance needs more disks.
This example configures virtual storage with an NFS backend.
------------+--------------------------+--------------------------+------------
            |                          |                          |
        eth0|10.0.0.30             eth0|10.0.0.50             eth0|10.0.0.51
+-----------+-----------+  +-----------+-----------+  +-----------+-----------+
|   [ dlp.srv.world ]   |  | [ network.srv.world ] |  |  [ node01.srv.world ] |
|     (Control Node)    |  |     (Network Node)    |  |     (Compute Node)    |
|                       |  |                       |  |                       |
|  MariaDB    RabbitMQ  |  |  Neutron L2/L3 Agent  |  |        Libvirt        |
|  Memcached  Nginx     |  |   Neutron Metadata    |  |      Nova Compute     |
|  Keystone   httpd     |  |     Open vSwitch      |  |    Neutron L2 Agent   |
|  Glance     Nova API  |  |     iSCSI Target      |  |      Open vSwitch     |
|  Neutron Server       |  |     Cinder Volume     |  |                       |
|  Neutron Metadata     |  |                       |  |                       |
|  Cinder API           |  |                       |  |                       |
+-----------------------+  +-----------------------+  +-----------------------+

-----------+-------------------------------------------------------------------
       eth0|10.0.0.35
+----------+-----------+
|   [ nfs.srv.world ]  |
|       NFS Server     |
+----------------------+

[1]
An NFS server must be running on your local network; refer to here for how to set one up.
In this example, the [/var/lib/nfs-share] directory on [nfs.srv.world] is configured as the shared directory.
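If the NFS export itself still needs to be set up, a minimal sketch on [nfs.srv.world] might look like the following; the package, export path, and the 10.0.0.0/24 client network are assumptions based on this example, and the export options may need adjusting for your environment.
root@nfs:~#
apt -y install nfs-kernel-server
root@nfs:~#
mkdir -p /var/lib/nfs-share
root@nfs:~#
vi /etc/exports
# create new : export the shared directory to the OpenStack nodes

/var/lib/nfs-share 10.0.0.0/24(rw,no_root_squash,no_subtree_check)
root@nfs:~#
exportfs -ra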
[2] Configure Storage Node.
root@network:~#
apt -y install nfs-common
root@network:~#
vi /etc/idmapd.conf
# line 5 : uncomment and change to your domain name

Domain = srv.world
root@network:~#
vi /etc/cinder/cinder.conf
# add the value to [enabled_backends] param

enabled_backends = nfs
# add to the end

[nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
nfs_mount_point_base = $state_path/mnt
root@network:~#
vi /etc/cinder/nfs_shares
# create new : specify the NFS shared directory
# if you use multiple shares, write one per line

nfs.srv.world:/var/lib/nfs-share
root@network:~#
chmod 640 /etc/cinder/nfs_shares

root@network:~#
chgrp cinder /etc/cinder/nfs_shares

root@network:~#
systemctl restart cinder-volume

root@network:~#
chown -R cinder:cinder /var/lib/cinder/mnt
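To verify the backend after restarting [cinder-volume], the checks below are a rough sketch; the second command assumes admin credentials are loaded (for example via ~/keystonerc on the Control Node).
root@network:~#
df -hT | grep cinder
# the share from [nfs.srv.world] should appear mounted under [/var/lib/cinder/mnt] once the driver has initialized

debian@dlp ~(keystone)$
openstack volume service list
# [cinder-volume] for the [nfs] backend should be listed with State [up]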
[3] Change Nova settings on Compute Node to mount NFS.
root@node01:~#
apt -y install nfs-common
root@node01:~#
vi /etc/idmapd.conf
# line 5 : uncomment and change to your domain name

Domain = srv.world
root@node01:~#
vi /etc/nova/nova.conf
# add lines under [keystone_authtoken] section
[keystone_authtoken]
.....
.....
service_token_roles = service
service_token_roles_required = true

# add to the end
[service_user]
send_service_user_token = true
auth_url = https://dlp.srv.world:5000
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = servicepassword
insecure = false

[cinder]
os_region_name = RegionOne

root@node01:~#
systemctl restart nova-compute
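A quick sanity check after the restart, run from the Control Node and assuming admin credentials are loaded in the current shell:
debian@dlp ~(keystone)$
openstack compute service list
# [nova-compute] on [node01] should be listed with State [up]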
[4] Log in as a common user who wants to add volumes to their own instances.
For example, create a 10GB virtual disk [disk01]. This works on any node. (the example below is on the Control Node)
# set environment variable

debian@dlp ~(keystone)$
echo "export OS_VOLUME_API_VERSION=3" >> ~/keystonerc

debian@dlp ~(keystone)$
source ~/keystonerc
debian@dlp ~(keystone)$
openstack volume create --size 10 disk01

+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2023-06-29T04:28:25.400067           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 7602c918-c559-4176-af93-dfa6789b5f82 |
| multiattach         | False                                |
| name                | disk01                               |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 10                                   |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | __DEFAULT__                          |
| updated_at          | None                                 |
| user_id             | de51d5f0ee2c485885877d21f5b424e0     |
+---------------------+--------------------------------------+

debian@dlp ~(keystone)$
openstack volume list

+--------------------------------------+--------+-----------+------+-------------+
| ID                                   | Name   | Status    | Size | Attached to |
+--------------------------------------+--------+-----------+------+-------------+
| 7602c918-c559-4176-af93-dfa6789b5f82 | disk01 | available |   10 |             |
+--------------------------------------+--------+-----------+------+-------------+
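With the NFS driver, each volume is stored as a file named [volume-<volume ID>] in the shared directory, so the new disk can also be checked on the NFS server; the path below assumes the share used in this example.
root@nfs:~#
ls -lh /var/lib/nfs-share
# a (typically sparse) file like [volume-7602c918-c559-4176-af93-dfa6789b5f82] should exist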
[5] Attach the virtual disk to an Instance.
In the example below, the disk is attached as [/dev/vdb]. It can be used as storage by creating a file system on it.
debian@dlp ~(keystone)$
openstack server list

+--------------------------------------+-----------+---------+------------------------------------+----------+-----------+
| ID                                   | Name      | Status  | Networks                           | Image    | Flavor    |
+--------------------------------------+-----------+---------+------------------------------------+----------+-----------+
| c3a4a792-a7ac-41bd-8c93-7fd162016f22 | Debian-12 | SHUTOFF | private=10.0.0.241, 192.168.100.66 | Debian12 | m1.medium |
+--------------------------------------+-----------+---------+------------------------------------+----------+-----------+

debian@dlp ~(keystone)$
openstack server add volume Debian-12 disk01

+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| ID                    | 7602c918-c559-4176-af93-dfa6789b5f82 |
| Server ID             | c3a4a792-a7ac-41bd-8c93-7fd162016f22 |
| Volume ID             | 7602c918-c559-4176-af93-dfa6789b5f82 |
| Device                | /dev/vdb                             |
| Tag                   | None                                 |
| Delete On Termination | False                                |
+-----------------------+--------------------------------------+

# the status of the attached disk turns to [in-use] as follows

debian@dlp ~(keystone)$
openstack volume list

+--------------------------------------+--------+--------+------+------------------------------------+
| ID                                   | Name   | Status | Size | Attached to                        |
+--------------------------------------+--------+--------+------+------------------------------------+
| 7602c918-c559-4176-af93-dfa6789b5f82 | disk01 | in-use |   10 | Attached to Debian-12 on /dev/vdb  |
+--------------------------------------+--------+--------+------+------------------------------------+
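Inside the instance, the attached volume can be used like any other block device; the sketch below assumes a Debian guest and the device name [/dev/vdb] shown above (the prompt hostname is illustrative). Unmount the file system before detaching the disk.
root@debian-12:~#
mkfs.ext4 /dev/vdb
root@debian-12:~#
mkdir -p /mnt/disk01
root@debian-12:~#
mount /dev/vdb /mnt/disk01
root@debian-12:~#
df -hT /mnt/disk01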

# detach the disk

debian@dlp ~(keystone)$
openstack server remove volume Debian-12 disk01
