DreamHost’s Caleb Boylan shows how to set up a multi-node Devstack with Manila and Ceph as a storage back end.

Manila is an OpenStack service that provides filesystem shares as a service. Manila allows you to create filesystems such as CIFS or NFS without having to worry about the block device backing them. Manila also makes it possible for multiple clients to access the same data. Unlike OpenStack Cinder volumes, which can only be attached to one virtual machine at a time, Manila creates shareable volumes using multiple filesystem standards. The standard Manila driver supports NFS and CIFS, and it can also create shares with GlusterFS, HDFS, CephFS and MapRFS. Manila doesn’t replace Cinder, but is another piece that makes your cloud more useful.

To get started with any OpenStack service, the best path is to deploy a development environment using Devstack. Follow the instructions below to set up a multi-node Devstack with Manila and Ceph as a storage back end.

Prep work

First, you need two servers to run Devstack on; each must have at least 8GB of RAM. You can set up Manila on a single server if you want, but you will be unable to test live migration. It’s important to note that Devstack installs packages system-wide, so it’s best to run Devstack on a server whose only purpose is to run Devstack.

These servers need to talk to each other, so open all ports between the two of them. Add each node’s IP address and name to each node’s /etc/hosts file so that each node can resolve its own and the other’s IP address by name.
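For example, with two nodes named devstack1 and devstack2 (the IP addresses below are placeholders; use your servers’ real addresses), each node’s /etc/hosts would contain:

```
10.0.0.11  devstack1
10.0.0.12  devstack2
```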

Finally, this has been tested on Ubuntu Xenial; other Linux distributions may or may not work.

Getting started

The first step is to create a user to run Devstack as and download Devstack. Do this by running the following on both nodes:

$ sudo useradd -s /bin/bash -d /opt/stack -m stack
$ echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
$ sudo su - stack
$ git clone https://git.openstack.org/openstack-dev/devstack

Configuring the controller

After cloning the Devstack repository you must configure it. First, configure the controller: the node that will run all of the OpenStack control services, such as the database and message queue. Use the sample local.conf below as the content of /opt/stack/devstack/local.conf:

[[local|localrc]]
########
# MISC #
########
FLAT_INTERFACE=INTERFACE_NAME
FIXED_RANGE=10.4.128.0/20
FIXED_NETWORK_SIZE=4096
FLOATING_RANGE=192.168.42.128/25
ADMIN_PASSWORD=secretpassword
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
#SERVICE_TOKEN = <this is generated after running stack.sh>

# Reclone each time
#RECLONE=yes

# Enable Logging
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs
#################
# PRE-REQUISITE #
#################
ENABLED_SERVICES=rabbit,mysql,key
ENABLED_SERVICES+=,horizon,n-novnc

##########
# Manila #
##########
enable_plugin manila https://github.com/openstack/manila
enable_plugin manila-ui https://github.com/openstack/manila-ui

########
# CEPH #
########
enable_plugin devstack-plugin-ceph https://github.com/openstack/devstack-plugin-ceph

# DevStack will create a loop-back disk formatted as XFS to store the
# Ceph data.
CEPH_LOOPBACK_DISK_SIZE=30G

# Ceph cluster fsid
CEPH_FSID=$(uuidgen)

# Glance pool, pgs and user
GLANCE_CEPH_USER=glance
GLANCE_CEPH_POOL=images
GLANCE_CEPH_POOL_PG=8
GLANCE_CEPH_POOL_PGP=8

# Nova pool and pgs
NOVA_CEPH_POOL=vms
NOVA_CEPH_POOL_PG=8
NOVA_CEPH_POOL_PGP=8

# Cinder pool, pgs and user
CINDER_CEPH_POOL=volumes
CINDER_CEPH_POOL_PG=8
CINDER_CEPH_POOL_PGP=8
CINDER_CEPH_USER=cinder
CINDER_CEPH_UUID=$(uuidgen)

# Cinder backup pool, pgs and user
CINDER_BAK_CEPH_POOL=backup
CINDER_BAK_CEPH_POOL_PG=8
CINDER_BAK_CEPH_POOL_PGP=8
CINDER_BAK_CEPH_USER=cinder-bak

# How many replicas are to be configured for your Ceph cluster
CEPH_REPLICAS=${CEPH_REPLICAS:-1}

# Connect DevStack to an existing Ceph cluster
REMOTE_CEPH=False
REMOTE_CEPH_ADMIN_KEY_PATH=/etc/ceph/ceph.client.admin.keyring

###########################
## GLANCE – IMAGE SERVICE #
###########################
ENABLED_SERVICES+=,g-api,g-reg

##################################
## CINDER – BLOCK DEVICE SERVICE #
##################################
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak
CINDER_DRIVER=ceph
CINDER_ENABLED_BACKENDS=ceph

###########################
## NOVA – COMPUTE SERVICE #
###########################
ENABLED_SERVICES+=,n-api,n-api-meta,n-cauth,n-crt,n-cpu,n-cond,n-sch,placement-api
LIBVIRT_TYPE=qemu

#NEUTRON
ENABLED_SERVICES+=,neutron,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-lbaas

Replace “INTERFACE_NAME” with the network interface that connects your server to the internet. Once you have done this, you are ready to run `./stack.sh` as the stack user to start your controller. This will take a while to finish, usually upwards of 30 minutes.

Next, you need to configure the Manila services to use password authentication for the service VMs instead of SSH keys. First, open the /etc/manila/manila.conf file in an editor and comment out these lines under the [generic1] section:

#path_to_private_key = /opt/stack/.ssh/id_rsa
#path_to_public_key = /opt/stack/.ssh/id_rsa.pub

and add the following under the [generic1] section:

service_instance_password = manila
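Taken together, the relevant part of the [generic1] section should now look like this (all other options generated by Devstack are left untouched):

```
[generic1]
# SSH-key options commented out:
#path_to_private_key = /opt/stack/.ssh/id_rsa
#path_to_public_key = /opt/stack/.ssh/id_rsa.pub
# Password used to log into the Manila service VMs:
service_instance_password = manila
```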

Then restart the Manila devstack services:

$ sudo systemctl restart devstack@m-\*

If you intend to run a single-node Devstack, you are done at this point; skip to the end of this article. If not, continue by configuring your compute node.

Configuring the compute node

Configure your compute node by adding the following to /opt/stack/devstack/local.conf on that node:

[[local|localrc]]
FLAT_INTERFACE=INTERFACE_NAME
FIXED_RANGE=10.4.128.0/20
FIXED_NETWORK_SIZE=4096
FLOATING_RANGE=192.168.42.128/25
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=secretpassword
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
DATABASE_TYPE=mysql
SERVICE_HOST=CONTROLLER_NODE_IP
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
ENABLED_SERVICES=n-cpu,q-agt,n-api-meta,c-vol,placement-client,n-cauth
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL="http://$SERVICE_HOST:6080/vnc_auto.html"
LIBVIRT_TYPE=qemu

Be sure to replace “CONTROLLER_NODE_IP” with the IP address of your controller node and “INTERFACE_NAME” with the network interface that connects your server to the internet.

Creating SSH keys for live migration

The next step is to create an SSH key as root on each of your nodes:

$ ssh-keygen -t rsa -b 4096

Then hit Enter for all of the questions; this will create a key without a password in the default location. Next, copy each node’s host key to the other node using the following:

$ ssh-keyscan -H devstack2 | sudo tee -a /root/.ssh/known_hosts    # on devstack1
$ ssh-keyscan -H devstack1 | sudo tee -a /root/.ssh/known_hosts    # on devstack2

Now copy the public key from each node into the authorized_keys files of the other node’s stack and root users. This can be done by copying the contents of the .ssh/id_rsa.pub file into the .ssh/authorized_keys file for each user, or by running the following commands as root if you have password authentication set up:

$ ssh-copy-id -i /root/.ssh/id_rsa.pub root@devstack2     # on devstack1
$ ssh-copy-id -i /root/.ssh/id_rsa.pub stack@devstack2    # on devstack1
$ ssh-copy-id -i /root/.ssh/id_rsa.pub root@devstack1     # on devstack2
$ ssh-copy-id -i /root/.ssh/id_rsa.pub stack@devstack1    # on devstack2

This is necessary for live migration of instances to work properly. Now run the `./stack.sh` script as the stack user. It will take a while to complete on the compute node as well, but should be a bit quicker than the controller. After that completes, we need to do two things: configure libvirt to talk to Ceph, and add our compute node to the cluster.

Configuring Libvirt on the compute node

First copy the Ceph configurations from your controller to your compute node:

$ scp -r /etc/ceph devstack2:/etc/    # as root on devstack1

Next copy the libvirt secrets to the compute node:

$ mkdir /etc/libvirt/secrets                                        # as root on devstack2
$ scp /etc/libvirt/secrets/*.xml devstack2:/etc/libvirt/secrets/    # as root on devstack1

Now, on the compute node, install the ceph-common package, which provides basic Ceph client commands, then configure libvirt so it knows how to talk to Ceph:

$ sudo apt install ceph-common
$ cd /etc/libvirt/secrets
# CINDER_CEPH_UUID is the UUID used in the secret XML file's name
$ sudo virsh secret-define --file ${CINDER_CEPH_UUID}.xml
$ sudo virsh secret-set-value --secret ${CINDER_CEPH_UUID} \
    --base64 $(sudo ceph -c /etc/ceph/ceph.conf \
    auth get-key client.cinder)

Next, configure Cinder to talk to Ceph by copying the [ceph] section of your controller node’s /etc/cinder/cinder.conf into the same file on your compute node. Then configure Nova to talk to Ceph by copying the [libvirt] section from your controller node’s /etc/nova/nova.conf over the existing [libvirt] section on your compute node.
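As a rough illustration only (the exact option values, pool names and UUID come from the configuration files Devstack generated on your controller, so treat these as placeholders rather than values to copy verbatim), the two sections typically look something like:

```
# [ceph] backend section on the compute node (from the controller's cinder config):
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = <CINDER_CEPH_UUID>

# [libvirt] section on the compute node (from the controller's nova config):
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <CINDER_CEPH_UUID>
```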

Then restart the nova and cinder services on your compute node with the following:

$ sudo systemctl restart devstack@n-\*
$ sudo systemctl restart devstack@c-\*

Adding the compute node to the cluster

After your compute node is running, run the following on your controller to add the compute node to the cluster:

$ devstack/tools/discover_hosts.sh

Now you have Devstack running with Manila and Ceph.

Testing Manila

The easiest way to test Manila is to create a share. Start by sourcing your cloud credentials:

$ . ~/devstack/openrc

And get a list of networks available:

$ neutron net-list

Next, create a Manila share on the network:

$ manila share-network-create --neutron-net-id <PRIVATE_NET_ID> --neutron-subnet-id <PRIVATE_SUBNET_ID> --name manila-share-network
$ manila share-network-list
$ manila create --name share1 --share-network <SHARE_NET_ID> NFS <SIZE_IN_GB>

This will create a volume share using NFS and the new share network. Now check its status using:

$ manila list

If the share’s status reaches “available,” it is ready to be used. If it enters the “error” state, it has failed, and it is possible your Devstack configuration is broken.
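To actually use an available share, you first need to grant a client access to it. A minimal sketch (the client IP and mount point below are examples; adjust them to your environment):

```
$ manila access-allow share1 ip 10.254.0.4
$ manila show share1    # note the export location, e.g. <host>:/share-path
# then, from the client:
$ sudo mount -t nfs <EXPORT_LOCATION> /mnt
```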

Note: These instructions were tested with Devstack commit 0d9c896cddbb3660cad342d44770af1ac2ec1365.

Superuser is always interested in community content; get in touch: editorATopenstack.org

Cover Photo // CC BY NC