Fabian Arrotin checks out a replacement for Gluster.


As announced already, I have been (among other things) playing with OpenStack/RDO and have deployed some small OpenStack setups in the CentOS infra. Then I had to look at our existing DevCloud setup. That setup was based on OpenNebula running on CentOS 6, and used Gluster as the backend for the VM store.

That’s when I found out that Gluster isn’t a valid option anymore: Gluster was deprecated and then removed from Cinder. Sad, as one advantage of Gluster was that you could (in fact, you had to!) use libgfapi, so that the qemu-kvm process could talk directly to Gluster through libgfapi instead of accessing VM images over locally mounted Gluster volumes (please, don’t even try to do that through FUSE).

So what could replace Gluster on the OpenStack side? I still have some dedicated nodes for storage backends, but not enough to even think about Ceph. So it seems my only option was to consider NFS. (Technically speaking, the driver was removed from Cinder, but I could still have tried to use it just for Glance and Nova, as I have no need for Cinder in the DevCloud project; clearly, though, that would be dangerous for potential upgrades.)

It’s not that I’m a fan of storing qcow2 images on top of NFS, but it seemed to be my only option, and at least the most transparent/least intrusive path, should I need to migrate to something else later. So let’s test this, using NFS over InfiniBand (through IPoIB) and thus at “good speed” (the InfiniBand hardware currently in place for Gluster will be reused).

It’s easy to mount the NFS-exported directory under /var/lib/glance/images for Glance, and then, on every compute node, another NFS export under /var/lib/nova/instances/.
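As a minimal sketch, the mounts could look like this in /etc/fstab; the server name storage01, the export paths, and the mount options are hypothetical examples, not from the original setup:

```
# On the Glance node (storage01 and the export paths are made-up examples)
storage01:/exports/glance  /var/lib/glance/images   nfs  vers=3,noatime  0 0

# On every compute node
storage01:/exports/nova    /var/lib/nova/instances  nfs  vers=3,noatime  0 0
```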

That’s where you have to check what gets blocked by SELinux: the current policy shipped with openstack-selinux-0.8.6-0 (from Ocata) doesn’t seem to allow it.
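To see exactly what SELinux is blocking, you can watch the AVC denials and let audit2allow suggest allow rules; these are standard tools from the audit and policycoreutils packages (the module name passed to -m here is just an example):

```shell
# Show recent AVC denials (requires auditd to be running)
ausearch -m avc -ts recent

# Turn the collected denials into a suggested .te module
ausearch -m avc -ts recent | audit2allow -m os-local-nfs
```

The rules audit2allow prints should be reviewed before loading anything: it suggests the broadest allow rules matching the denials it saw.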

I initially tested the services one by one and decided to open a pull request for this, but in the meantime I built a custom SELinux policy that seems to do the job in my RDO playground.

Here is the .te file that you can compile into a usable .pp policy file:

module os-local-nfs 0.2;

require {
    type glance_api_t;
    type virtlogd_t;
    type nfs_t;
    class file { append getattr open read write unlink create };
    class dir { search getattr write remove_name create add_name };
}

#============= glance_api_t ==============
allow glance_api_t nfs_t:dir { search getattr write remove_name create add_name };
allow glance_api_t nfs_t:file { write getattr unlink open create read };

#============= virtlogd_t ==============
allow virtlogd_t nfs_t:dir search;
allow virtlogd_t nfs_t:file { append getattr open };
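Compiling and loading that module is the standard three-step dance with the SELinux toolchain (run as root, with the .te saved as os-local-nfs.te):

```shell
# Compile the .te source into a binary module
checkmodule -M -m -o os-local-nfs.mod os-local-nfs.te

# Package the module into an installable .pp file
semodule_package -o os-local-nfs.pp -m os-local-nfs.mod

# Install it, then verify it is loaded
semodule -i os-local-nfs.pp
semodule -l | grep os-local-nfs
```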

Of course, you also need to enable some booleans. Some are already enabled by openstack-selinux (you can see which ones by looking at /etc/selinux/targeted/active/booleans.local), but you now also need virt_use_nfs=1.
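Setting that boolean persistently is a one-liner (run as root; -P writes the change to the policy so it survives reboots):

```shell
# Allow libvirt/qemu processes to use NFS-backed files, persistently
setsebool -P virt_use_nfs 1

# Check the result
getsebool virt_use_nfs
```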

Now that it works, I can replay all of that (driven by Puppet) on the DevCloud nodes.

This tutorial first appeared on Fabian Arrotin’s blog.


Superuser is always interested in community content — get in touch: editorATopenstack.org

Cover Photo // CC BY NC