Taikun OCP Guide

CephFS driver

The CephFS driver enables manila to export shared filesystems backed
by Ceph’s File System (CephFS) using either the Ceph network protocol or
NFS protocol. Guests require a native Ceph client or an NFS client in
order to mount the filesystem.

When guests access CephFS using the native Ceph protocol, access is
controlled via Ceph’s cephx authentication system. If a user requests
share access for an ID, Ceph creates a corresponding Ceph auth ID and a
secret key if they do not already exist, and authorizes the ID to access
the share. The client can then mount the share using the ID and the
secret key. To learn more about configuring Ceph clients to access the
shares created using this driver, see the Ceph documentation.

When guests access CephFS through NFS, an NFS-Ganesha server
mediates access to CephFS. The driver enables access control by managing
the NFS-Ganesha server’s exports.

Supported Operations

The following operations are supported with the CephFS backend:

  • Create, delete, update and list share
  • Allow/deny access to share
    • Only cephx access type is supported for CephFS native
      protocol.
    • Only ip access type is supported for NFS protocol.
    • read-only and read-write access levels are
      supported.
  • Extend/shrink share
  • Create, delete, update and list snapshot
  • Create, delete, update and list share groups
  • Delete and list share group snapshots
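As a rough sketch, the operations above map onto manila CLI calls such as the following. The share name, type, and sizes here are illustrative and assume a CephFS share type has already been configured as described later in this guide:

```shell
# Create a 1 GiB share, grow it to 2 GiB, then shrink it back
manila create --share-type cephfsnativetype --name demo_share cephfs 1
manila extend demo_share 2
manila shrink demo_share 1

# Snapshot operations
manila snapshot-create demo_share --name demo_snap
manila snapshot-list
```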

Important

Share group snapshot creation is no longer supported in mainline
CephFS. This feature was removed from manila in the Wallaby release.

Prerequisites

Important

A manila share backed by CephFS is only as good as the underlying
filesystem. Take care when configuring your Ceph cluster, and consult
the latest guidance on the use of CephFS in the Ceph
documentation
.

Ceph testing matrix

As Ceph and manila continue to grow, it is essential to test and
support combinations of releases supported by both projects. However,
there is little community bandwidth to cover all of them. For
simplicity’s sake, we focus on testing (and therefore supporting) the
currently active Ceph releases. Check out the list of active Ceph
releases here.

Below is the current state of testing for Ceph releases with this
project. Adjacent components such as devstack-plugin-ceph
and tripleo are included in the table below. Contributors to those
projects determine which versions of Ceph are tested and supported
with manila by those components; their state is presented here for
ease of access.

Important

From the Victoria cycle, the Manila CephFS driver is not tested or
supported with Ceph clusters older than Nautilus. Future releases of
Manila may be incompatible with Nautilus too! We suggest always running
the latest version of Manila with the latest release of Ceph.

OpenStack release  manila    devstack-plugin-ceph  tripleo
Queens             Luminous  Luminous              Luminous
Rocky              Luminous  Luminous              Luminous
Stein              Nautilus  Luminous, Nautilus    Nautilus
Train              Nautilus  Luminous, Nautilus    Nautilus
Ussuri             Nautilus  Luminous, Nautilus    Nautilus
Victoria           Nautilus  Nautilus, Octopus     Nautilus
Wallaby            Octopus   Nautilus, Octopus     Pacific

Additionally, it is expected that the version of the Ceph client
available to manila is aligned with the Ceph server version. Mixing
server and client versions is strongly discouraged.

If you are using the NFS-Ganesha driver, it is also good practice to
use an NFS-Ganesha version that aligns with your chosen Ceph version.

Important

It’s recommended to install the latest stable version of the Ceph
Nautilus, Octopus, or Pacific release. See Ceph releases.
Prior to upgrading to Wallaby, please ensure that you’re running at
least the following versions of Ceph:

Release Minimum version
Nautilus 14.2.20
Octopus 15.2.11
Pacific 16.2.1

Common Prerequisites

  • A Ceph cluster with a filesystem configured (See Create ceph
    filesystem
    on how to create a filesystem.)
  • python3-rados and python3-ceph-argparse
    packages installed in the servers running the manila-share service.
  • Network connectivity between your Ceph cluster’s public network and
    the servers running the manila-share service.

For CephFS native shares

  • Ceph client installed in the guest
  • Network connectivity between your Ceph cluster’s public network and
    guests. See security_cephfs_native.

For CephFS NFS shares

  • 3.0 or later versions of NFS-Ganesha.
  • NFS client installed in the guest.
  • Network connectivity between your Ceph cluster’s public network and
    NFS-Ganesha server.
  • Network connectivity between your NFS-Ganesha server and the manila
    guest.

Authorizing the driver to communicate
with Ceph

The capabilities required for the Ceph manila identity have changed
in the Wallaby release. The configured Ceph manila identity no longer
needs any MDS capability, and the MON and OSD capabilities can be
reduced as well. However, new MGR capabilities are now required; if
they are not granted, the driver cannot communicate with the Ceph
cluster.

Important

The driver in the Wallaby (or later) release requires a Ceph identity
with a different set of Ceph capabilities when compared to the driver in
a pre-Wallaby release.

When upgrading to Wallaby you’ll also have to update the capabilities
of the Ceph identity used by the driver (refer to the Ceph
user capabilities docs). For example, for a native driver that already
uses the client.manila Ceph identity, issue the following command:

ceph auth caps client.manila mon 'allow r' mgr 'allow rw'

For the CephFS Native driver, the auth ID should be set as
follows:

ceph auth get-or-create client.manila -o manila.keyring \
  mgr 'allow rw' \
  mon 'allow r'

For the CephFS NFS driver, we use a specific pool to store exports
(configurable with the config option “ganesha_rados_store_pool_name”).
We also need to specify osd caps for it. So, the auth ID should be set
as follows:

ceph auth get-or-create client.manila -o manila.keyring \
  osd 'allow rw pool=<ganesha_rados_store_pool_name>' \
  mgr 'allow rw' \
  mon 'allow r'

manila.keyring, along with your ceph.conf
file, will then need to be placed on the server running the manila-share service.

Important

To communicate with the Ceph backend, a CephFS driver instance
(represented as a backend driver section in manila.conf) requires its
own Ceph auth ID that is not used by other CephFS driver instances
running in the same controller node.

In the server running the manila-share service, you can place the
ceph.conf and manila.keyring files in the
/etc/ceph directory. Set the same owner for the manila-share process and the
manila.keyring file. Add the following section to the
ceph.conf file.

[client.manila]
client mount uid = 0
client mount gid = 0
log file = /opt/stack/logs/ceph-client.manila.log
admin socket = /opt/stack/status/stack/ceph-$name.$pid.asok
keyring = /etc/ceph/manila.keyring

It is advisable to modify the Ceph client’s admin socket file and log
file locations so that they are co-located with the manila service’s
PID files and log files, respectively.
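Placing the files as described above might look like the following sketch, assuming the manila-share service runs as a manila user (the user and group names are deployment-specific):

```shell
# Copy the cluster config and the driver's keyring into place,
# giving the keyring the same owner as the manila-share process
cp ceph.conf /etc/ceph/ceph.conf
install -o manila -g manila -m 0600 manila.keyring /etc/ceph/manila.keyring
```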

Enabling snapshot
support in Ceph backend

CephFS Snapshots were experimental prior to the Nautilus release of
Ceph. There may be some limitations
on snapshots
based on the Ceph version you use.

From Ceph Nautilus, all new filesystems created on Ceph have
snapshots enabled by default. If you’ve upgraded your ceph cluster and
want to enable snapshots on a pre-existing filesystem, you can do
so:

ceph fs set {fs_name} allow_new_snaps true
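To check whether snapshots are enabled on an existing filesystem, you can inspect its settings; for example (the exact field names in the output vary across Ceph releases):

```shell
# Dump the filesystem's settings and look for the snapshot flag
ceph fs get cephfs
```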

Configuring CephFS
backend in manila.conf

Configure
CephFS native share backend in manila.conf

Add CephFS to enabled_share_protocols (enforced at
manila api layer). In this example we leave NFS and CIFS enabled,
although you can remove these if you will only use a CephFS backend:

enabled_share_protocols = NFS,CIFS,CEPHFS

Create a section like this to define a CephFS native backend:

[cephfsnative1]
driver_handles_share_servers = False
share_backend_name = CEPHFSNATIVE1
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_protocol_helper_type = CEPHFS
cephfs_auth_id = manila
cephfs_cluster_name = ceph
cephfs_filesystem_name = cephfs

Set driver_handles_share_servers to False
as the driver does not manage the lifecycle of
share servers. For the driver backend to expose shares via
the native Ceph protocol, set cephfs_protocol_helper_type
to CEPHFS.

Then edit enabled_share_backends to point to the
driver’s backend section using the section name. In this example we are
also including another backend (“generic1”); include whatever other
backends you have configured:

enabled_share_backends = generic1, cephfsnative1

Finally, edit cephfs_filesystem_name with the name of
the Ceph filesystem (also referred to as a CephFS volume) you want to
use. If you have more than one Ceph filesystem in the cluster, you must
set this option.

Configure
CephFS NFS share backend in manila.conf

Note

Prior to configuring the Manila CephFS driver to use NFS, you must
have installed and configured NFS-Ganesha. For guidance on
configuration, refer to the NFS-Ganesha
setup guide
.

Add NFS to enabled_share_protocols if it’s not already
there:

enabled_share_protocols = NFS,CIFS,CEPHFS

Create a section to define a CephFS NFS share backend:

[cephfsnfs1]
driver_handles_share_servers = False
share_backend_name = CEPHFSNFS1
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_protocol_helper_type = NFS
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_auth_id = manila
cephfs_cluster_name = ceph
cephfs_filesystem_name = cephfs
cephfs_ganesha_server_is_remote = False
cephfs_ganesha_server_ip = 172.24.4.3
ganesha_rados_store_enable = True
ganesha_rados_store_pool_name = cephfs_data

The following options are set in the driver backend section
above:

  • driver_handles_share_servers to False as
    the driver does not manage the lifecycle of
    share servers.
  • cephfs_protocol_helper_type to NFS to
    allow NFS protocol access to the CephFS backed shares.
  • cephfs_auth_id to the Ceph auth ID created in authorize_ceph_driver.
  • cephfs_ganesha_server_is_remote to False if the
    NFS-Ganesha server is co-located with the manila-share service. If the NFS-Ganesha server is
    remote, then set the option to True, and set other options
    such as cephfs_ganesha_server_ip,
    cephfs_ganesha_server_username, and
    cephfs_ganesha_server_password (or
    cephfs_ganesha_path_to_private_key) to allow the driver to
    manage the NFS-Ganesha export entries over SSH.
  • cephfs_ganesha_server_ip to the ganesha server IP
    address. It is recommended to set this option even if the ganesha server
    is co-located with the manila-share service.
  • ganesha_rados_store_enable to True or False. Setting
    this option to True allows NFS Ganesha to store exports and its export
    counter in Ceph RADOS objects. We recommend setting this to True and
    using a RADOS object since it is useful for highly available NFS-Ganesha
    deployments to store their configuration efficiently in an already
    available distributed storage system.
  • ganesha_rados_store_pool_name to the name of the RADOS
    pool you have created for use with NFS-Ganesha. Set this option only if
    also setting the ganesha_rados_store_enable option to True.
    If you want to use one of the backend CephFS’s RADOS pools, then using
    CephFS’s data pool is preferred over using its metadata pool.
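If you opt for a dedicated RADOS pool rather than reusing one of CephFS’s pools, creating it might look like the following sketch. The pool name and placement-group count are illustrative and should match your own deployment’s sizing:

```shell
# Create a small pool to hold NFS-Ganesha export objects
ceph osd pool create ganesha_exports 8
ceph osd pool application enable ganesha_exports nfs
```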

Edit enabled_share_backends to point to the driver’s
backend section using the section name, cephfsnfs1:

enabled_share_backends = generic1, cephfsnfs1

Finally, edit cephfs_filesystem_name with the name of
the Ceph filesystem (also referred to as a CephFS volume) you want to
use. If you have more than one Ceph filesystem in the cluster, you must
set this option.

Space considerations

The CephFS driver reports total and free capacity available across
the Ceph cluster to manila to allow provisioning. All CephFS shares are
thinly provisioned, i.e., empty shares do not consume any significant
space on the cluster. The CephFS driver does not allow controlling
oversubscription via manila. So, as long as there is free space,
provisioning will continue, and eventually this may cause your Ceph
cluster to be over provisioned and you may run out of space if shares
are being filled to capacity. It is advised that you use Ceph’s
monitoring tools to monitor space usage and add more storage when
required in order to honor space requirements for provisioned manila
shares. You may use the driver configuration option
reserved_share_percentage to prevent manila from filling up
your Ceph cluster, and allow existing shares to grow.
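For example, to keep 5% of the cluster’s reported capacity in reserve (the value is illustrative and should reflect your own headroom policy), you could add the following to the driver’s backend section in manila.conf:

```
reserved_share_percentage = 5
```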

Creating shares

Create CephFS native share

The default share type may have
driver_handles_share_servers set to True. Configure a share
type suitable for CephFS native share:

manila type-create cephfsnativetype false
manila type-key cephfsnativetype set vendor_name=Ceph storage_protocol=CEPHFS

Then create a share:

manila create --share-type cephfsnativetype --name cephnativeshare1 cephfs 1

Note the export location of the share:

manila share-export-location-list cephnativeshare1

The export location of the share contains the Ceph monitor (mon)
addresses and ports, and the path to be mounted. It is of the form,
{mon ip addr:port}[,{mon ip addr:port}]:{path to be mounted}
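For instance, an export location for a share backed by a three-monitor cluster might look like the following (the addresses and path are illustrative):

```
192.168.1.7:6789,192.168.1.8:6789,192.168.1.9:6789:/volumes/_nogroup/4c55ad20-9c55-4a5e-9233-8ac64566b98c
```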

Create CephFS NFS share

Configure a share type suitable for CephFS NFS share:

manila type-create cephfsnfstype false
manila type-key cephfsnfstype set vendor_name=Ceph storage_protocol=NFS

Then create a share:

manila create --share-type cephfsnfstype --name cephnfsshare1 nfs 1

Note the export location of the share:

manila share-export-location-list cephnfsshare1

The export location of the share contains the IP address of the
NFS-Ganesha server and the path to be mounted. It is of the form,
{NFS-Ganesha server address}:{path to be mounted}
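For instance, an export location might look like the following (the address and path are illustrative):

```
172.24.4.3:/volumes/_nogroup/6732900b-32c1-4816-a529-4d6d3f15811e
```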

Allowing access to shares

Allow access to CephFS
native share

Allow Ceph auth ID alice access to the share using
cephx access type.

manila access-allow cephnativeshare1 cephx alice

Note the access status, and the access/secret key of
alice.

manila access-list cephnativeshare1

Allow access to CephFS NFS
share

Allow a guest access to the share using ip access
type.

manila access-allow cephnfsshare1 ip 172.24.4.225

Mounting CephFS shares

Mounting CephFS
native share using FUSE client

Using the secret key of the authorized ID alice create a
keyring file, alice.keyring like:

[client.alice]
        key = AQA8+ANW/4ZWNRAAOtWJMFPEihBA1unFImJczA==

Using the mon IP addresses from the share’s export location, create a
configuration file, ceph.conf like:

[client]
        client quota = true
        mon host = 192.168.1.7:6789, 192.168.1.8:6789, 192.168.1.9:6789

Finally, mount the filesystem, substituting the filenames of the
keyring and configuration files you just created, and substituting the
path to be mounted from the share’s export location:

sudo ceph-fuse ~/mnt \
--id=alice \
--conf=./ceph.conf \
--keyring=./alice.keyring \
--client-mountpoint=/volumes/_nogroup/4c55ad20-9c55-4a5e-9233-8ac64566b98c

Mounting
CephFS native share using Kernel client

If you have the ceph-common package installed in the
client host, you can use the kernel client to mount CephFS shares.

Important

If you choose to use the kernel client rather than the FUSE client,
the share size limits set in manila may not be obeyed with kernel
versions older than 4.17 and Ceph versions older than Mimic. See the
quota limitations documentation to understand CephFS quotas.

The mount command is as follows:

mount -t ceph {mon1 ip addr}:6789,{mon2 ip addr}:6789,{mon3 ip addr}:6789:/ \
    {mount-point} -o name={access-id},secret={access-key}

With our earlier examples, this would be:

mount -t ceph 192.168.1.7:6789,192.168.1.8:6789,192.168.1.9:6789:/volumes/_nogroup/4c55ad20-9c55-4a5e-9233-8ac64566b98c \
    ~/mnt -o name=alice,secret='AQA8+ANW/4ZWNRAAOtWJMFPEihBA1unFImJczA=='

Mount CephFS NFS share
using NFS client

In the guest, mount the share using the NFS client and the share’s
export location:

sudo mount -t nfs 172.24.4.3:/volumes/_nogroup/6732900b-32c1-4816-a529-4d6d3f15811e /mnt/nfs/

Known restrictions

  • A CephFS driver instance, represented as a backend driver section in
    manila.conf, requires a Ceph auth ID unique to the backend Ceph
    Filesystem. Using a non-unique Ceph auth ID will result in the driver
    unintentionally evicting other CephFS clients using the same Ceph auth
    ID to connect to the backend.
  • Snapshots are read-only. A user can read a snapshot’s contents from
    the .snap/{manila-snapshot-id}_{unknown-id} folder within
    the mounted share.

Security

  • Each share’s data is mapped to a distinct Ceph RADOS namespace. A
    guest is restricted to access only that particular RADOS namespace.
    See https://docs.ceph.com/docs/nautilus/cephfs/file-layouts/

  • An additional level of resource isolation can be provided by
    mapping a share’s contents to a separate RADOS pool. This layout would
    be preferred only for cloud deployments with a limited number of shares
    needing strong resource separation. You can do this by setting a share
    type specification, cephfs:data_isolated for the share type
    used by the cephfs driver.

    manila type-key cephfstype set cephfs:data_isolated=True

Security with CephFS native share
backend

As the guests need direct access to Ceph’s public network, CephFS
native share backend is suitable only in private clouds where guests can
be trusted.
