---
title: CephFS distributed filesystem
excerpt: 'Learn how to create, manage, and mount a CephFS file system on OVHcloud using the API.'
updated: 2025-09-04
---

## Objective

This guide provides detailed instructions on how to create, manage, and mount a CephFS file system for your OVHcloud services. It covers the full setup process using the OVHcloud API, ensuring you can efficiently integrate CephFS into your cloud environment.

## Requirements

- A [Cloud Disk Array](/links/storage/cloud-disk-array) solution
- Access to the [OVHcloud Control Panel](/links/manager) or to the [OVHcloud API](/links/api)

### What is CephFS?

CephFS is a distributed POSIX-compliant file system built on top of Ceph. To use CephFS, you need a client that supports it—modern Linux distributions include the CephFS driver by default.

You can enable and manage CephFS on your Cloud Disk Array (CDA) via the OVHcloud API. Once enabled, CephFS functions like a private, dedicated file system for your use. You can use both RBD and CephFS simultaneously, but note that they share the same underlying hardware.
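
Since RBD and CephFS share the same hardware, their pools also draw from the same raw capacity. Once you have direct access to your cluster (see the instructions below), you can check this with a single command. A minimal example, assuming a cluster user referred to as `[USERID]`, as in the rest of this guide:

```bash
# Show the cluster's raw capacity and how much of it each pool (RBD or CephFS) uses
sudo ceph --id [USERID] df
```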

## Instructions

### Enabling CephFS

> [!primary]
>
> Enabling and managing CephFS is only possible through the OVHcloud API.
>
> If you are not familiar with the OVHcloud API, see our [First Steps with the OVHcloud API guide](/pages/manage_and_operate/api/first-steps).
>

Currently, only a single file system can be enabled, and it must be named `fs-default`.

The first step is to list your existing CephFS instances. Here, `serviceName` corresponds to the fsid of your cluster:

> [!api]
>
> @api {v1} /dedicated/ceph GET /dedicated/ceph/{serviceName}/cephfs
>

![api request 01](images/api_request_01.png)

By default, this request returns an empty list. To create your first file system, you need to enable it:

> [!api]
>
> @api {v1} /dedicated/ceph POST /dedicated/ceph/{serviceName}/cephfs/{fsName}/enable
>

![api request 02](images/api_request_02.png)

Your CephFS should be available within a few minutes. You can verify its status directly on your cluster by running:

> [!primary]
>
> To access your Ceph cluster directly, please refer to [this guide](/pages/storage_and_backup/block_storage/cloud_disk_array/ceph_access_cluster)
>

```bash
sudo ceph --id [USERID] fs ls
```

The result should look similar to:

```bash
name: fs-default, metadata pool: cephfs.fs-default.meta, data pools: [cephfs.fs-default.data ]
```

If you want to retrieve more details about your CephFS, run:

```bash
sudo ceph --id [USERID] fs get fs-default
```

The result should look similar to:

```bash
Filesystem 'fs-default' (1)
fs_name fs-default
epoch 16
flags 33 allow_snaps allow_multimds_snaps allow_standby_replay
created 2025-08-19T10:53:19.756188+0000
modified 2025-08-19T11:01:14.983793+0000
tableserver 0
root 0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
required_client_features {}
last_failure 0
last_failure_osd_epoch 926
compat compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds 1
in 0
up {}
failed 0
damaged
stopped
data_pools [5]
metadata_pool 6
inline_data disabled
balancer
standby_count_wanted 2
```
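
In addition to `fs get`, you can display a concise overview of the MDS daemons serving the file system and the usage of its pools. A minimal check, assuming the same `[USERID]` client as above:

```bash
# Summarise MDS ranks, connected clients and data/metadata pool usage
sudo ceph --id [USERID] fs status fs-default
```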

### Disabling and removing CephFS

When your file system is no longer needed, you can remove it in two steps:

- **Disable your file system** - This blocks access to CephFS, but your data remains intact. If needed, you can re-enable it later.

> [!api]
>
> @api {v1} /dedicated/ceph POST /dedicated/ceph/{serviceName}/cephfs/{fsName}/disable
>

![api request 03](images/api_request_03.png)

- **Purge file system data** – This permanently deletes all data, and can only be done on a disabled file system.

> [!api]
>
> @api {v1} /dedicated/ceph DELETE /dedicated/ceph/{serviceName}/cephfs/{fsName}
>

![api request 04](images/api_request_04.png)

### CephFS access management

To manage access to CephFS, the same IP ACL rules used for your CDA apply. However, to write to CephFS, you must create a dedicated user. You can follow [this guide](/pages/storage_and_backup/block_storage/cloud_disk_array/ceph_create_a_user).

Next, grant this user read and write access to the CephFS data and metadata pools:

- cephfs.fs-default.data
- cephfs.fs-default.meta

For details, see [the guide](/pages/storage_and_backup/block_storage/cloud_disk_array/ceph_change_user_rights).

> [!primary]
>
> Required permissions: read and write on both the data and metadata pools.
>
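
If you want to double-check the exact pool names on your cluster, you can list them from a host that already has direct cluster access. A minimal check, assuming the same `[USERID]` client as in the previous sections:

```bash
# List all pools; the CephFS pools are cephfs.fs-default.data and cephfs.fs-default.meta
sudo ceph --id [USERID] osd pool ls
```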

### Mounting CephFS on your host

Install the **ceph-common** package, which provides the **/sbin/mount.ceph** binary. The package name may vary depending on your Linux distribution.

In the example below, we use a Debian-based system:

```bash
sudo apt install --no-install-recommends ceph-common
```
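
On RPM-based distributions (for example AlmaLinux, Rocky Linux, or Fedora), the package is also called **ceph-common**. A typical installation command, assuming the package is available in your configured repositories (it may require the EPEL or Ceph repositories):

```bash
sudo dnf install ceph-common
```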

Next, configure your client to connect to the CDA cluster by editing (or creating) the **/etc/ceph/ceph.conf** file.

1. Create the Ceph configuration directory:

```bash
sudo mkdir -p /etc/ceph
```

2. Create and edit the ceph.conf file:

```bash
sudo nano /etc/ceph/ceph.conf
```

3. Add the [global] section with the public IP addresses of your monitors. You can find these IPs on the main page of your Cloud Disk Array in the OVHcloud Control Panel.

```bash
[global]
# The fsid of your Cloud Disk Array (its service name)
fsid = aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee

# Force use of the secure protocol
ms_client_mode = secure

# Force the messenger v2 protocol
ms_bind_msgr2 = true

# Use the PUBLIC IPs provided for your Cloud Disk Array
mon_host = A.B.X.Y:6789,A.B.X.Y:6789,A.B.X.Y:6789
```

The FSID corresponds to the service name of your CDA. The monitor host IPs can be retrieved using the following API call:

> [!api]
>
> @api {v1} /dedicated/ceph GET /dedicated/ceph/{serviceName}
>

![api request 05](images/api_request_05.png)

4. Save and close the file.
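
Before continuing, you can optionally check that the monitor endpoints listed in `mon_host` are reachable from this client. Keep in mind that the client's public IP must be allowed by your Cloud Disk Array IP access list. A quick connectivity test, assuming `nc` (netcat) is installed:

```bash
# Test TCP connectivity to one of the monitor endpoints defined in ceph.conf
nc -vz A.B.X.Y 6789
```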

You will also need a second file containing the key for the user that connects to the cluster. Fetch the user key with the following API call:

> [!api]
>
> @api {v1} /dedicated/ceph GET /dedicated/ceph/{serviceName}/user/{userName}
>

![api request 06](images/api_request_06.png)

Then, create a secret file for this user:

1. Create a file called **/etc/ceph/[USERID].secret**:

```bash
sudo nano /etc/ceph/[USERID].secret
```

2. Add the user key to the file in the correct format:

```bash
YOUR_SECRET_KEY_FOR_USER
```

3. Set strict permissions on the secret file to ensure security:

```bash
sudo chmod 600 /etc/ceph/[USERID].secret
```

Finally, you can mount your file system:

```bash
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph -o name=[USERID],secretfile=/etc/ceph/[USERID].secret :/ /mnt/cephfs/
```
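
You can confirm that the mount succeeded with `df -h /mnt/cephfs`. If you want the file system to be mounted automatically at boot, you can add an entry to **/etc/fstab**. A minimal sketch, assuming the same `[USERID]` and secret file as above (the mount helper reads the monitor addresses from **/etc/ceph/ceph.conf**):

```bash
# /etc/fstab entry for the CephFS root; _netdev delays mounting until the network is up
:/   /mnt/cephfs   ceph   name=[USERID],secretfile=/etc/ceph/[USERID].secret,_netdev   0   0
```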

## Go further