Connect file storage to a Managed Kubernetes cluster in a single pool

If you need to increase disk space with file storage, we recommend creating the storage in the same pool as the Managed Kubernetes cluster. To connect file storage that is in the same pool as the cluster:

  1. Create file storage.
  2. Mount the file storage to the Managed Kubernetes cluster.

If you plan to use file storage for backups, we recommend creating the storage and the Managed Kubernetes cluster in pools from different availability zones or regions to improve fault tolerance. For details, see the instructions Connect file storage to a Managed Kubernetes cluster in another pool.

Create file storage

  1. In the control panel, go to Cloud platform → File storage.

  2. Click Create storage.

  3. Enter a storage name or keep the automatically generated one.

  4. Select the region and pool segment where the Managed Kubernetes cluster is located.

  5. Select the cloud private subnet where the storage will be located. We recommend choosing the subnet where the nodes of the Managed Kubernetes cluster are located so that the network connectivity between the nodes and the storage is automatically configured. Once the storage is created, the subnet cannot be changed.

  6. Enter a private IP address for the storage or keep the first available address from the subnet, which is assigned by default. Once the storage is created, the IP address cannot be changed.

  7. Select file storage type:

    • HDD Basic;
    • SSD Universal;
    • SSD Fast.

    File storage types differ in bandwidth and in the number of read and write operations; for details, see the File storage limits table.

    Once created, the storage type cannot be changed.

  8. Specify the storage size: from 50 GB to 50 TB. Once created, you can expand file storage but you can't reduce it.

  9. Select a protocol:

    • NFSv4 — for connecting storage to servers running Linux and other Unix systems;
    • CIFS SMBv3 — for connecting the storage to Windows servers.

    Once the storage is created, the protocol cannot be changed.

  10. Configure the file storage access rules:

    • available to all — the storage will be available to any IP address of the private subnet in which it is created;
    • access restricted — the storage will be available only to specific IP addresses or private subnets. If you create the file storage without rules, access is closed to all IP addresses. To open access, click Add rule, enter an IP address or the CIDR of a private subnet, select the access level (NFSv4 protocol only), and enter a comment. To add more rules, click Add rule again.

    After the storage is created, you can change access by configuring new access rules.

  11. Review the file storage price.

  12. Click Create.
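
Optionally, before mounting the storage in Kubernetes, you can check from one of the cluster nodes that the share is reachable. A minimal sketch, assuming NFSv4 storage, a node with the NFS client utilities installed (for example, the nfs-common package), and the <filestorage_ip_address> and <mountpoint_uuid> values from the control panel:

    # Temporarily mount the share to verify connectivity (run on a cluster node)
    sudo mkdir -p /mnt/test
    sudo mount -t nfs4 <filestorage_ip_address>:/shares/share-<mountpoint_uuid> /mnt/test

    # If the mount succeeds, the storage is reachable; unmount it afterwards
    sudo umount /mnt/test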

Mount the file storage to the Managed Kubernetes cluster

The mount process depends on the file storage protocol: NFSv4 or CIFS SMBv3. The steps below cover NFSv4; see the note after this list for CIFS SMBv3.

  1. Create PersistentVolume.
  2. Create PersistentVolumeClaim.
  3. Add file storage to the container.
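
The manifests in the following steps use the Kubernetes in-tree nfs volume plugin, which applies to NFSv4 storage. Kubernetes has no in-tree plugin for CIFS SMBv3, so mounting SMB storage requires a CSI driver installed in the cluster, for example csi-driver-smb. A minimal sketch of a PersistentVolume for that driver, assuming the driver is installed; the share name and the Secret smbcreds holding the share credentials are illustrative placeholders:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-smb
    spec:
      capacity:
        storage: <storage_size>
      accessModes:
        - ReadWriteMany
      csi:
        driver: smb.csi.k8s.io
        volumeHandle: pv-smb  # any cluster-unique ID
        volumeAttributes:
          source: //<filestorage_ip_address>/<share_name>
        nodeStageSecretRef:
          name: smbcreds
          namespace: default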

Create PersistentVolume

  1. Connect to a Managed Kubernetes cluster.

  2. Create a YAML file with a manifest for the PersistentVolume object:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-name
    spec:
      storageClassName: storageclass-name
      capacity:
        storage: <storage_size>
      accessModes:
        - ReadWriteMany
      nfs:
        path: /shares/share-<mountpoint_uuid>
        server: <filestorage_ip_address>

    Specify:

    • <storage_size> — the PersistentVolume size (equal to the file storage size), for example 100Gi. The limit is from 50 GB to 50 TB;
    • <mountpoint_uuid> — the mount point ID. You can find it in the control panel under Cloud platform → File storage → storage page → Connection block → GNU/Linux tab;
    • <filestorage_ip_address> — the file storage IP address. You can find it in the control panel under Cloud platform → File storage → storage page → Settings tab → IP field.
  3. Apply the manifest:

    kubectl apply -f <persistent_volume.yaml>

    Specify <persistent_volume.yaml> — the name of the YAML file containing the PersistentVolume manifest.

  4. Make sure that a PersistentVolume object is created:

    kubectl get pv
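
    Until a claim binds it, the new PersistentVolume has the STATUS value Available. Illustrative output (the names and values depend on your manifest):

    NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS        AGE
    pv-name   100Gi      RWX            Retain           Available           storageclass-name   10s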

Create a PersistentVolumeClaim

  1. Create a YAML file with a manifest for the PersistentVolumeClaim object:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-name
    spec:
      storageClassName: storageclass-name
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: <storage_size>

    Specify <storage_size> — the PersistentVolume size (equal to the file storage size), for example 100Gi. The limit is from 50 GB to 50 TB.

  2. Apply the manifest:

    kubectl apply -f <persistent_volume_claim.yaml>

    Specify <persistent_volume_claim.yaml> — the name of the YAML file containing the PersistentVolumeClaim manifest.

  3. Ensure that a PersistentVolumeClaim object is created:

    kubectl get pvc
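
    The claim binds to the PersistentVolume created earlier when the storageClassName and accessModes match and the requested size does not exceed the PV capacity; once bound, the STATUS column shows Bound together with the volume name. Illustrative output:

    NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS        AGE
    pvc-name   Bound    pv-name   100Gi      RWX            storageclass-name   5s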

Add file storage to the container

  1. Create a YAML file with a manifest for the Deployment object:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: filestorage-deployment-name
      labels:
        project: filestorage-deployment-name
    spec:
      replicas: 2
      selector:
        matchLabels:
          project: filestorage-project-name
      template:
        metadata:
          labels:
            project: filestorage-project-name
        spec:
          volumes:
            - name: volume-name
              persistentVolumeClaim:
                claimName: pvc-name
          containers:
            - name: container-nginx
              image: nginx:stable-alpine
              ports:
                - containerPort: 80
                  name: "http-server"
              volumeMounts:
                - name: volume-name
                  mountPath: <mount_path>

    Specify <mount_path> — the path to the directory inside the container where the file storage will be mounted.

  2. Apply the manifest:

    kubectl apply -f <deployment.yaml>

    Specify <deployment.yaml> — the name of the YAML file containing the Deployment manifest.
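
  3. Optionally, verify the mount from inside a pod. A minimal sketch using kubectl (the Deployment and label names are the ones from the manifest above; df and a test write confirm that the storage is mounted at <mount_path>):

    kubectl get pods -l project=filestorage-project-name
    kubectl exec deploy/filestorage-deployment-name -- df -h <mount_path>
    kubectl exec deploy/filestorage-deployment-name -- touch <mount_path>/test-file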