Connect file storage to a Managed Kubernetes cluster in a single pool
If you use file storage to increase disk space, we recommend creating the storage in the same pool as the Managed Kubernetes cluster. If the file storage and the Managed Kubernetes cluster are in the same pool, you only need to mount the storage to connect it.
If you plan to use the file storage to store backups, we recommend creating the storage and the Managed Kubernetes cluster in pools in different availability zones or regions to improve fault tolerance. For more details, see the instructions Connect file storage to a Managed Kubernetes cluster in another pool.
1. Create file storage
Control panel
Terraform
-
In the control panel, from the top menu, click Products and select File storage.
-
Click Create storage.
-
Enter a new storage name or leave the name that is automatically created.
-
Select the region and pool segment where the storage will be created.
If you need to increase disk space with file storage, select a pool segment from the pool that hosts the cloud server or Managed Kubernetes cluster.
If you plan to use storage to store backups, we recommend selecting a pool segment from a different availability zone or region to improve fault tolerance.
-
Fill in the blocks:
-
Check the price of the file storage.
-
Click Create.
Subnetwork
-
Select the private subnet where the storage will be located. The type of subnet depends on what you want to connect the storage to:
- cloud private subnet — the storage will be available only to cloud servers and Managed Kubernetes clusters in the pool you selected in the previous step. To connect the storage, you only need to mount it;
- global router subnet — the storage will be available to dedicated servers, as well as to cloud servers and Managed Kubernetes clusters located in other pools. To connect the storage, you need to configure network connectivity between the server or cluster and the storage through the global router. See examples of network connectivity configuration in the instructions in the Connect file storage section.
Once the storage is created, the subnet cannot be changed.
-
Enter a private IP address for the storage or keep the first available address from the subnet, which is assigned by default. Once the storage is created, the IP address cannot be changed.
Settings
-
Select file storage type:
- HDD Basic
- SSD Universal
- SSD Fast
Once created, the storage type cannot be changed.
-
Specify the storage size: from 50 GB to 50 TB. Once created, you can expand file storage but you can't reduce it.
-
Select a protocol:
- NFSv4 — for connecting storage to servers running Linux and other Unix systems;
- CIFS SMBv3 — for connecting the storage to Windows servers.
Once the storage is created, the protocol cannot be changed.
Access rules
NFSv4
CIFS SMBv3
-
Configure the file storage access rules:
- available to all — the storage will be available to any IP address of the private subnet in which it is created;
- access restricted — the storage will be available only to specific IP addresses or private subnets. If you create the file storage without any rules, access will be restricted for all IP addresses.
-
If you selected the Access restricted option, click Add rule.
-
Enter the IP address or the CIDR of the private subnet and select the access level.
After the storage is created, you can change the access rules by configuring new ones.
-
Configure the file storage access rules:
- available to all — the storage will be available to any IP address of the private subnet in which it is created;
- access restricted — the storage will be available only to specific IP addresses or private subnets. If you create the file storage without any rules, access will be restricted for all IP addresses.
-
If you selected the Access restricted option, click Add rule.
-
Enter the IP address or CIDR of the private subnet.
After the storage is created, you can change the access rules by configuring new ones.
Use the instructions Create file storage in the Terraform documentation.
2. Mount the file storage to the Managed Kubernetes cluster
The mount process depends on the file storage protocol: NFSv4 or CIFS SMBv3.
NFSv4
CIFS SMBv3
1. Create PersistentVolume
-
Create a yaml file with a manifest for the PersistentVolume object:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv_name
spec:
  storageClassName: storageclass_name
  capacity:
    storage: <storage_size>
  accessModes:
    - ReadWriteMany
  nfs:
    path: /shares/share-<mountpoint_uuid>
    server: <filestorage_ip_address>
Specify:
<storage_size> — the PersistentVolume size (the file storage size), for example 100Gi. The limit is from 50 GB to 50 TB;
<mountpoint_uuid> — the ID of the mount point. You can find it in the control panel: from the top menu, click Products → File storage → storage page → the Connection block → the GNU/Linux tab;
<filestorage_ip_address> — the IP address of the file storage. You can find it in the control panel: from the top menu, click Products → File storage → storage page → the Settings tab → the IP field.
-
Apply the manifest:
kubectl apply -f <persistent_volume.yaml>
Specify <persistent_volume.yaml> — the name of the yaml file with the manifest to create the PersistentVolume.
-
Make sure that a PersistentVolume object is created:
kubectl get pv
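A PersistentVolume that is not yet claimed reports the Available phase. As an optional extra check (a sketch, assuming pv_name is the name you set in metadata.name), you can query the phase directly:
kubectl get pv pv_name -o jsonpath='{.status.phase}'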
2. Create a PersistentVolumeClaim
-
Create a yaml file with a manifest for the PersistentVolumeClaim object:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc_name
spec:
  storageClassName: storageclass_name
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: <storage_size>
Specify <storage_size> — the PersistentVolume (file storage) size, for example 100Gi. The limit is from 50 GB to 50 TB.
-
Apply the manifest:
kubectl apply -f <persistent_volume_claim.yaml>
Specify <persistent_volume_claim.yaml> — the name of the yaml file with the manifest to create the PersistentVolumeClaim.
-
Ensure that a PersistentVolumeClaim object is created:
kubectl get pvc
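The claim binds to the PersistentVolume with the matching storageClassName. As an optional extra check (a sketch, assuming pvc_name is the name from metadata.name in your manifest), confirm that the claim reaches the Bound phase:
kubectl get pvc pvc_name -o jsonpath='{.status.phase}'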
3. Add storage to a container
-
Create a yaml file with a manifest for the Deployment object:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: filestorage_deployment_name
  labels:
    project: filestorage_deployment_name
spec:
  replicas: 2
  selector:
    matchLabels:
      project: filestorage_project_name
  template:
    metadata:
      labels:
        project: filestorage_project_name
    spec:
      volumes:
        - name: volume_name
          persistentVolumeClaim:
            claimName: pvc_name
      containers:
        - name: container-nginx
          image: nginx:stable-alpine
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - name: volume_name
              mountPath: <mount_path>
Specify <mount_path> — the path to the folder inside the container to which the file storage will be mounted.
-
Apply the manifest:
kubectl apply -f <deployment.yaml>
Specify <deployment.yaml> — the name of the yaml file with the manifest to create the Deployment.
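Once the Deployment pods are running, you can optionally check that the file storage is mounted inside a container. The commands below are a sketch: they assume the pods carry the project: filestorage_project_name label from the manifest above and that <mount_path> is the path you specified in volumeMounts.
kubectl get pods --selector="project=filestorage_project_name"
kubectl exec <pod_name> -- df -h <mount_path>
Replace <pod_name> with the name of any pod from the first command's output.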
- Install the CSI driver for Samba.
- Create a secret to store the login and password.
- Create StorageClass.
- Create PersistentVolumeClaim.
- Add file storage to the container.
1. Install the CSI driver for Samba
-
Download the CSI driver from GitHub Kubernetes CSI.
-
Install the latest driver version:
helm repo add csi-driver-smb https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/charts
helm install csi-driver-smb csi-driver-smb/csi-driver-smb --namespace kube-system --version v1.4.0
-
Check that the pods are installed and running:
kubectl --namespace=kube-system get pods --selector="app=csi-smb-controller"
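As an optional extra check, you can make sure that the SMB CSI driver is registered in the cluster. This assumes the chart registers a CSIDriver object named smb.csi.k8s.io, which matches the provisioner used in the StorageClass below:
kubectl get csidriver smb.csi.k8s.io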
2. Create a secret
The file storage does not support differentiation of access rights. Access over the CIFS SMBv3 protocol is performed under the guest user.
Create a secret to store the login and password (guest/guest by default):
kubectl create secret generic smbcreds --from-literal username=guest --from-literal password=guest
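To confirm that the secret was created without printing the credentials, you can describe it; kubectl describe shows only the key names and value sizes:
kubectl describe secret smbcreds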
3. Create StorageClass
-
Create a yaml file with a manifest for the StorageClass object:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storageclass_name
provisioner: smb.csi.k8s.io
parameters:
  source: "//<filestorage_ip_address>/share-<mountpoint_uuid>"
  csi.storage.k8s.io/provisioner-secret-name: "smbcreds"
  csi.storage.k8s.io/provisioner-secret-namespace: "default"
  csi.storage.k8s.io/node-stage-secret-name: "smbcreds"
  csi.storage.k8s.io/node-stage-secret-namespace: "default"
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - dir_mode=0777
  - file_mode=0777
Specify:
<mountpoint_uuid> — the ID of the mount point. You can find it in the control panel: from the top menu, click Products → File storage → storage page → the Connection block → the GNU/Linux tab;
<filestorage_ip_address> — the IP address of the file storage. You can find it in the control panel: from the top menu, click Products → File storage → storage page → the Settings tab → the IP field.
-
Apply the manifest:
kubectl apply -f <storage_class.yaml>
Specify <storage_class.yaml> — the name of the yaml file with the manifest to create the StorageClass.
-
Make sure that the StorageClass object is created:
kubectl get storageclass
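As an optional extra check (a sketch, assuming storageclass_name is the name from metadata.name), you can confirm that the StorageClass uses the smb.csi.k8s.io provisioner:
kubectl get storageclass storageclass_name -o jsonpath='{.provisioner}'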
4. Create a PersistentVolumeClaim
-
Create a yaml file with a manifest for the PersistentVolumeClaim object:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc_name
  annotations:
    volume.beta.kubernetes.io/storage-class: storageclass_name
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: <storage_size>
Specify <storage_size> — the PersistentVolume (file storage) size, for example 100Gi. The limit is from 50 GB to 50 TB. The volume.beta.kubernetes.io/storage-class annotation must reference the name of the StorageClass created in the previous step (storageclass_name here).
-
Apply the manifest:
kubectl apply -f <persistent_volume_claim.yaml>
Specify <persistent_volume_claim.yaml> — the name of the yaml file with the manifest to create the PersistentVolumeClaim.
-
Ensure that the PersistentVolumeClaim object is created:
kubectl get pvc
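Unlike the NFSv4 scenario, the PersistentVolume here is provisioned dynamically by the CSI driver, so you do not create it manually. As an optional extra check (pvc_name is the name from metadata.name), make sure a PersistentVolume has appeared and the claim is bound:
kubectl get pv
kubectl get pvc pvc_name -o jsonpath='{.status.phase}'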
5. Add storage to a container
-
Create a yaml file with a manifest for the Deployment object:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: filestorage_deployment_name
  labels:
    project: filestorage_deployment_name
spec:
  replicas: 2
  selector:
    matchLabels:
      project: filestorage_project_name
  template:
    metadata:
      labels:
        project: filestorage_project_name
    spec:
      volumes:
        - name: volume_name
          persistentVolumeClaim:
            claimName: pvc_name
      containers:
        - name: container-nginx
          image: nginx:stable-alpine
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - name: volume_name
              mountPath: <mount_path>
Specify <mount_path> — the path to the folder inside the container to which the file storage will be mounted.
-
Apply the manifest:
kubectl apply -f <deployment.yaml>
Specify <deployment.yaml> — the name of the yaml file with the manifest to create the Deployment.
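As a final optional check, you can verify that the storage is mounted inside a container. The commands below are a sketch: they assume the pods carry the project: filestorage_project_name label from the manifest above and that <mount_path> is the path you specified in volumeMounts.
kubectl get pods --selector="project=filestorage_project_name"
kubectl exec <pod_name> -- df -h <mount_path>
Replace <pod_name> with the name of any pod from the first command's output.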