Connect file storage to a Managed Kubernetes cluster in another pool

If you plan to use file storage to store backups, we recommend creating the storage and the Managed Kubernetes cluster in pools from different availability zones or regions to improve fault tolerance. If the file storage and the Managed Kubernetes cluster are in different pools, you must configure private network connectivity at the L3 level via a global router to connect the storage.

  1. Create a global router.
  2. Connect the network and subnet for the Managed Kubernetes cluster to the global router.
  3. Connect the network and subnet for the file storage to the global router.
  4. Assign an IP address to the Managed Kubernetes cluster node.
  5. Write routes on the Managed Kubernetes cluster node. Routes can only be added through technical support.
  6. Create the file storage.
  7. Mount the file storage to the Managed Kubernetes cluster.

See an example of connecting file storage to a Managed Kubernetes cluster in another pool.

If you need file storage for extra disk space, we recommend creating the storage in the same pool as the Managed Kubernetes cluster. For details, see the Connect file storage to a Managed Kubernetes cluster in one pool instructions.

Example of connecting file storage to a Managed Kubernetes cluster in another pool

For example, you need to connect file storage in pool ru-2 to a Managed Kubernetes cluster in pool ru-8.

  1. Create a global router.
  2. Connect two private networks to the global router — 192.168.0.0/29 with a gateway of 192.168.0.1 for the ru-8 pool and 172.16.0.0/29 with a gateway of 172.16.0.1 for the ru-2 pool.
  3. Assign an address from the 192.168.0.0/29 subnet to a Managed Kubernetes cluster node, such as 192.168.0.2.
  4. Write a route on the Managed Kubernetes cluster node in the ru-8 pool — to the 172.16.0.0/29 subnet via the 192.168.0.1 gateway.
  5. Create a file storage on the 172.16.0.0/29 subnet.
  6. Mount the file storage to the Managed Kubernetes cluster.
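A quick way to see the usable addresses in these /29 subnets is Python's standard ipaddress module. A minimal sketch using the 192.168.0.0/29 subnet from this example:

```python
# Quick check of the example addressing with Python's standard ipaddress module
import ipaddress

net = ipaddress.ip_network("192.168.0.0/29")
hosts = list(net.hosts())  # usable addresses (network and broadcast excluded)

print(hosts[0])    # 192.168.0.1, the gateway in this example
print(hosts[1])    # 192.168.0.2, assigned to the cluster node
print(len(hosts))  # 6
```

A /29 leaves six usable addresses, which is why it is the minimum subnet size given the addresses reserved by Servercore network equipment.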

Create a global router

  1. In Control Panel, go to Network Services → Servercore Global Router.
  2. Click Create Router. Each account has a limit of five global routers.
  3. Enter the name of the router.
  4. Click Create.
  5. If the router was created with the ERROR status or is stuck in one of the statuses, create a ticket.

Connect the network and subnet to the router for the Managed Kubernetes cluster

For your information

If the cloud platform network is connected to a global router, you can only manage it on the global router page.

You need to create the global router network and subnet in the project and cloud platform pool where the Managed Kubernetes cluster is created.

You can connect a new network to the router or an existing network if it is not already connected to any of the account's global routers.

  1. In Control Panel, go to Network Services → Servercore Global Router.

  2. Open the router page → Networks tab.

  3. Click Create Network.

  4. Enter a network name. It is only used in the control panel.

  5. Select the Cloud Platform service.

  6. Select the pool in which the Managed Kubernetes cluster is created.

  7. Select the project in which the Managed Kubernetes cluster is created.

  8. Enter the subnet name — this will only be used in the control panel.

  9. Enter the CIDR — the IP address and subnet mask. The subnet must meet the following conditions:

    • belong to an RFC 1918 private address range: 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16;
    • have a size of at least /29, as three addresses will be occupied by Servercore network equipment;
    • not overlap with other subnets added to this router;
    • if Managed Kubernetes nodes will be included in the global router network, not overlap with the 10.250.0.0/16, 10.10.0.0/16, and 10.96.0.0/12 ranges. These ranges are used for internal addressing in Managed Kubernetes, and using them can cause conflicts in the global router network.
  10. Enter the gateway IP or leave the first address from the subnet assigned by default. Do not assign this address to your devices to avoid disrupting your network.

  11. Enter service IPs or leave the last addresses from the subnet assigned by default. Do not assign these addresses to your devices to avoid disrupting your network.

  12. Click Create Network.

  13. Optional: check the network topology on the global router. In Control Panel, go to Network Services → Servercore Global Router. Open the page of the desired router and click Network Map.
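The subnet conditions above can be sanity-checked before creating the network. A minimal sketch using Python's standard ipaddress module; the check_subnet helper and its return strings are illustrative, not part of the control panel, and the sketch applies the Managed Kubernetes range check unconditionally:

```python
# Sketch: pre-validate a candidate subnet against the global router
# conditions listed above (RFC 1918, minimum /29, no overlaps).
import ipaddress

RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
# Ranges used by Managed Kubernetes internal addressing (from the list above)
K8S_INTERNAL = [ipaddress.ip_network(n)
                for n in ("10.250.0.0/16", "10.10.0.0/16", "10.96.0.0/12")]

def check_subnet(cidr, existing=()):
    net = ipaddress.ip_network(cidr)
    if not any(net.subnet_of(r) for r in RFC1918):
        return "not in an RFC 1918 range"
    if net.prefixlen > 29:
        return "smaller than /29"
    if any(net.overlaps(ipaddress.ip_network(e)) for e in existing):
        return "overlaps an existing router subnet"
    if any(net.overlaps(r) for r in K8S_INTERNAL):
        return "overlaps Managed Kubernetes internal ranges"
    return "ok"

print(check_subnet("192.168.0.0/29"))  # ok
print(check_subnet("10.96.1.0/24"))    # overlaps Managed Kubernetes internal ranges
```

Passing the subnets already added to the router via the existing argument catches overlaps before the control panel rejects them.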

Connect a network and subnet to the router for file storage

For your information

If the cloud platform network is connected to a global router, you can only manage it on the global router page.

You need to create the global router network and subnet in the project and cloud platform pool where the file storage will be created.

You can connect a new network to the router or an existing network if it is not already connected to any of the account's global routers.

  1. In Control Panel, go to Network Services → Servercore Global Router.

  2. Open the router page → Networks tab.

  3. Click Create Network.

  4. Enter a network name. It is only used in the control panel.

  5. Select the Cloud Platform service.

  6. Select the pool where the file storage will be created.

  7. Select the project where the file storage will be created.

  8. Enter the subnet name — this will only be used in the control panel.

  9. Enter the CIDR — the IP address and subnet mask. The subnet must meet the following conditions:

    • belong to an RFC 1918 private address range: 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16;
    • have a size of at least /29, as three addresses will be occupied by Servercore network equipment;
    • not overlap with other subnets added to this router;
    • if Managed Kubernetes nodes will be included in the global router network, not overlap with the 10.250.0.0/16, 10.10.0.0/16, and 10.96.0.0/12 ranges. These ranges are used for internal addressing in Managed Kubernetes, and using them can cause conflicts in the global router network.
  10. Enter the gateway IP or leave the first address from the subnet assigned by default. Do not assign this address to your devices to avoid disrupting your network.

  11. Enter service IPs or leave the last addresses from the subnet assigned by default. Do not assign these addresses to your devices to avoid disrupting your network.

  12. Click Create Network.

  13. Optional: check the network topology on the global router. In Control Panel, go to Network Services → Servercore Global Router. Open the page of the desired router and click Network Map.

Assign an IP address to a Managed Kubernetes cluster node

Configure a local port on the Managed Kubernetes cluster node that is included in the global router network. On the port, assign an IP address from the subnet you created on the global router for the corresponding pool.

  1. Add a Managed Kubernetes cluster node to the created global router subnet. If you don't already have a Managed Kubernetes cluster, create one and select the global router subnet as the cluster subnet.

  2. Apply the changes depending on the Apply Changes parameter in the Port Setup block. The value of the parameter can be viewed in Control Panel under Cloud Platform → Servers → the cloud server page:

    • When rebooting the server — programmatically reboot the node or manually edit the network configuration file on the node;
    • Manually in the network configuration file on the server — manually edit the network configuration file on the node.
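For the manual option, the change usually amounts to assigning the port's IP address in the node's network configuration file. A hypothetical sketch, assuming an Ubuntu node managed by netplan and that the global router port appears as eth1 (both are assumptions); the address comes from the worked example above:

```yaml
# Hypothetical /etc/netplan/60-global-router.yaml; assumes an Ubuntu node
# using netplan and that eth1 is the port on the global router subnet
network:
  version: 2
  ethernets:
    eth1:
      addresses:
        - 192.168.0.2/29   # node address from the worked example
```

Apply the configuration with `sudo netplan apply`. Static routes to the other router subnets are still added through technical support, as described in the routes section.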

Write routes on the Managed Kubernetes cluster node

If you have created a new Managed Kubernetes cluster and added a node to an existing global router network, no routes need to be written. In this case, the node will be immediately available to other devices on the network.

If you are adding an existing node to the global router network, it must have static routes to all subnets with which you want connectivity. To do so, create a ticket.

Create file storage

  1. In Control Panel, go to Cloud Platform → File Storage.

  2. Click Create Storage.

  3. Enter a new storage name or leave the name that is automatically created.

  4. Select the pool where the storage will be located.

  5. Select the subnet of the Servercore Global Router private network that you connected to the router for file storage.

  6. Select the file storage type. Storage types differ in read/write speed and bandwidth:

    • HDD Basic;

    • SSD Universal;

    • SSD Fast.

      Once created, the storage type cannot be changed.

  7. Specify the storage size: from 50 GB to 50 TB. Once the storage is created, you can increase its size, but you can't decrease it.

  8. Select a protocol:

    • NFSv4 — for connecting storage to servers running Linux and other Unix systems;

    • CIFS SMBv3 — for connecting the storage to Windows servers.

      Once created, the protocol cannot be changed.

  9. Review the cost of the file storage.

  10. Click Create.

Mount file storage to a Managed Kubernetes cluster

The mounting process depends on the file storage protocol: mount storage using NFSv4 protocol or CIFS SMBv3.

Mount storage using NFSv4 protocol

  1. Create PersistentVolume.
  2. Create PersistentVolumeClaim.
  3. Add file storage to container.

Create PersistentVolume

  1. Connect to Managed Kubernetes cluster.

  2. Create a yaml file filestorage_persistent_volume.yaml with a manifest for PersistentVolume:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv_name
    spec:
      storageClassName: storageclass_name
      capacity:
        storage: <storage_size>
      accessModes:
        - ReadWriteMany
      nfs:
        path: /shares/share-<mountpoint_uuid>
        server: <filestorage_ip_address>

    Specify:

    • <storage_size> — the size of the file storage in GB (the PersistentVolume size), for example, 100Gi. The limit is from 50 GB to 50 TB;
    • <mountpoint_uuid> — the mount point ID. You can find it in Control Panel under Cloud Platform → File Storage → the storage page → Connectivity block → GNU/Linux tab;
    • <filestorage_ip_address> — the IP address of the file storage. You can find it in Control Panel under Cloud Platform → File Storage → the storage page → Settings tab → IP field.
  3. Create PersistentVolume — apply the manifest:

    kubectl apply -f filestorage_persistent_volume.yaml
  4. Verify that PersistentVolume has been created:

    kubectl get pv

Create PersistentVolumeClaim

  1. Create a yaml file filestorage_persistent_volume_claim.yaml with a manifest for PersistentVolumeClaim:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc_name
    spec:
      storageClassName: storageclass_name
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: <storage_size>

    Specify <storage_size> — the file storage size in GB (the PersistentVolume size), for example, 100Gi. The limit is from 50 GB to 50 TB.

  2. Create PersistentVolumeClaim — apply the manifest:

    kubectl apply -f filestorage_persistent_volume_claim.yaml
  3. Check that PersistentVolumeClaim has been created:

    kubectl get pvc

Add storage to container

  1. Create a yaml file deployment.yaml with the manifest for Deployment:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: filestorage_deployment_name
      labels:
        project: filestorage_deployment_name
    spec:
      replicas: 2
      selector:
        matchLabels:
          project: filestorage_project_name
      template:
        metadata:
          labels:
            project: filestorage_project_name
        spec:
          volumes:
            - name: volume_name
              persistentVolumeClaim:
                claimName: pvc_name
          containers:
            - name: container-nginx
              image: nginx:stable-alpine
              ports:
                - containerPort: 80
                  name: "http-server"
              volumeMounts:
                - name: volume_name
                  mountPath: <mount_path>

    Specify <mount_path> — the path to the folder inside the container to which the file storage will be mounted.

  2. Create Deployment — apply the manifest:

    kubectl apply -f deployment.yaml

Mount storage using CIFS SMBv3 protocol

  1. Install the CSI driver for Samba.
  2. Create a secret to store login and password.
  3. Create StorageClass.
  4. Create PersistentVolumeClaim.
  5. Add file storage to container.

Install CSI driver for Samba

  1. Connect to Managed Kubernetes cluster.

  2. Install Helm package manager.

  3. Download the CSI driver from GitHub Kubernetes CSI.

  4. Install the driver (for example, version v1.4.0):

    helm repo add csi-driver-smb https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/charts
    helm install csi-driver-smb csi-driver-smb/csi-driver-smb --namespace kube-system --version v1.4.0
  5. Check that the pods are installed and running:

    kubectl --namespace=kube-system get pods --selector="app=csi-smb-controller"

Create a secret

File storage does not support access rights differentiation. CIFS SMBv3 access is performed under the guest user.

Create a secret to store the login and password (default is guest/guest):

kubectl create secret generic smbcreds --from-literal username=guest --from-literal password=guest
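The same secret can also be declared as a manifest. A sketch equivalent to the command above, since kubectl create secret generic stores each value base64-encoded in an Opaque secret:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: smbcreds
type: Opaque
data:
  username: Z3Vlc3Q=   # base64 of "guest"
  password: Z3Vlc3Q=   # base64 of "guest"
```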

Create StorageClass

  1. Create a filestorage_storage_class.yaml file with a manifest for StorageClass:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: storageclass_name
    provisioner: smb.csi.k8s.io
    parameters:
      source: "//<filestorage_ip_address>/share-<mountpoint_uuid>"
      csi.storage.k8s.io/provisioner-secret-name: "smbcreds"
      csi.storage.k8s.io/provisioner-secret-namespace: "default"
      csi.storage.k8s.io/node-stage-secret-name: "smbcreds"
      csi.storage.k8s.io/node-stage-secret-namespace: "default"
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
    mountOptions:
      - dir_mode=0777
      - file_mode=0777

    Specify:

    • <mountpoint_uuid> — the mount point ID. You can find it in Control Panel under Cloud Platform → File Storage → the storage page → Connectivity block → GNU/Linux tab;
    • <filestorage_ip_address> — the IP address of the file storage. You can find it in Control Panel under Cloud Platform → File Storage → the storage page → Settings tab → IP field.
  2. Create StorageClass — apply the manifest:

    kubectl apply -f filestorage_storage_class.yaml
  3. Verify that the StorageClass has been created:

    kubectl get storageclass

Create PersistentVolumeClaim

  1. Create a yaml file filestorage_persistent_volume_claim.yaml with a manifest for PersistentVolumeClaim:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc_name
      annotations:
        volume.beta.kubernetes.io/storage-class: storageclass_name
    spec:
      accessModes: ["ReadWriteMany"]
      resources:
        requests:
          storage: <storage_size>

    Specify <storage_size> — the file storage size in GB (the PersistentVolume size), for example, 100Gi. The limit is from 50 GB to 50 TB.

  2. Create PersistentVolumeClaim — apply the manifest:

    kubectl apply -f filestorage_persistent_volume_claim.yaml
  3. Check that PersistentVolumeClaim has been created:

    kubectl get pvc

Add storage to container

  1. Create a yaml file deployment.yaml with the manifest for Deployment:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: filestorage_deployment_name
      labels:
        project: filestorage_deployment_name
    spec:
      replicas: 2
      selector:
        matchLabels:
          project: filestorage_project_name
      template:
        metadata:
          labels:
            project: filestorage_project_name
        spec:
          volumes:
            - name: volume_name
              persistentVolumeClaim:
                claimName: pvc_name
          containers:
            - name: container-nginx
              image: nginx:stable-alpine
              ports:
                - containerPort: 80
                  name: "http-server"
              volumeMounts:
                - name: volume_name
                  mountPath: <mount_path>

    Specify <mount_path> — the path to the folder inside the container to which the file storage will be mounted.

  2. Create Deployment — apply the manifest:

    kubectl apply -f deployment.yaml