Connect file storage to a Managed Kubernetes cluster in another pool
If you plan to use file storage to store backups, we recommend creating the file storage and the Managed Kubernetes cluster in pools from different availability zones or regions to improve fault tolerance. If the file storage and the Managed Kubernetes cluster are in different pools, you must configure private network connectivity at the L3 level via a global router to connect the storage.
- Create a global router.
- Connect the network and subnet for the Managed Kubernetes cluster to the global router.
- Connect the network and subnet for the file storage to the global router.
- Assign an IP address to the Managed Kubernetes cluster node.
- Write routes on the Managed Kubernetes cluster node. You can only add routes through technical support.
- Create the file storage.
- Mount the file storage to the Managed Kubernetes cluster.
See the example of connecting file storage to a Managed Kubernetes cluster in another pool.
If you need file storage for additional disk space, we recommend creating the storage in the same pool as the Managed Kubernetes cluster. For details, see the Connect file storage to a Managed Kubernetes cluster in one pool instructions.
Example of connecting file storage to a Managed Kubernetes cluster
For example, you need to connect file storage in pool ru-2 to a Managed Kubernetes cluster in pool ru-8.
- Create a global router.
- Connect two private networks to the global router: `192.168.0.0/29` with the gateway `192.168.0.1` for the ru-8 pool and `172.16.0.0/29` with the gateway `172.16.0.1` for the ru-2 pool.
- Assign an address from the `192.168.0.0/29` subnet to a Managed Kubernetes cluster node, for example `192.168.0.2`.
- Write a route on the Managed Kubernetes cluster node in the ru-8 pool: to the `172.16.0.0/29` subnet via the `192.168.0.1` gateway.
- Create a file storage on the `172.16.0.0/29` subnet.
- Mount the file storage to the Managed Kubernetes cluster.
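The addressing plan in this example can be checked with Python's standard `ipaddress` module. The sketch below uses the example values from this section and is only an illustration of how the addresses relate, not part of the setup:

```python
import ipaddress

# Subnets connected to the global router in this example
ru8_subnet = ipaddress.ip_network("192.168.0.0/29")  # ru-8 pool
ru2_subnet = ipaddress.ip_network("172.16.0.0/29")   # ru-2 pool

ru8_gateway = ipaddress.ip_address("192.168.0.1")    # gateway in the ru-8 subnet
node_address = ipaddress.ip_address("192.168.0.2")   # Managed Kubernetes node

# The node address and the gateway must belong to the ru-8 subnet
assert node_address in ru8_subnet
assert ru8_gateway in ru8_subnet

# The two subnets must not overlap, or the router cannot route between them
assert not ru8_subnet.overlaps(ru2_subnet)

# The route written on the node: to the ru-2 subnet via the ru-8 gateway
print(f"route: {ru2_subnet} via {ru8_gateway}")  # route: 172.16.0.0/29 via 192.168.0.1
```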
Create a global router
- In Control Panel, go to Network Services → Servercore Global Router.
- Click Create Router. Each account has a limit of five global routers.
- Enter the name of the router.
- Click Create.
- If the router is created with the ERROR status or gets stuck in one of the statuses, create a support ticket.
Connect the network and subnet to the router for the Managed Kubernetes cluster
If the cloud platform network is connected to a global router, you can only manage it on the global router page.
Create the global router network and subnet in the project and cloud platform pool where the Managed Kubernetes cluster is created.
You can connect a new network to the router or an existing network if it is not already connected to any of the account's global routers.
- Connect a new network
- Connect an existing network
- In Control Panel, go to Network Services → Servercore Global Router.
- Open the router page → Networks tab.
- Click Create Network.
- Enter a network name — it will only be used in the control panel.
- Select the Cloud Platform service.
- Select the pool in which the Managed Kubernetes cluster is created.
- Select the project in which the Managed Kubernetes cluster is created.
- Enter a subnet name — it will only be used in the control panel.
- Enter the CIDR — the IP address and subnet mask. The subnet must meet the following conditions:
  - belong to one of the RFC 1918 private address ranges: `10.0.0.0/8`, `172.16.0.0/12`, or `192.168.0.0/16`;
  - be at least /29 in size, as three addresses will be occupied by Servercore network equipment;
  - not overlap with other subnets added to this router;
  - if Managed Kubernetes nodes will be included in the global router network, not overlap with the `10.250.0.0/16`, `10.10.0.0/16`, and `10.96.0.0/12` ranges. These subnets are used for internal addressing in Managed Kubernetes, and reusing them can cause conflicts in the global router network.
- Enter the gateway IP or leave the first address of the subnet, which is assigned by default. Do not assign this address to your devices to avoid disrupting your network.
- Enter the service IPs or leave the last addresses of the subnet, which are assigned by default. Do not assign these addresses to your devices to avoid disrupting your network.
- Click Create Network.
- Optional: check the network topology on the global router. In Control Panel, go to Network Services → Servercore Global Router, open the page of the desired router, and click Network Map.
- Check that the network has not yet been added to any of the account's global routers: in Control Panel, under Cloud Platform → Network → Private Networks tab, the network does not have the Global Router tag.
- Verify that the subnet meets the following conditions:
  - belongs to one of the RFC 1918 private address ranges: `10.0.0.0/8`, `172.16.0.0/12`, or `192.168.0.0/16`;
  - is at least /29 in size, as three addresses will be occupied by Servercore network equipment;
  - does not overlap with other subnets added to this router;
  - if Managed Kubernetes nodes will be included in the global router network, does not overlap with the `10.250.0.0/16`, `10.10.0.0/16`, and `10.96.0.0/12` ranges. These subnets are used for internal addressing in Managed Kubernetes, and reusing them can cause conflicts in the global router network.
- In Control Panel, go to Cloud Platform → Network.
- Open the Private Networks tab.
- From the menu ( ) of the network, select Connect to Global Router.
- Select the global router.
- For each of the network's subnets, enter the IP address to be assigned to the router, or leave the first available address of the subnet, which is assigned by default. Do not assign this address to your devices to avoid disrupting your network. The last two free subnet addresses will be reserved as service addresses.
- Click Connect. Do not close the window until you see the message that the network is connected.
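The subnet conditions above can be sanity-checked before you connect a network. Below is a minimal sketch using Python's standard `ipaddress` module; the `check_subnet` helper is illustrative, not part of any Servercore tooling:

```python
import ipaddress

# RFC 1918 private ranges a router subnet must belong to
RFC1918 = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
# Ranges used for internal addressing in Managed Kubernetes; they must stay free
K8S_INTERNAL = [ipaddress.ip_network(n) for n in ("10.250.0.0/16", "10.10.0.0/16", "10.96.0.0/12")]

def check_subnet(cidr, existing=(), k8s_nodes=False):
    """Return a list of condition violations for a candidate router subnet."""
    net = ipaddress.ip_network(cidr)
    errors = []
    if not any(net.subnet_of(r) for r in RFC1918):
        errors.append("not in an RFC 1918 private range")
    if net.prefixlen > 29:
        errors.append("smaller than /29")
    if any(net.overlaps(ipaddress.ip_network(e)) for e in existing):
        errors.append("overlaps a subnet already added to this router")
    if k8s_nodes and any(net.overlaps(r) for r in K8S_INTERNAL):
        errors.append("overlaps a Managed Kubernetes internal range")
    return errors

print(check_subnet("192.168.0.0/29", k8s_nodes=True))  # [] - all conditions met
print(check_subnet("10.96.1.0/24", k8s_nodes=True))    # ['overlaps a Managed Kubernetes internal range']
```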
Connect a network and subnet to the router for file storage
If the cloud platform network is connected to a global router, you can only manage it on the global router page.
Create the global router network and subnet in the project and cloud platform pool where the file storage will be created.
You can connect a new network to the router or an existing network if it is not already connected to any of the account's global routers.
- Connect a new network
- Connect an existing network
- In Control Panel, go to Network Services → Servercore Global Router.
- Open the router page → Networks tab.
- Click Create Network.
- Enter a network name — it will only be used in the control panel.
- Select the Cloud Platform service.
- Select the pool where the file storage will be created.
- Select the project where the file storage will be created.
- Enter a subnet name — it will only be used in the control panel.
- Enter the CIDR — the IP address and subnet mask. The subnet must meet the following conditions:
  - belong to one of the RFC 1918 private address ranges: `10.0.0.0/8`, `172.16.0.0/12`, or `192.168.0.0/16`;
  - be at least /29 in size, as three addresses will be occupied by Servercore network equipment;
  - not overlap with other subnets added to this router;
  - if Managed Kubernetes nodes will be included in the global router network, not overlap with the `10.250.0.0/16`, `10.10.0.0/16`, and `10.96.0.0/12` ranges. These subnets are used for internal addressing in Managed Kubernetes, and reusing them can cause conflicts in the global router network.
- Enter the gateway IP or leave the first address of the subnet, which is assigned by default. Do not assign this address to your devices to avoid disrupting your network.
- Enter the service IPs or leave the last addresses of the subnet, which are assigned by default. Do not assign these addresses to your devices to avoid disrupting your network.
- Click Create Network.
- Optional: check the network topology on the global router. In Control Panel, go to Network Services → Servercore Global Router, open the page of the desired router, and click Network Map.
- Check that the network has not yet been added to any of the account's global routers: in Control Panel, under Cloud Platform → Network → Private Networks tab, the network does not have the Global Router tag.
- Verify that the subnet meets the following conditions:
  - belongs to one of the RFC 1918 private address ranges: `10.0.0.0/8`, `172.16.0.0/12`, or `192.168.0.0/16`;
  - is at least /29 in size, as three addresses will be occupied by Servercore network equipment;
  - does not overlap with other subnets added to this router;
  - if Managed Kubernetes nodes will be included in the global router network, does not overlap with the `10.250.0.0/16`, `10.10.0.0/16`, and `10.96.0.0/12` ranges. These subnets are used for internal addressing in Managed Kubernetes, and reusing them can cause conflicts in the global router network.
- In Control Panel, go to Cloud Platform → Network.
- Open the Private Networks tab.
- From the menu ( ) of the network, select Connect to Global Router.
- Select the global router.
- For each of the network's subnets, enter the gateway IP or leave the first available address of the subnet, which is assigned by default. Do not assign this address to your devices to avoid disrupting your network. The last two free subnet addresses will be reserved as service addresses.
- Click Connect. Do not close the window until you see the message that the network is connected.
Assign an IP address to a Managed Kubernetes cluster node
Configure a local port on the Managed Kubernetes cluster node that is included in the global router network. On the port, assign an IP address from the subnet you created on the global router for the corresponding pool.
- Add a Managed Kubernetes cluster node to the created global router subnet. If you don't already have a Managed Kubernetes cluster, create one and select the global router subnet when creating it.
- Apply the changes depending on the Apply Changes parameter in the Port Setup block. You can view the parameter value in Control Panel under Cloud Platform → Servers → the cloud server page:
  - When rebooting the server — reboot the node programmatically, or manually make the changes in the network configuration file on the node;
  - Manually in the network configuration file on the server — manually make the changes in the network configuration file on the node.
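If the changes must be applied manually, the port configuration might look like the sketch below for an Ubuntu node managed by netplan. The file name, the interface name `eth1`, and the address are assumptions based on the example in this instruction; adapt them to your node:

```yaml
# /etc/netplan/60-global-router.yaml (hypothetical file name)
network:
  version: 2
  ethernets:
    eth1:                  # port included in the global router network
      addresses:
        - 192.168.0.2/29   # address from the global router subnet
```

Apply the configuration with `sudo netplan apply`. Note that static routes to the other router subnets (for example, to `172.16.0.0/29` via `192.168.0.1`) are added through technical support, as described in the next step.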
Write routes on Managed Kubernetes cluster node
If you have created a new Managed Kubernetes cluster and added a node to an existing global router network, no routes need to be written. In this case, the node will be immediately available to other devices on the network.
If you are adding an existing node to the global router network, the node must have static routes to all subnets it needs connectivity with. To add the routes, create a support ticket.
Create file storage
- In Control Panel, go to Cloud Platform → File Storage.
- Click Create Storage.
- Enter a new storage name or keep the automatically generated name.
- Select the pool where the storage will be located.
- Select the subnet of the Servercore Global Router private network that you connected to the router for file storage.
- Select the file storage type. The types differ in read/write speeds and bandwidth:
  - HDD Basic;
  - SSD Universal;
  - SSD Fast.

  Once created, the storage type cannot be changed.
- Specify the storage size: from 50 GB to 50 TB. Once created, the storage size can be increased but not decreased.
- Select a protocol:
  - NFSv4 — for connecting the storage to servers running Linux and other Unix systems;
  - CIFS SMBv3 — for connecting the storage to Windows servers.

  Once created, the protocol cannot be changed.
- Check the cost of the file storage.
- Click Create.
Mount file storage to a Managed Kubernetes cluster
The mounting process depends on the file storage protocol: mount storage using NFSv4 protocol or CIFS SMBv3.
Mount storage using NFSv4 protocol
Create PersistentVolume
- Create a yaml file `filestorage_persistent_volume.yaml` with a manifest for PersistentVolume:

  ```yaml
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv-name
  spec:
    storageClassName: storageclass-name
    capacity:
      storage: <storage_size>
    accessModes:
      - ReadWriteMany
    nfs:
      path: /shares/share-<mountpoint_uuid>
      server: <filestorage_ip_address>
  ```

  Specify:

  - `<storage_size>` — the file storage size (the PersistentVolume size), for example `100Gi`. The limit is from 50 GB to 50 TB;
  - `<mountpoint_uuid>` — the mount point ID. You can find it in Control Panel under Cloud Platform → File Storage → the storage page → Connectivity block → GNU/Linux tab;
  - `<filestorage_ip_address>` — the IP address of the file storage. You can find it in Control Panel under Cloud Platform → File Storage → the storage page → Settings tab → the IP field.

- Create the PersistentVolume — apply the manifest:

  ```bash
  kubectl apply -f filestorage_persistent_volume.yaml
  ```

- Verify that the PersistentVolume has been created:

  ```bash
  kubectl get pv
  ```
Create PersistentVolumeClaim
- Create a yaml file `filestorage_persistent_volume_claim.yaml` with a manifest for PersistentVolumeClaim:

  ```yaml
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: pvc-name
  spec:
    storageClassName: storageclass-name
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: <storage_size>
  ```

  Specify `<storage_size>` — the file storage size (the PersistentVolume size), for example `100Gi`. The limit is from 50 GB to 50 TB.

- Create the PersistentVolumeClaim — apply the manifest:

  ```bash
  kubectl apply -f filestorage_persistent_volume_claim.yaml
  ```

- Check that the PersistentVolumeClaim has been created:

  ```bash
  kubectl get pvc
  ```
Add storage to container
- Create a yaml file `deployment.yaml` with a manifest for Deployment:

  ```yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: filestorage-deployment-name
    labels:
      project: filestorage-project-name
  spec:
    replicas: 2
    selector:
      matchLabels:
        project: filestorage-project-name
    template:
      metadata:
        labels:
          project: filestorage-project-name
      spec:
        volumes:
          - name: volume-name
            persistentVolumeClaim:
              claimName: pvc-name
        containers:
          - name: container-nginx
            image: nginx:stable-alpine
            ports:
              - containerPort: 80
                name: "http-server"
            volumeMounts:
              - name: volume-name
                mountPath: <mount_path>
  ```

  Specify `<mount_path>` — the path to the folder inside the container to which the file storage will be mounted.

- Create the Deployment — apply the manifest:

  ```bash
  kubectl apply -f deployment.yaml
  ```
Mount storage using CIFS SMBv3 protocol
- Install the CSI driver for Samba.
- Create a secret to store login and password.
- Create StorageClass.
- Create PersistentVolumeClaim.
- Add file storage to container.
Install CSI driver for Samba
- Download the CSI driver from GitHub Kubernetes CSI.
- Install the driver:

  ```bash
  helm repo add csi-driver-smb https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/charts
  helm install csi-driver-smb csi-driver-smb/csi-driver-smb --namespace kube-system --version v1.4.0
  ```

- Check that the pods are installed and running:

  ```bash
  kubectl --namespace=kube-system get pods --selector="app=csi-smb-controller"
  ```
Create a secret
File storage does not support access rights differentiation. CIFS SMBv3 access is performed under the `guest` user.
Create a secret to store the login and password (the default is `guest`/`guest`):

```bash
kubectl create secret generic smbcreds --from-literal username=guest --from-literal password=guest
```
Create StorageClass
- Create a file `filestorage_storage_class.yaml` with a manifest for StorageClass:

  ```yaml
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: storageclass-name
  provisioner: smb.csi.k8s.io
  parameters:
    source: "//<filestorage_ip_address>/share-<mountpoint_uuid>"
    csi.storage.k8s.io/provisioner-secret-name: "smbcreds"
    csi.storage.k8s.io/provisioner-secret-namespace: "default"
    csi.storage.k8s.io/node-stage-secret-name: "smbcreds"
    csi.storage.k8s.io/node-stage-secret-namespace: "default"
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
  ```

  Specify:

  - `<mountpoint_uuid>` — the mount point ID. You can find it in Control Panel under Cloud Platform → File Storage → the storage page → Connectivity block → GNU/Linux tab;
  - `<filestorage_ip_address>` — the IP address of the file storage. You can find it in Control Panel under Cloud Platform → File Storage → the storage page → Settings tab → the IP field.

- Create the StorageClass — apply the manifest:

  ```bash
  kubectl apply -f filestorage_storage_class.yaml
  ```

- Verify that the StorageClass has been created:

  ```bash
  kubectl get storageclass
  ```
Create PersistentVolumeClaim
- Create a yaml file `filestorage_persistent_volume_claim.yaml` with a manifest for PersistentVolumeClaim:

  ```yaml
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: pvc-name
    annotations:
      volume.beta.kubernetes.io/storage-class: storageclass-name
  spec:
    accessModes: ["ReadWriteMany"]
    resources:
      requests:
        storage: <storage_size>
  ```

  Specify `<storage_size>` — the file storage size (the PersistentVolume size), for example `100Gi`. The limit is from 50 GB to 50 TB.

- Create the PersistentVolumeClaim — apply the manifest:

  ```bash
  kubectl apply -f filestorage_persistent_volume_claim.yaml
  ```

- Check that the PersistentVolumeClaim has been created:

  ```bash
  kubectl get pvc
  ```
Add storage to container
- Create a yaml file `deployment.yaml` with a manifest for Deployment:

  ```yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: filestorage-deployment-name
    labels:
      project: filestorage-project-name
  spec:
    replicas: 2
    selector:
      matchLabels:
        project: filestorage-project-name
    template:
      metadata:
        labels:
          project: filestorage-project-name
      spec:
        volumes:
          - name: volume-name
            persistentVolumeClaim:
              claimName: pvc-name
        containers:
          - name: container-nginx
            image: nginx:stable-alpine
            ports:
              - containerPort: 80
                name: "http-server"
            volumeMounts:
              - name: volume-name
                mountPath: <mount_path>
  ```

  Specify `<mount_path>` — the path to the folder inside the container to which the file storage will be mounted.

- Create the Deployment — apply the manifest:

  ```bash
  kubectl apply -f deployment.yaml
  ```