Managed Kubernetes: a quick start

You can work with a Managed Kubernetes cluster in the control panel, via the Managed Kubernetes API, or with Terraform.

  1. Create a cluster in the control panel.
  2. Connect to the cluster.
  3. Customize Ingress.

Create a cluster on a cloud server in the control panel

  1. Set up a cluster on a cloud server.
  2. Configure the node group.
  3. Set up automation.

1. Set up a cluster on a cloud server

  1. In the dashboard, on the top menu, click Products and select Managed Kubernetes.

  2. Click Create Cluster.

  3. Enter a name for the cluster. The name will appear in the names of the cluster objects: node groups, nodes, balancers, networks, and disks. For example, if the cluster name is kelsie, the name of the node group would be kelsie-node-gdc8q and the boot disk would be kelsie-node-gdc8q-volume.

  4. Select a region and a pool. Once the cluster is created, the pool cannot be changed.

  5. Select the Kubernetes version. After the cluster is created, you can upgrade the Kubernetes version.

  6. Select the type of cluster:

    • fault-tolerant — the Control Plane is placed on three master nodes that run on different hosts in different segments of the same pool. If one of the three master nodes is unavailable, the Control Plane continues to run;
    • basic — the Control Plane is hosted on a single master node that runs on a single host in a single pool segment. If the master node is unavailable, the Control Plane will not run.

    Once a cluster is created, the cluster type cannot be changed.

  7. Optional: to make the cluster available only on a private network and inaccessible from the Internet, check the Private kube API checkbox. By default, the cluster is created on a public network and is automatically assigned a public IP address for the kube API, accessible from the Internet. After the cluster is created, the type of access to the kube API cannot be changed.

  8. In the Network block, select a private subnet without Internet access to which all nodes in the cluster will be connected.

    To create a private subnet, select New private subnet in the Subnet for nodes field. The private network <cluster_name>-network, a private subnet, and the router <cluster_name>-router will be created automatically, where <cluster_name> is the cluster name. The CIDR is assigned automatically.

    If you already have a private subnet, select it in the Subnet for nodes field. The subnet must meet the following conditions (a quick way to check the overlap condition is sketched after this list):

    • subnet must be connected to the cloud router;
    • subnet must not overlap with the ranges 10.250.0.0/16, 10.10.0.0/16, and 10.96.0.0/12. These ranges participate in the internal addressing of Managed Kubernetes;
    • DHCP must be disabled on the subnet.
  9. Click Continue.
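
    If you are not sure whether an existing subnet overlaps the reserved ranges, you can check it locally. A minimal sketch, assuming python3 is installed; 192.168.0.0/24 is a placeholder for your subnet's CIDR:

    # prints the reserved ranges that overlap your subnet; replace 192.168.0.0/24 with your CIDR
    python3 -c 'import ipaddress; s = ipaddress.ip_network("192.168.0.0/24", strict=False); print([r for r in ("10.250.0.0/16", "10.10.0.0/16", "10.96.0.0/12") if s.overlaps(ipaddress.ip_network(r))])'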

2. Configure the node group

  1. In the Server Type field, select Cloud Server.

  2. Select the pool segment where all worker nodes in the group will be located. Once the cluster is created, the pool segment cannot be changed.

  3. Set the configuration of the worker nodes in the group:

    3.1 Click Select Configuration and select the configuration of the worker nodes in the group:

    • arbitrary — any resource ratio can be specified;
    • fixed with GPU — ready-made node configurations with GPUs and a fixed resource ratio.

    If the default configurations are not suitable, you can add a node group with a fixed cloud server configuration via the Managed Kubernetes API or Terraform after the cluster is created.

    3.2 If you selected an arbitrary configuration, specify the number of vCPUs and the amount of RAM, select the boot disk, and specify the disk size.

    3.3 If you chose a fixed configuration with GPU, select a ready-made configuration of nodes with GPUs, select the boot disk, and specify the disk size. To install GPU drivers yourself, turn off the GPU Drivers toggle. By default, the GPU Drivers toggle is enabled and the cluster uses pre-installed drivers.

    3.4 Click Save.
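
    Once such a cluster is running, you can check whether the pre-installed drivers have exposed the GPUs to Kubernetes. A minimal sketch, assuming the nodes advertise the standard nvidia.com/gpu resource:

    # lines mentioning nvidia.com/gpu under Capacity and Allocatable mean the GPUs are schedulable
    kubectl describe nodes | grep -i "nvidia.com/gpu"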

  4. Configure the number of worker nodes. For fault-tolerant operation of the system components, it is recommended to have at least two worker nodes in the cluster; the nodes can be in different groups:

    4.1 To have a fixed number of nodes in a node group, open the Fixed tab and specify the number of nodes.

    4.2 To use autoscaling in a node group, open the With autoscaling tab and set the minimum and maximum number of nodes in the group; the number of nodes will change only within this range. Autoscaling is not available for GPU node groups without drivers.

  5. Optional: To make a node group interruptible, check the Interruptible node group checkbox. Interruptible node groups are available only in pool segments ru-7a and ru-7b.

  6. Optional: To add node group labels, open the Advanced Settings — Labels, Taints, User data block. In the Labels field, click Add. Enter the label key and value. Click Add.

  7. Optional: To add node group taints, open the Advanced Settings — Labels, Taints, User data block. In the Taints field, click Add. Enter the taint key and value. Select an effect:

    • NoSchedule — new pods without a matching toleration will not be scheduled on the nodes; existing pods will continue to run;
    • PreferNoSchedule — the scheduler will try to avoid placing new pods on the nodes, but will do so if there are no other suitable nodes in the cluster;
    • NoExecute — running pods without a matching toleration will be evicted from the nodes.

    Click Add.
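
    The labels and taints configured here appear on the nodes as ordinary Kubernetes labels and taints. A minimal sketch of how to inspect them and of a pod that tolerates a hypothetical environment=production:NoSchedule taint (the key and value are examples only):

    # inspect labels and taints on the cluster nodes
    kubectl get nodes --show-labels
    kubectl describe nodes | grep -A 2 "Taints:"
    # run a pod with a toleration for the hypothetical taint
    kubectl run toleration-test --image=nginx \
      --overrides='{"spec":{"tolerations":[{"key":"environment","operator":"Equal","value":"production","effect":"NoSchedule"}]}}'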

  8. Optional: To add a script with user parameters for configuring the Managed Kubernetes cluster, open the Advanced Settings — Labels, Taints, User data block. In the User data field, paste the script. Examples of scripts and supported formats can be found in the User data instruction.
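
    As an illustration only, a minimal user data script in plain shell form, assuming shell scripts are among the supported formats and that the node image is Debian/Ubuntu based (see the User data instruction for the exact formats); the package and file names are arbitrary examples:

    #!/bin/bash
    # hypothetical example: install an extra package and leave a marker file on each node in the group
    apt-get update && apt-get install -y htop
    echo "configured by user data" > /etc/node-configured-by-user-data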

  9. Optional: To add an additional group of worker nodes to the cluster, click Add Node Group. You can create a cluster with groups of worker nodes in different segments of the same pool. This will increase fault tolerance and help maintain application availability if a failure occurs in one of the segments.

  10. In the Network block, select a private subnet without Internet access to which all nodes in the cluster will be connected.

    To create a private subnet, select New private subnet in the Subnet for nodes field. The private network <cluster_name>-network, a private subnet, and the router <cluster_name>-router will be created automatically, where <cluster_name> is the cluster name. The CIDR is assigned automatically.

    If you already have a private subnet, select it in the Subnet for nodes field. The subnet must meet the following conditions:

    • subnet must be connected to the cloud router;
    • subnet must not overlap with the ranges 10.250.0.0/16, 10.10.0.0/16, and 10.96.0.0/12. These ranges participate in the internal addressing of Managed Kubernetes;
    • DHCP must be disabled on the subnet.
  11. Click Continue.

3. Set up automation

  1. Optional: To enable auto-recovery of nodes, check the Recover nodes checkbox. If the cluster has only one worker node, auto-recovery is not available.

  2. Optional: To enable auto-update of patch versions, check the Install patch versions checkbox. If the cluster has only one worker node, auto-update of Kubernetes patch versions is not available.

  3. Select the cluster maintenance window — the time at which automatic cluster maintenance actions will occur.

  4. Click Create. It takes a few minutes to create the cluster, during which time the cluster will be in the CREATING status. The cluster will be ready for operation when it enters the ACTIVE status.

Connect to the cluster

To get started with the cluster, you need to configure kubectl.

For your information

We recommend that you perform all actions with nodes, balancers and disks in the cluster only through kubectl.

After you update the certificates for system components, you must reconnect to the cluster.

  1. Install the Kubernetes kubectl console client following the official instructions.

  2. In the control panel, go to Cloud Platform → Kubernetes.

  3. Open the cluster page → Settings tab.

  4. If you use a private kube API, make sure that it is accessible from your network. The IP address is specified in the Kube API field.
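
    A minimal connectivity check, assuming the kube API listens on the standard port 6443 and <kube_api_ip> is the address from the Kube API field; any HTTP response, including 401 or 403, means the endpoint is reachable:

    curl -k https://<kube_api_ip>:6443/version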

  5. Click Download kubeconfig. The kubeconfig file download is not available if the cluster status is PENDING_CREATE, PENDING_ROTATE_CERTS, PENDING_DELETE, or ERROR.

  6. Export the path to the kubeconfig file to the KUBECONFIG environment variable:

    export KUBECONFIG=<path>

    Replace <path> with the path to the downloaded kubeconfig file <cluster_name>.yaml.

  7. Check if the configuration is correct — access the cluster via kubectl:

    kubectl get nodes

    Nodes must be in Ready status.
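
    As an additional check, you can make sure the system components have started; a minimal sketch:

    kubectl get nodes -o wide
    kubectl get pods --namespace kube-system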

Customize Ingress

Create an Ingress and an Ingress Controller to handle inbound traffic to the cluster.
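
As a minimal sketch, assuming an Ingress Controller with the class name nginx (for example, ingress-nginx) is already installed in the cluster and that a Service named my-app listens on port 80 — both names are hypothetical:

    # route traffic for my-app.example.com to the hypothetical Service my-app on port 80
    kubectl create ingress my-app --class=nginx --rule="my-app.example.com/*=my-app:80"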