Managed Kubernetes Product Description

Servercore's Managed Kubernetes simplifies deploying, scaling, and maintaining your Kubernetes container infrastructure. Servercore is responsible for version updates, security, and uptime of the Kubernetes control plane.

The product supports user types and roles, projects, and project limits and quotas.

Versions

Managed Kubernetes clusters support versions 1.30.x, 1.31.x, and 1.32.x.

How Managed Kubernetes works

Managed Kubernetes runs on Servercore's cloud platform and uses its resources for cluster nodes: cloud servers, load balancers, networks, disks.

containerd is used as the container runtime (CRI). Calico is used as the CNI in Managed Kubernetes clusters.

You can work with the Managed Kubernetes cluster in the control panel and through the Managed Kubernetes API.

Cluster composition

Managed Kubernetes clusters consist of master nodes (the control plane) and groups of worker nodes. A group of worker nodes must be in the same availability zone as the master nodes. For more information, see Working with node groups.

Types of cluster

Servercore provides two types of Managed Kubernetes clusters: fault-tolerant and basic.

You can only select a cluster type when creating a cluster. Once the cluster is created, the cluster type cannot be changed.

Fault-tolerant:
  • Number of master nodes: 3
  • Master node allocation: in different segments of the pool if the pool has several segments, or on different hosts within one segment if the pool has only one segment
  • Fault tolerance: if one of the three master nodes becomes unavailable, the control plane continues to run
  • SLA: 99.98%
  • Functionality: all functionality is available
  • Suitable for: production environments

Basic:
  • Number of master nodes: 1
  • Master node allocation: in one segment of the pool
  • Fault tolerance: if the master node is unavailable, the control plane does not work
  • Functionality: auto-update of patch versions is not available
  • Suitable for: development environments, test environments (testing and staging), and pet projects
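The difference in fault tolerance between the two cluster types follows from the usual quorum rule for a replicated control-plane datastore such as etcd: a strict majority of master nodes must be reachable. A minimal sketch of that arithmetic (the function names are illustrative, not part of the product API):

```python
# Quorum arithmetic behind the fault-tolerance comparison above.
# A replicated control-plane store (e.g. etcd) needs a strict majority
# of voting members to keep serving writes.

def quorum(masters: int) -> int:
    """Smallest strict majority of `masters` voting members."""
    return masters // 2 + 1

def tolerated_failures(masters: int) -> int:
    """How many master nodes can fail while a majority remains."""
    return masters - quorum(masters)

# Fault-tolerant cluster: 3 masters -> quorum of 2, survives 1 failure.
print(tolerated_failures(3))  # 1
# Basic cluster: 1 master -> quorum of 1, survives no failures.
print(tolerated_failures(1))  # 0
```

This is also why adding a second master alone would not help: with 2 masters the quorum is still 2, so the cluster would tolerate zero failures, just like the basic type.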

Limits

  • Maximum number of fault-tolerant Kubernetes clusters in one pool per project: 10
  • Maximum number of basic Kubernetes clusters in one pool per project: 10
  • Maximum number of node groups in one pool per project: 100
  • Maximum number of nodes in one node group: 15
  • Maximum number of vCPUs per node: 32*
  • Maximum RAM per node: 256 GB*
  • Maximum node boot disk size: 1.2 TB
  • Maximum number of pods per node: 100
  • Maximum number of persistent volumes (PV) per node: 256
  • Minimum size of one persistent volume (PV): 1 GB

* You can create nodes with more vCPUs and RAM — use the fixed cloud server configurations.
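The per-node and per-group limits compose into an upper bound on cluster capacity. A back-of-the-envelope sketch, using only the numbers from the Limits table (the constant and function names are illustrative, not part of the product API):

```python
# Capacity bounds derived from the documented limits.
# The numbers come from the Limits table; the names are illustrative.
MAX_NODES_PER_GROUP = 15
MAX_GROUPS_PER_POOL_PROJECT = 100
MAX_PODS_PER_NODE = 100

def max_pods(node_groups: int, nodes_per_group: int) -> int:
    """Upper bound on pods for a given cluster layout."""
    assert nodes_per_group <= MAX_NODES_PER_GROUP
    assert node_groups <= MAX_GROUPS_PER_POOL_PROJECT
    return node_groups * nodes_per_group * MAX_PODS_PER_NODE

# One full node group: 15 nodes x 100 pods = 1500 pods at most.
print(max_pods(1, 15))  # 1500
```

Note that the pods-per-node limit is an upper bound set by the platform; the practical ceiling also depends on each node's vCPU and RAM configuration.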

Cluster limitations on dedicated servers

Managed Kubernetes clusters on dedicated servers are in beta testing.

Not supported during beta testing:

  • use of arbitrary configurations of dedicated servers;
  • adding existing dedicated servers to the cluster;
  • adding multiple node groups when creating a cluster;
  • updating minor versions of Kubernetes;
  • automation: auto-update of patch versions, auto-scaling, and auto-recovery;
  • persistent volume (PV) connections based on the cloud platform's network disks;
  • use of user data;
  • use of dedicated server configurations with GPUs;
  • use of Terraform.

Areas of responsibility

Servercore provides

  • creation and availability of master nodes;
  • creation of worker nodes;
  • updating versions of the Managed Kubernetes cluster;
  • master node monitoring;
  • node auto-scaling capability;
  • node auto-recovery capability;
  • integration with Servercore services;
  • technical support.

Servercore is not responsible for

  • managing the Managed Kubernetes cluster;
  • node management;
  • application creation;
  • initiating scaling and upgrades.