Connect a network drive to a dedicated Windows server

Network disks are available for connection to dedicated servers in the MSK-1 pool. You can connect a network disk to dedicated servers of the following configurations:

  • a ready configuration with the tag You can connect network disks;
  • a custom configuration with the optional 2 × 10 GE NIC + 10 Gbps Network Disk SAN connection.

You can view information about server ports in the control panel: from the top menu, click Products → Dedicated Servers → Servers → server → server page → Ports tab.

To connect a network drive to the server:

  1. Create a SAN.
  2. Connect the network drive to the server.
  3. Connect the network disk to the server in the server OS.
  4. Configure MPIO.
  5. Optional: connect the network drive to another server.
  6. Prepare the network drive for operation.

1. Create a SAN

  1. In the Control Panel, on the top menu, click Products and select Dedicated Servers.
  2. Go to Network Disks and Storage → the Network Disks tab.
  3. Open the SAN tab.
  4. Click Add SAN.
  5. Select an availability zone.
  6. Enter a subnet or leave the subnet that is generated by default. The subnet must belong to the private address range 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16 and must not already be in use in your infrastructure.
  7. Click Create SAN.

2. Connect the network drive to the server

  1. In the Control Panel, on the top menu, click Products and select Dedicated Servers.

  2. Go to Network Disks and Storage → the Network Disks tab.

  3. Open the disk page → Server Connection tab.

  4. In the Server field, click Select.

  5. Select the server to which the network drive will be connected.

  6. Click Connect.

  7. If you are connecting the network drive to a server with a private network, configure the network:

    7.1 Select a VLAN.

    7.2 Enter the CIDR. The subnet must belong to the private address range 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16 and must not already be in use in your infrastructure.

    7.3 Enter the Next hop 1 and Next hop 2 addresses from the selected private subnet.

    7.4 Click Customize.

3. Connect the network disk to the server in the server OS

You can connect a network disk to the server manually or by using a ready-made script that is generated in the control panel. The script can be used only on Ubuntu — for details, see the instructions Connect a network disk to a dedicated server with Linux OS.

If your server is running Hyper-V, the network disk will not work. This is because a disk connected over iSCSI does not support the SCSI-3 Persistent Reservations required for Hyper-V to run in Failover Cluster mode.

The process of connecting a network disk in the server OS through a private subnet depends on the number of ports:

  • if the server has two local ports, use the instructions for two ports;
  • if the server has only one local port or MC-LAG is configured, use the instructions for a single port.
  1. Connect to the server via SSH or via KVM console.

  2. Run PowerShell as an administrator.

  3. Print the list of network interfaces:

    Get-NetIPInterface
  4. On the network interfaces connected to the SAN switch, add IP addresses:

    New-NetIPAddress -InterfaceAlias "<eth_name_1>" -IPAddress <ip_address_1> -PrefixLength <mask_1> -DefaultGateway <next_hop_1>
    New-NetIPAddress -InterfaceAlias "<eth_name_2>" -IPAddress <ip_address_2> -PrefixLength <mask_2> -DefaultGateway <next_hop_2>

    Specify:

    • <eth_name_1> — the name of the first network interface you received in step 3;
    • <ip_address_1> — the IP address of the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Port IP address;
    • <mask_1> — the destination subnet mask for the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Static routes for connecting to iSCSI targets → column Destination Subnet;
    • <next_hop_1> — the gateway for the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Next hop (gateway);
    • <eth_name_2> — the name of the second network interface you received in step 3;
    • <ip_address_2> — the IP address of the second port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Port IP address;
    • <mask_2> — the destination subnet mask for the second port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Static routes for connecting to iSCSI targets → column Destination Subnet;
    • <next_hop_2> — the gateway for the second port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Next hop (gateway).
  5. Write static routes to gain access to iSCSI targets:

    route add <destination_subnet_1> mask <mask_1> <next_hop_1> -p
    route add <destination_subnet_2> mask <mask_2> <next_hop_2> -p

    Specify:

    • <destination_subnet_1> — the destination subnet for the first port on the network card. You can view it in the control panel: from the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Static routes for connecting to iSCSI targets → column Destination Subnet;
    • <mask_1> — the destination subnet mask for the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Static routes for connecting to iSCSI targets → column Destination Subnet;
    • <next_hop_1> — the gateway for the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Next hop (gateway);
    • <destination_subnet_2> — the destination subnet for the second port on the network card. You can view it in the control panel: from the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Static routes for connecting to iSCSI targets → column Destination Subnet;
    • <mask_2> — the destination subnet mask for the second port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Static routes for connecting to iSCSI targets → column Destination Subnet;
    • <next_hop_2> — the gateway for the second port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Next hop (gateway).
  6. Verify that the static routes defined in step 5 have been applied:

    route print -4
  7. Verify that the speed of each interface is at least 10 Gbps:

    Get-NetAdapter | Where-Object { $_.Name -eq "<eth_name_1>" } | Select-Object -Property Name,LinkSpeed
    Get-NetAdapter | Where-Object { $_.Name -eq "<eth_name_2>" } | Select-Object -Property Name,LinkSpeed

    Specify <eth_name_1> and <eth_name_2> as the names of the network interfaces configured in step 4.

  8. If the speed is below 10 Gbps, create a ticket.

  9. Verify that the iSCSI target is available:

    ping <iscsi_target_ip_address_1>
    ping <iscsi_target_ip_address_2>

    Specify:

    • <iscsi_target_ip_address_1> — the IP address of the first iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 1;
    • <iscsi_target_ip_address_2> — the IP address of the second iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 2.
  10. Print information about the Microsoft iSCSI Initiator Service:

    Get-Service MSiSCSI

    The response will display information about the status of the service. For example:

    Status   Name               DisplayName
    ------   ----               -----------
    Running  MSiSCSI            Microsoft iSCSI Initiator Service

    Here, the Status field displays the current status of the service.

  11. If the Microsoft iSCSI Initiator Service is in Stopped status, start it:

    Start-Service MSiSCSI
  12. Enable automatic startup of the Microsoft iSCSI Initiator Service:

    Set-Service -Name MSiSCSI -StartupType Automatic
  13. Set the name of the iSCSI initiator:

    iscsicli NodeName "<initiator_name>"

    Specify <initiator_name> — the iSCSI initiator name. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → Network Disks and Storage → Network Disks tab → disk page → iSCSI Connection Setup block → Initiator name field.

  14. Connect iSCSI target portals:

    New-IscsiTargetPortal -TargetPortalAddress <iscsi_target_ip_address_1> -TargetPortalPortNumber 3260 -InitiatorPortalAddress <ip_address_1>
    New-IscsiTargetPortal -TargetPortalAddress <iscsi_target_ip_address_2> -TargetPortalPortNumber 3260 -InitiatorPortalAddress <ip_address_2>

    Specify:

    • <iscsi_target_ip_address_1> — the IP address of the first iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 1;
    • <ip_address_1> — the IP address of the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Port IP address;
    • <iscsi_target_ip_address_2> — the IP address of the second iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 2;
    • <ip_address_2> — the IP address of the second port on the network card. You can view it in the control panel: from the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Port IP address.
  15. Connect to the iSCSI targets with CHAP authentication over the iSCSI interfaces:

    $iusr = "<username>"
    $ipasswd = "<password>"
    $sts = $(Get-IscsiTarget | Select-Object -ExpandProperty NodeAddress)

    foreach ($st in $sts) {
        # In this setup, the last ":"-separated segment of the target NodeAddress is the portal IP address
        $tpaddr = ($st -split ":")[-1]
        Connect-IscsiTarget -NodeAddress $st -TargetPortalAddress $tpaddr -TargetPortalPortNumber 3260 -IsPersistent $true -AuthenticationType ONEWAYCHAP -ChapUsername $iusr -ChapSecret $ipasswd
    }

    Specify:

    • <username> — the username for authorization of the iSCSI initiator. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Username;
    • <password> — the password for authorization of the iSCSI initiator. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Password.
  16. Print a list of iSCSI targets:

    Get-IscsiTarget

    A list of iSCSI targets will appear in the response. For example:

    IsConnected NodeAddress                                             PSComputerName
    ----------- -----------                                             --------------
    True        iqn.2001-07.com.ceph:user-target-99999:203.0.113.101
    True        iqn.2001-07.com.ceph:user-target-0398327:203.0.113.102
  17. Ensure that IsConnected is set to True for each iSCSI target.

  18. Check that the network drive appears in the list of available disks:

    Get-Disk | Select-Object Number, FriendlyName, SerialNumber, BusType, OperationalStatus

    A list of disks will appear in the response. For example:

    Number FriendlyName           SerialNumber       BusType OperationalStatus
    ------ ------------           ------------       ------- -----------------
    0      Samsung SSD 860 EVO    Z3AZNF0N123456     SATA    Online
    1      WDC WD2003FZEX-00Z4SA0 WD-1234567890      SATA    Online
    2      Virtual iSCSI Disk     0001-9A8B-CD0E1234 iSCSI   Online
    3      SanDisk Ultra USB      4C531001230506     USB     Online

    Here:

    • Number — the disk number; in this example, the network disk has number 2;
    • BusType — the disk bus type; the network disk has the BusType iSCSI;
    • OperationalStatus — the status of the disk, Offline or Online.
  19. If the status of the network drive is Offline, change it to Online:

    Set-Disk -Number <block_storage_number> -IsOffline $false

    Specify <block_storage_number>, the network disk number you obtained in step 18.

  20. Initialize the network drive:

    Initialize-Disk -Number <block_storage_number> -PartitionStyle GPT

    Specify <block_storage_number>, the network disk number you obtained in step 18.

  21. If you are connecting a network drive to the server for the first time, create and format a partition on the network drive:

    21.1 Create a partition on the network drive:

    New-Partition -DiskNumber <block_storage_number> -UseMaximumSize -AssignDriveLetter

    Specify <block_storage_number>, the network disk number you obtained in step 18.

    21.2 Format the network disk partition to the desired file system:

    • If you are connecting the network disk to only one server, format the network disk partition to the NTFS file system:

      Format-Volume -DriveLetter <volume_letter> -FileSystem NTFS -NewFileSystemLabel "<label>"

      Specify:

      • <volume_letter> — volume letter;
      • <label> — label of the file system (volume).
    • If you are connecting a single network drive to two or more servers, you must use the ReFS file system in conjunction with CSV (Cluster Shared Volumes) — see the Resilient File System (ReFS) overview article in the Microsoft documentation for more information.
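
      For this case, the command below is only a minimal sketch of formatting the partition created in step 21.1 to ReFS; the drive letter E and the label "data" are assumptions, and the CSV configuration itself is performed through the Failover Clustering feature, which is outside the scope of this instruction:

      # Assumed example: format the partition created in step 21.1 (here, volume letter E) to ReFS
      Format-Volume -DriveLetter E -FileSystem ReFS -NewFileSystemLabel "data"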

4. Configure MPIO

Multipath I/O (MPIO) provides multiple I/O paths to improve the fault tolerance of data transfer to the network disk.

  1. Disconnect the active iSCSI sessions:

    $sessions = Get-IscsiSession
    $sessions | ForEach-Object { Disconnect-IscsiTarget -NodeAddress $_.TargetNodeAddress -Confirm:$false }
  2. Install the MPIO components:

    Install-WindowsFeature Multipath-IO
  3. Enable MPIO:

    Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO
  4. Get a list of devices that support MPIO:

    mpclaim.exe -e

    The command output will display devices that support MPIO. For example:

    "Target H/W Identifier   "    Bus Type    MPIO-ed    ALUA Support
    -------------------------------------------------------------------------------
    "LIO-ORG TCMU device    "     iSCSI       NO         Implicit Only

    Here, LIO-ORG TCMU device is the network disk ID.

  5. Enable MPIO support for the network drive:

    mpclaim.exe -r -i -d "<block_storage_device>"

    Specify <block_storage_device>, the network disk ID you obtained in step 4. Note that the ID must be entered with the spaces included.

  6. Check the status of the MPIO:

    Get-MPIOAvailableHW

    The command output displays the MPIO status for the network drive. For example:

    VendorId ProductId    IsMultipathed IsSPC3Supported BusType
    -------- ---------    ------------- --------------- -------
    LIO-ORG  TCMU device  True          True            iSCSI

    Here, the IsMultipathed field displays the MPIO status.

  7. Ensure that the MPIO device path accessibility check mechanism is enabled:

    (Get-MPIOSetting).PathVerificationState

    The command output will display the status of the MPIO device path accessibility check mechanism. For example:

    Enabled
  8. If the MPIO device path accessibility check mechanism is in Disabled status, enable it:

    Set-MPIOSetting -NewPathVerificationState Enabled
  9. Associate the volumes on the network disk with logical partitions in the server OS:

    iscsicli.exe BindPersistentDevices
  10. Allow the server OS to access the contents of the network disk volumes:

    iscsicli.exe BindPersistentVolumes
  11. Make sure that the network drive is registered as a persistent device in the server OS configuration:

    iscsicli.exe ReportPersistentDevices

    The response will show information about the network drive as a persistent device. For example:

    Persistent Volumes
    "D:\"

    Here, D:\ is a volume on the network drive.
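
As an optional check after completing this section, the commands below are a sketch for reviewing the multipath setup: they list the disks claimed by MPIO and the iSCSI sessions serving them. They are not generated by the control panel and assume the steps above have been completed.

    # List the disks currently claimed by MPIO
    mpclaim.exe -s -d

    # Show the iSCSI sessions, the initiator address each one uses, and whether they are persistent
    Get-IscsiSession | Select-Object InitiatorPortalAddress, TargetNodeAddress, IsConnected, IsPersistent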

5. Optional: connect the network drive to another server

  1. Connect the network drive to the server in the control panel.
  2. Connect the network disk to the server in the server OS.
  3. Configure MPIO.

6. Prepare the network drive for operation

You can format the network disk that you connected to the server to the desired file system:

  • ReFS (Resilient File System) is a fault-tolerant file system designed to improve data availability, scale to large data sets across workloads, and provide data integrity with resistance to corruption. If you are connecting a single network drive to two or more servers, you must use the ReFS file system in conjunction with CSV (Cluster Shared Volumes) — see the Resilient File System (ReFS) overview article in the official Microsoft documentation for more information;
  • a standard file system such as NTFS (New Technology File System). Note that, to avoid data corruption, NTFS does not support simultaneous read-write access from multiple servers. To share a disk between multiple servers, use specialized file systems.
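
To confirm the result, the command below is a small sketch for checking the file system and free space of the network disk volume; the drive letter D is an assumption and may differ on your server:

    # Assumed example: show the file system, size, and free space of volume D
    Get-Volume -DriveLetter D | Select-Object DriveLetter, FileSystem, Size, SizeRemaining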