Connect a network disk to a dedicated Linux or Windows server

Network disks are available for connection to dedicated servers in the MSK-1 pool. You can connect a network disk to dedicated servers of a ready-made configuration marked with a tag, as well as to dedicated servers of an arbitrary configuration with an additional 2 × 10 GE NIC + 10 Gbps Network Disk SAN connection.

You can connect the network disk to one or more servers.

  1. Create a SAN.
  2. Connect the network disk to the server in the control panel.
  3. Connect the network disk to the server in the server OS.
  4. Configure MPIO.
  5. Optional: connect the network disk to another server.
  6. Prepare the network disk for operation.

1. Create a SAN

  1. In the Control Panel, on the top menu, click Products and select Dedicated Servers.
  2. Go to the Network Disks and Storage section → the Network Disks tab.
  3. Open the disk page → Server Connection tab.
  4. Click Create SAN.
  5. Click Add SAN.
  6. Select an availability zone.
  7. Enter a subnet or leave the subnet that is generated by default. The subnet must belong to the private address range 10.0.0.0/8, 172.16.0.0/12 or 192.168.0.0/16 and must not already be in use in your infrastructure.
  8. Click Create SAN.

2. Connect the network disk to the server in the control panel

  1. In the Control Panel, on the top menu, click Products and select Dedicated Servers.
  2. Go to the Network Disks and Storage section → the Network Disks tab.
  3. Open the disk page → Server Connection tab.
  4. In the Server field, click Select.
  5. Select the server to which the network disk will be connected.

3. Connect the network disk to the server in the server OS

You can connect a network disk to the server manually or using a ready-made script generated in the control panel. The script can be used only on Ubuntu.

  1. Connect to the server via SSH or via KVM console.

  2. Open the netplan utility configuration file with the vi text editor:

    vi /etc/netplan/50-cloud-init.yaml
  3. For the network interfaces connected to the SAN switch, add IP addresses and routes to provide access to the iSCSI targets:

    <eth_name_1>:
      addresses:
        - <ip_address_1>
      routes:
        - to: <destination_subnet_1>
          via: <next_hop_1>
    <eth_name_2>:
      addresses:
        - <ip_address_2>
      routes:
        - to: <destination_subnet_2>
          via: <next_hop_2>

    Specify:

    • <eth_name_1> — name of the first network interface; it is configured on the first port of the network card;
    • <eth_name_2> — name of the second network interface; it is configured on the second port of the network card;
    • <ip_address_1> — IP address of the first port of the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → the Network Disks and Storage section → the Network Disks tab → the disk page → the Configuring network interfaces block → the Port IP address column;
    • <ip_address_2> — IP address of the second port of the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → the Network Disks and Storage section → the Network Disks tab → the disk page → the Configuring network interfaces block → the Port IP address column;
    • <destination_subnet_1> — destination subnet for the first port of the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → the Network Disks and Storage section → the Network Disks tab → the disk page → the Configuring network interfaces block → the Destination Subnet column;
    • <destination_subnet_2> — destination subnet for the second port of the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → the Network Disks and Storage section → the Network Disks tab → the disk page → the Configuring network interfaces block → the Destination Subnet column;
    • <next_hop_1> — gateway for the first port of the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → the Network Disks and Storage section → the Network Disks tab → the disk page → the Configuring network interfaces block → the Next hop (gateway) column;
    • <next_hop_2> — gateway for the second port of the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → the Network Disks and Storage section → the Network Disks tab → the disk page → the Configuring network interfaces block → the Next hop (gateway) column.
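
    For illustration, here is how the fragment above might look once filled in, followed by a quick route check. The interface names, addresses, subnets, and gateways below are hypothetical examples, not values from your control panel:

    # Hypothetical example of a filled-in fragment (replace with your own values):
    #
    #     enp1s0f0:
    #       addresses:
    #         - 10.100.1.11/24
    #       routes:
    #         - to: 10.100.1.0/24
    #           via: 10.100.1.1
    #     enp1s0f1:
    #       addresses:
    #         - 10.100.2.11/24
    #       routes:
    #         - to: 10.100.2.0/24
    #           via: 10.100.2.1
    #
    # After applying the configuration in step 5, confirm that the routes exist:
    ip route show | grep -E '10\.100\.1\.0|10\.100\.2\.0'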
  4. Exit the vi text editor with your changes saved:

    :wq
  5. Apply the configuration:

    netplan apply
  6. Print the information about the network interfaces and verify that they are configured correctly:

    ip a
  7. Optional: reboot the server.

  8. Verify that the speed of each interface is at least 10 Gbit/s:

    ethtool <eth_name_1> | grep -i speed
    ethtool <eth_name_2> | grep -i speed

    Specify <eth_name_1> and <eth_name_2> as the names of the network interfaces configured in step 3.

  9. If the speed is below 10 Gbit/s, create a ticket.

  10. Verify that the iSCSI target is available:

    ping -c5 <iscsi_target_ip_address_1>
    ping -c5 <iscsi_target_ip_address_2>

    Specify:

    • <iscsi_target_ip_address_1> — IP address of the first iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → the Network Disks and Storage section → the Network Disks tab → the disk page → the Configuring an iSCSI connection block → the IP address of the iSCSI target 1 field;
    • <iscsi_target_ip_address_2> — IP address of the second iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → the Network Disks and Storage section → the Network Disks tab → the disk page → the Configuring an iSCSI connection block → the IP address of the iSCSI target 2 field.
  11. Enter the name of the iSCSI initiator:

    vi /etc/iscsi/initiatorname.iscsi
    InitiatorName=<initiator_name>

    Specify <initiator_name> — the iSCSI initiator name. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → the Network Disks and Storage section → the Network Disks tab → the disk page → the Configuring an iSCSI connection block → the Initiator name field.
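
    To double-check the change, you can print the file; this uses only the file path from the command above:

    cat /etc/iscsi/initiatorname.iscsi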

  12. Restart iSCSI:

    systemctl restart iscsid.service
    systemctl restart multipathd.service
  13. Create iSCSI interfaces:

    iscsiadm -m iface -I <iscsi_eth_name_1> --op new
    iscsiadm -m iface -I <iscsi_eth_name_2> --op new

    Specify:

    • <iscsi_eth_name_1> — name of the first iSCSI interface;
    • <iscsi_eth_name_2> — name of the second iSCSI interface.
  14. Bind the iSCSI interfaces to the network interfaces you configured in step 3:

    iscsiadm -m iface --interface <iscsi_eth_name_1> --op update -n iface.net_ifacename -v <eth_name_1>
    iscsiadm -m iface --interface <iscsi_eth_name_2> --op update -n iface.net_ifacename -v <eth_name_2>

    Specify:

    • <iscsi_eth_name_1> — name of the first iSCSI interface you created in step 13;
    • <iscsi_eth_name_2> — name of the second iSCSI interface you created in step 13;
    • <eth_name_1> — the name of the first network interface you configured in step 3;
    • <eth_name_2> — the name of the second network interface you configured in step 3.
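
    For illustration, with hypothetical interface names iface0 and iface1 bound to the example network interfaces enp1s0f0 and enp1s0f1, steps 13 and 14 might look like this; the last command lists the result:

    # Hypothetical names; substitute your own iSCSI and network interface names
    iscsiadm -m iface -I iface0 --op new
    iscsiadm -m iface -I iface1 --op new
    iscsiadm -m iface --interface iface0 --op update -n iface.net_ifacename -v enp1s0f0
    iscsiadm -m iface --interface iface1 --op update -n iface.net_ifacename -v enp1s0f1
    # List the iSCSI interfaces and their bindings
    iscsiadm -m iface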
  15. Check the availability of the iSCSI target through the iSCSI interfaces:

    iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_1> --interface <iscsi_eth_name_1>
    iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_2> --interface <iscsi_eth_name_2>

    Specify:

    • <iscsi_target_ip_address_1> — IP address of the first iSCSI target;
    • <iscsi_target_ip_address_2> — IP address of the second iSCSI target;
    • <iscsi_eth_name_1> — name of the first iSCSI interface you created in step 13;
    • <iscsi_eth_name_2> — name of the second iSCSI interface you created in step 13.

    A list of iSCSI targets will appear in the response. For example:

    10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target
    10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target

    Here:

    • 10.100.1.2:3260 — IP address of the first iSCSI target;
    • iqn.2003-01.com.redhat.iscsi-gw:workshop-target — IQN of the first iSCSI target. The IQN (iSCSI Qualified Name) is the full unique identifier of the iSCSI device;
    • 10.100.1.6:3260 — IP address of the second iSCSI target;
    • iqn.2003-01.com.redhat.iscsi-gw:workshop-target — IQN of the second iSCSI target.
  16. Configure CHAP authentication on the iSCSI initiator:

    iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.authmethod --value CHAP
    iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.authmethod --value CHAP
    iscsiadm --mode node -T <iqn> --op update -n node.session.auth.username --value <username>
    iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.password --value <password>
    iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.password --value <password>

    Specify:

    • <iscsi_target_ip_address_1> — IP address of the first iSCSI target;
    • <iscsi_target_ip_address_2> — IP address of the second iSCSI target;
    • <iqn> — IQNs of the first and second iSCSI targets. You can view them in the control panel: in the top menu, click Products → Dedicated Servers → the Network Disks and Storage section → the Network Disks tab → the disk page → the Configuring an iSCSI connection block → the Target name field;
    • <username> — username for authorization of the iSCSI initiator. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → the Network Disks and Storage section → the Network Disks tab → the disk page → the Configuring an iSCSI connection block → the Username field;
    • <password> — password for authorization of the iSCSI initiator. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → the Network Disks and Storage section → the Network Disks tab → the disk page → the Configuring an iSCSI connection block → the Password field.
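
    To check that the CHAP parameters were written to the node records, you can print one of them; the placeholders are the same as above:

    # Shows the node.session.auth.* values for one target portal
    iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_1> | grep node.session.auth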
  17. Log in to the iSCSI target through the iSCSI interfaces:

    iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_1> --login --interface <iscsi_eth_name_1>
    iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_2> --login --interface <iscsi_eth_name_2>

    Specify:

    • <iqn> — IQNs of the first and second iSCSI target;
    • <iscsi_target_ip_address_1> — IP address of the first iSCSI target;
    • <iscsi_target_ip_address_2> — IP address of the second iSCSI target;
    • <iscsi_eth_name_1> — name of the first iSCSI interface;
    • <iscsi_eth_name_2> — name of the second iSCSI interface.
  18. Verify that the iSCSI session for each iSCSI target has started:

    iscsiadm -m session

    Two active iSCSI sessions will appear in the response. For example:

    tcp: [1] 10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)
    tcp: [3] 10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)

    Here [1] and [3] are the iSCSI session numbers.
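
    Once both sessions are established, the network disk is visible in the OS as two block devices, one per path; MPIO later merges them into a single device. A quick way to see them, using only commands already shown in this guide plus lsblk:

    # Devices attached to each iSCSI session
    iscsiadm -m session -P 3 | grep -i "attached scsi"
    # The new disks appear in the device list, for example as sdc and sdd
    lsblk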

  19. Enable automatic disk mount when the server restarts by setting the node.startup parameter to automatic:

    iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_1> --op update -n node.startup -v automatic
    iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_2> --op update -n node.startup -v automatic
    systemctl enable iscsid.service
    systemctl restart iscsid.service

    Specify:

    • <iqn> — IQNs of the first and second iSCSI target;
    • <iscsi_target_ip_address_1> — IP address of the first iSCSI target;
    • <iscsi_target_ip_address_2> — IP address of the second iSCSI target.
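
    To confirm the setting, you can print the node records and check the node.startup value; the placeholders are the same as above:

    iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_1> | grep node.startup
    iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_2> | grep node.startup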
  20. Optional: reboot the server.

4. Configure MPIO

Multipath I/O (MPIO) provides multiple I/O paths to improve the fault tolerance of data transfer to the network disk.

In Ubuntu, MPIO is configured by default; check the settings.

  1. Open the configuration file of the Device Mapper Multipath utility with the vi text editor:

    vi /etc/multipath.conf
  2. Make sure that the /etc/multipath.conf file contains only the following lines:

    defaults {
        user_friendly_names yes
    }
  3. Make sure the bindings file has information about the WWID of the block device:

    cat /etc/multipath/bindings

    The command output will display information about the WWID of the block device. For example:

    # Format:
    # alias wwid
    #
    mpatha 3600140530fab7e779fa41038a0a08f8e
  4. Make sure that the wwids file has information about the WWID of the block device:

    cat /etc/multipath/wwids

    The command output will display information about the WWID of the block device. For example:

    # Valid WWIDs:
    /3600140530fab7e779fa41038a0a08f8e/
  5. Check the network disk connection and make sure that the policy parameter is set to service-time 0:

    multipath -ll

    The command output will display information about devices, paths, and current policy. For example:

    mpatha (3600140530fab7e779fa41038a0a08f8e) dm-0 LIO-ORG,TCMU device
    size=20G features='0' hwhandler='1 alua' wp=rw
    |-+- policy='service-time 0' prio=10 status=active
    | `- 8:0:0:0 sdc 8:32 active ready running
    `-+- policy='service-time 0' prio=10 status=enabled
    `- 9:0:0:0 sdd 8:48 active ready running
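
    The two paths are merged into a single multipath device that is available under /dev/mapper. A minimal check, assuming the alias mpatha from the example output above:

    # The alias is a symlink to the device-mapper device, for example ../dm-0
    ls -l /dev/mapper/mpatha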

5. Optional: Connect the network disk to another server

  1. Connect the network disk to the server in the control panel.
  2. Connect the network disk to the server in the server OS.
  3. Configure MPIO.

6. Prepare the network disk for operation

After connecting the network disk to the server, you can format it with the desired file system:

  • A Cluster File System (CFS) is a file system that allows multiple servers (nodes) to simultaneously work with the same data on shared storage. Examples of cluster file systems:

    • GFS2 (Global File System 2), more details in the GFS2 Overview article of the official Red Hat documentation;
    • OCFS2 (Oracle Cluster File System 2), more details in the official Oracle Linux documentation.
  • Logical Volume Manager (LVM) is storage virtualization software designed for flexible management of physical storage devices. For more information, see Configuring and managing logical volumes in the official Red Hat documentation;

  • a standard file system, such as ext4 or XFS. Note that in read-write mode such a file system can be used on only one server at a time to avoid data corruption; for shared access from multiple servers, use a clustered file system;

  • VMFS (VMware File System) is a clustered file system used by VMware ESXi to store virtual machine files. It supports storage sharing among multiple ESXi hosts. VMFS automatically manages locks — preventing virtual machine files from being modified at the same time to ensure data integrity. Learn more in the VMware vSphere VMFS instructions in the official VMware Storage documentation.
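
For example, if the disk is connected to only one server, it can be formatted with a standard file system. A minimal sketch, assuming the multipath device alias mpatha from the MPIO section and a hypothetical mount point /mnt/netdisk; do not use this approach if the disk is shared between several servers:

    # Create an ext4 file system on the multipath device (single-server use only)
    mkfs.ext4 /dev/mapper/mpatha
    # Create a hypothetical mount point and mount the disk
    mkdir -p /mnt/netdisk
    mount /dev/mapper/mpatha /mnt/netdisk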