Connect a network drive to the server

A network disk is scalable external network block storage with triple data replication. Triple replication of disk volumes ensures high data integrity. Network disks are suitable for rapidly scaling a server's disk space.

Network disks are available for connection to dedicated servers in the MSK-1 pool. You can connect network disks to dedicated servers of a ready-made configuration marked with the tag "You can connect network drives", as well as to dedicated servers of a custom configuration with an additional 2 × 10 GE network card and a 10 Gbps connection to the network disk SAN.

If you do not have a network disk, create one and create a SAN for the availability zone.

  1. Connect the network drive to the server in the control panel.
  2. Connect the network drive to the server in the server OS.
  3. Check the MPIO settings.

1. Connect the network drive to the server in the control panel

  1. In the control panel, from the top menu, click Products and select Dedicated servers.
  2. Open the server page → the Network disks tab.
  3. Click Connect a network drive.
  4. Select a network drive.
  5. Confirm the connection.

2. Connect the network disk to the server in the server OS

You can connect a network disk to the server manually or with a ready-made script generated in the control panel. The script can only be used on Ubuntu.

  1. Connect to the server over SSH or through the KVM console.

  2. Open the netplan configuration file in the vi text editor:

    vi /etc/netplan/50-cloud-init.yaml
  3. On the network interfaces connected to the SAN switch, add IP addresses and routes to reach the iSCSI targets (a filled-in example follows the parameter list below):

    <eth_name_1>:
      addresses:
        - <ip_address_1>
      routes:
        - to: <destination_subnet_1>
          via: <next_hop_1>
    <eth_name_2>:
      addresses:
        - <ip_address_2>
      routes:
        - to: <destination_subnet_2>
          via: <next_hop_2>

    Specify:

    • <eth_name_1> — name of the first network interface; it is configured on the first port of the network card;
    • <eth_name_2> — name of the second network interface; it is configured on the second port of the network card;
    • <ip_address_1> and <ip_address_2> — IP addresses of the first and second ports of the network card. You can find them in the control panel: from the top menu, click Products → Dedicated servers → the Network disks and storage section → the Network disks tab → the disk page → the Configuring network interfaces block → the Port IP address column;
    • <destination_subnet_1> and <destination_subnet_2> — destination subnets for the first and second ports of the network card, in the Destination subnetwork column of the same block;
    • <next_hop_1> and <next_hop_2> — gateways for the first and second ports of the network card, in the Next hop (gateway) column of the same block.
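
    For illustration, a filled-in fragment (it sits under the ethernets: section of 50-cloud-init.yaml) might look like the sketch below. All interface names, addresses, subnets, and gateways here are hypothetical; take the real values from the control panel:

    eno2:                        # hypothetical name of the first port
      addresses:
        - 10.94.1.2/29           # example Port IP address
      routes:
        - to: 10.100.1.0/24      # example Destination subnetwork
          via: 10.94.1.1         # example Next hop (gateway)
    eno3:                        # hypothetical name of the second port
      addresses:
        - 10.94.2.2/29
      routes:
        - to: 10.100.2.0/24
          via: 10.94.2.1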
  4. Save the changes and exit the vi text editor:

    :wq
  5. Apply the configuration:

    netplan apply
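
    Optionally, if you are connected over SSH, you can test the changes with netplan's try mode first; it applies the configuration and rolls it back automatically unless you confirm it within a timeout, which protects against losing access. This is a suggestion, not part of the original procedure:

    netplan try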
  6. Display information about the network interfaces and verify that they are configured correctly:

    ip a
  7. Optional: reboot the server.

  8. Check the speed of each network interface. It must be at least 10 Gbit/s:

    ethtool <eth_name_1> | grep -i speed
    ethtool <eth_name_2> | grep -i speed

    Specify <eth_name_1> and <eth_name_2> — the names of the network interfaces you configured in step 3.
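
    On a correctly configured 10 GE port, the output typically looks like this:

    Speed: 10000Mb/s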

  9. If the speed is below 10 Gbps, file a ticket.

  10. Verify that the iSCSI targets are available:

    ping -c5 <iscsi_target_ip_address_1>
    ping -c5 <iscsi_target_ip_address_2>

    Specify:

    • <iscsi_target_ip_address_1> — IP address of the first iSCSI target. You can find it in the control panel: from the top menu, click Products → Dedicated servers → the Network disks and storage section → the Network disks tab → the disk page → the Configuring the iSCSI connection block → the IP address of iSCSI target 1 field;
    • <iscsi_target_ip_address_2> — IP address of the second iSCSI target, in the IP address of iSCSI target 2 field of the same block.
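
    A successful check ends with a summary similar to the following (the timing is illustrative):

    5 packets transmitted, 5 received, 0% packet loss, time 4004ms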
  11. Set the name of the iSCSI initiator in the file /etc/iscsi/initiatorname.iscsi. Open it in the vi text editor and edit the InitiatorName line:

    vi /etc/iscsi/initiatorname.iscsi
    InitiatorName=<initiator_name>

    Specify <initiator_name> — the name of the iSCSI initiator. You can find it in the control panel: from the top menu, click Products → Dedicated servers → the Network disks and storage section → the Network disks tab → the disk page → the Configuring the iSCSI connection block → the Initiator name field.
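
    After editing, the file should contain a single line with the initiator IQN. A made-up example of the format (use the value from the control panel, not this one):

    InitiatorName=iqn.1993-08.org.debian:01:d4a0758f44aa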

  12. Restart iSCSI:

    systemctl restart iscsid.service
    systemctl restart multipathd.service
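
    To make sure both services restarted cleanly, you can check their state; both should report active:

    systemctl is-active iscsid.service multipathd.service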
  13. Create iSCSI interfaces:

    iscsiadm -m iface -I <iscsi_eth_name_1> --op new
    iscsiadm -m iface -I <iscsi_eth_name_2> --op new

    Specify:

    • <iscsi_eth_name_1> — name of the first iSCSI interface;
    • <iscsi_eth_name_2> — name of the second iSCSI interface.
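
    For example, with the hypothetical names iface0 and iface1, followed by a listing that confirms the interfaces were created:

    iscsiadm -m iface -I iface0 --op new
    iscsiadm -m iface -I iface1 --op new
    # list all iSCSI interfaces to confirm
    iscsiadm -m iface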
  14. Bind the iSCSI interfaces to the network interfaces you configured in step 3:

    iscsiadm -m iface --interface <iscsi_eth_name_1> --op update -n iface.net_ifacename -v <eth_name_1>
    iscsiadm -m iface --interface <iscsi_eth_name_2> --op update -n iface.net_ifacename -v <eth_name_2>

    Specify:

    • <iscsi_eth_name_1> — name of the first iSCSI interface you created in step 13;
    • <iscsi_eth_name_2> — name of the second iSCSI interface you created in step 13;
    • <eth_name_1> — name of the first network interface you configured in step 3;
    • <eth_name_2> — name of the second network interface you configured in step 3.
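
    You can verify each binding by printing the interface record and checking the iface.net_ifacename field (iface0 and eno2 are the hypothetical names from the examples above):

    iscsiadm -m iface -I iface0 | grep net_ifacename
    # expected: iface.net_ifacename = eno2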
  15. Check the availability of the iSCSI target through the iSCSI interfaces:

    iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_1> --interface <iscsi_eth_name_1>
    iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_2> --interface <iscsi_eth_name_2>

    Specify:

    • <iscsi_target_ip_address_1> — IP address of the first iSCSI target;
    • <iscsi_target_ip_address_2> — IP address of the second iSCSI target;
    • <iscsi_eth_name_1> — the name of the first iSCSI interface you created in step 13;
    • <iscsi_eth_name_2> — name of the second iSCSI interface that you created in step 13.

    A list of iSCSI targets will appear in the response. For example:

    10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target
    10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target

    Here:

    • 10.100.1.2:3260 — IP address of the first iSCSI target;
    • iqn.2003-01.com.redhat.iscsi-gw:workshop-target — IQN of the first iSCSI target. The IQN (iSCSI Qualified Name) is the full unique identifier of the iSCSI device;
    • 10.100.1.6:3260 — IP address of the second iSCSI target;
    • iqn.2003-01.com.redhat.iscsi-gw:workshop-target — IQN of the second iSCSI target.
  16. Configure CHAP authentication on the iSCSI initiator (a verification example follows the parameter list below):

    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.authmethod --value CHAP
    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.authmethod --value CHAP
    iscsiadm --mode node -T <IQN> --op update -n node.session.auth.username --value <username>
    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.password --value <password>
    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.password --value <password>

    Specify:

    • <iscsi_target_ip_address_1> — IP address of the first iSCSI target;
    • <iscsi_target_ip_address_2> — IP address of the second iSCSI target;
    • <IQN> — IQN of the first and second iSCSI targets. You can find it in the control panel: from the top menu, click Products → Dedicated servers → the Network disks and storage section → the Network disks tab → the disk page → the Configuring the iSCSI connection block → the Target name field;
    • <username> — user name for authorizing the iSCSI initiator, in the Username field of the same block;
    • <password> — password for authorizing the iSCSI initiator, in the Password field of the same block.
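
    As an extra check, not part of the original procedure, you can print the node record and filter the authentication fields to confirm the CHAP settings were saved:

    iscsiadm -m node -T <IQN> -p <iscsi_target_ip_address_1> | grep -i auth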
  17. Log in to the iSCSI targets through the iSCSI interfaces:

    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --login --interface <iscsi_eth_name_1>
    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --login --interface <iscsi_eth_name_2>

    Specify:

    • <IQN> — IQNs of the first and second iSCSI target;
    • <iscsi_target_ip_address_1> — IP address of the first iSCSI target;
    • <iscsi_target_ip_address_2> — IP address of the second iSCSI target;
    • <iscsi_eth_name_1> — name of the first iSCSI interface;
    • <iscsi_eth_name_2> — name of the second iSCSI interface.
  18. Verify that the iSCSI session for each iSCSI target has started:

    iscsiadm -m session

    Two active iSCSI sessions will appear in the response. For example:

    tcp: [1] 10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)
    tcp: [3] 10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)

    Here, [1] and [3] are the iSCSI session numbers.

  19. Enable automatic disk mount when the server restarts by setting the parameter node.startup to automatic:

    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --op update -n node.startup -v automatic
    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --op update -n node.startup -v automatic
    systemctl enable iscsid.service
    systemctl restart iscsid.service

    Specify:

    • <IQN> — IQNs of the first and second iSCSI target;
    • <iscsi_target_ip_address_1> — IP address of the first iSCSI target;
    • <iscsi_target_ip_address_2> — IP address of the second iSCSI target.
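
    You can confirm that the setting took effect by filtering the node record:

    iscsiadm -m node -T <IQN> -p <iscsi_target_ip_address_1> | grep node.startup
    # expected: node.startup = automatic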
  20. Optional: reboot the server.

3. Check the MPIO settings

Multipath I/O (MPIO) is a multi-path input/output mechanism that improves the fault tolerance of data transfer to a network disk.

In Ubuntu, MPIO is configured by default; verify the settings.

  1. Open the Device Mapper Multipath configuration file in the vi text editor:

    vi /etc/multipath.conf
  2. Make sure that the file /etc/multipath.conf contains only the following lines:

    defaults {
    user_friendly_names yes
    }
  3. Make sure that the bindings file contains the WWID of the block device:

    cat /etc/multipath/bindings

    The command output will display information about the WWID of the block device. For example:

    # Format:
    # alias wwid
    #
    mpatha 3600140530fab7e779fa41038a0a08f8e
  4. Make sure that the wwids file contains the WWID of the block device:

    cat /etc/multipath/wwids

    The command output will display information about the WWID of the block device. For example:

    # Valid WWIDs:
    /3600140530fab7e779fa41038a0a08f8e/
  5. Check the network disk connection and make sure that the policy parameter is set to service-time 0:

    multipath -ll

    The command output will display information about devices, paths, and current policy. For example:

    mpatha (3600140530fab7e779fa41038a0a08f8e) dm-0 LIO-ORG,TCMU device
    size=20G features='0' hwhandler='1 alua' wp=rw
    |-+- policy='service-time 0' prio=10 status=active
    | `- 8:0:0:0 sdc 8:32 active ready running
    `-+- policy='service-time 0' prio=10 status=enabled
    `- 9:0:0:0 sdd 8:48 active ready running
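
    Once MPIO is verified, the network disk is available as a single multipath block device, /dev/mapper/mpatha in the example above (the alias comes from the bindings file). You can list it before partitioning or formatting:

    lsblk /dev/mapper/mpatha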