Connect a network drive to a dedicated server
You can connect a network drive to one or more servers. To connect a network drive to multiple servers, repeat the configuration for each server the drive is connected to.
- Create a SAN.
- Connect the network drive to the server.
- Connect the network drive to the server in the server OS.
- Check the MPIO settings.
- Optional: connect the network drive to another server.
- Prepare the network drive for operation.
1. Create a SAN network
- In the control panel, from the top menu, click Products and select Dedicated servers.
- Go to the Network disks and storage section → the Network disks tab.
- Open the disk page → the Connecting to the server tab.
- Click the Create a SAN link.
- Click Add SAN.
- Select an availability zone.
- Enter a subnet or keep the subnet generated by default. The subnet must belong to a private address range (10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16) and must not already be in use in your infrastructure.
- Click Create a SAN.
2. Connect the network drive to the server
- In the control panel, from the top menu, click Products and select Dedicated servers.
- Go to the Network disks and storage section → the Network disks tab.
- Open the disk page → the Connecting to the server tab.
- In the Server field, click Select.
- Select the server to which the network drive will be connected. Network drives are available for dedicated servers in the MSK-1 pool: ready-made configurations with the You can connect network drives tag, and custom configurations with an additional 2 × 10 GbE network card and a 10 Gbps connection to the network drive SAN.
3. Connect the network drive to the server in the server OS
You can connect the network drive to the server manually or with a ready-made script generated in the control panel. The script can only be used on Ubuntu.
Connect manually (Ubuntu)
1. Connect to the server via SSH or through the KVM console.
2. Open the netplan configuration file in the vi text editor:

    vi /etc/netplan/50-cloud-init.yaml
3. On the network interfaces connected to the SAN switch, add IP addresses and routes to provide access to the iSCSI targets (for a filled-in example, see the sketch after this procedure):

    <eth_name_1>:
      addresses:
        - <ip_address_1>
      routes:
        - to: <destination_subnet_1>
          via: <next_hop_1>
    <eth_name_2>:
      addresses:
        - <ip_address_2>
      routes:
        - to: <destination_subnet_2>
          via: <next_hop_2>

   Specify:
   - <eth_name_1> — name of the first network interface; it is configured on the first port of the network card;
   - <eth_name_2> — name of the second network interface; it is configured on the second port of the network card;
   - <ip_address_1>, <ip_address_2> — IP addresses of the first and second ports of the network card. You can find them in the control panel: from the top menu, click Products and select Dedicated servers, go to the Network disks and storage section → the Network disks tab, open the disk page and look in the Configuring network interfaces block, the Port IP address column;
   - <destination_subnet_1>, <destination_subnet_2> — destination subnets for the first and second ports of the network card: the same block, the Destination subnetwork column;
   - <next_hop_1>, <next_hop_2> — gateways for the first and second ports of the network card: the same block, the Next hop (gateway) column.
4. Exit the vi text editor and save the changes:

    :wq

5. Apply the configuration:

    netplan apply

6. Print the information about the network interfaces and verify that they are configured correctly:

    ip a

7. Optional: reboot the server.
8. Check the speed of each network interface. It must be at least 10 Gbps:

    ethtool <eth_name_1> | grep -i speed
    ethtool <eth_name_2> | grep -i speed

   Specify <eth_name_1> and <eth_name_2> — the names of the network interfaces you configured in step 3.
9. If the speed is below 10 Gbps, file a ticket.
10. Verify that the iSCSI targets are available:

    ping -c5 <iscsi_target_ip_address_1>
    ping -c5 <iscsi_target_ip_address_2>

    Specify:
    - <iscsi_target_ip_address_1> — IP address of the first iSCSI target. You can find it in the control panel on the disk page, in the Configuring the iSCSI connection block, the IP address of iSCSI target 1 field;
    - <iscsi_target_ip_address_2> — IP address of the second iSCSI target: the same block, the IP address of iSCSI target 2 field.
11. Enter the name of the iSCSI initiator:

    vi /etc/iscsi/initiatorname.iscsi

    InitiatorName=<initiator_name>

    Specify <initiator_name> — the name of the iSCSI initiator. You can find it in the control panel on the disk page, in the Configuring the iSCSI connection block, the Initiator name field.
12. Restart the iSCSI services:

    systemctl restart iscsid.service
    systemctl restart multipathd.service
13. Create the iSCSI interfaces:

    iscsiadm -m iface -I <iscsi_eth_name_1> --op new
    iscsiadm -m iface -I <iscsi_eth_name_2> --op new

    Specify:
    - <iscsi_eth_name_1> — name of the first iSCSI interface;
    - <iscsi_eth_name_2> — name of the second iSCSI interface.
14. Bind the iSCSI interfaces to the network interfaces you configured in step 3:

    iscsiadm -m iface --interface <iscsi_eth_name_1> --op update -n iface.net_ifacename -v <eth_name_1>
    iscsiadm -m iface --interface <iscsi_eth_name_2> --op update -n iface.net_ifacename -v <eth_name_2>

    Specify:
    - <iscsi_eth_name_1>, <iscsi_eth_name_2> — names of the iSCSI interfaces you created in step 13;
    - <eth_name_1>, <eth_name_2> — names of the network interfaces you configured in step 3.
15. Check the availability of the iSCSI targets through the iSCSI interfaces:

    iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_1> --interface <iscsi_eth_name_1>
    iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_2> --interface <iscsi_eth_name_2>

    Specify:
    - <iscsi_target_ip_address_1>, <iscsi_target_ip_address_2> — IP addresses of the first and second iSCSI targets;
    - <iscsi_eth_name_1>, <iscsi_eth_name_2> — names of the iSCSI interfaces you created in step 13.

    A list of iSCSI targets will appear in the response. For example:

    10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target
    10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target

    Here:
    - 10.100.1.2:3260 — IP address of the first iSCSI target;
    - iqn.2003-01.com.redhat.iscsi-gw:workshop-target — IQN of the first iSCSI target. The IQN (iSCSI Qualified Name) is the full unique identifier of an iSCSI device;
    - 10.100.1.6:3260 — IP address of the second iSCSI target;
    - iqn.2003-01.com.redhat.iscsi-gw:workshop-target — IQN of the second iSCSI target.
16. Configure CHAP authentication on the iSCSI initiator:

    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.authmethod --value CHAP
    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.authmethod --value CHAP
    iscsiadm --mode node -T <IQN> --op update -n node.session.auth.username --value <username>
    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.password --value <password>
    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.password --value <password>

    Specify:
    - <iscsi_target_ip_address_1>, <iscsi_target_ip_address_2> — IP addresses of the first and second iSCSI targets;
    - <IQN> — IQN of the first and second iSCSI targets. You can find it in the control panel on the disk page, in the Configuring the iSCSI connection block, the Target name field;
    - <username> — username for authorizing the iSCSI initiator: the same block, the Username field;
    - <password> — password for authorizing the iSCSI initiator: the same block, the Password field.
17. Log in to the iSCSI targets through the iSCSI interfaces:

    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --login --interface <iscsi_eth_name_1>
    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --login --interface <iscsi_eth_name_2>

    Specify:
    - <IQN> — IQN of the first and second iSCSI targets;
    - <iscsi_target_ip_address_1>, <iscsi_target_ip_address_2> — IP addresses of the first and second iSCSI targets;
    - <iscsi_eth_name_1>, <iscsi_eth_name_2> — names of the first and second iSCSI interfaces.
18. Verify that an iSCSI session has started for each iSCSI target (for a more detailed check, see the verification sketch after this procedure):

    iscsiadm -m session

    Two active iSCSI sessions will appear in the response. For example:

    tcp: [1] 10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)
    tcp: [3] 10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)

    Here [1] and [3] are the iSCSI session numbers.
19. Enable automatic disk mounting when the server restarts by setting the node.startup parameter to automatic:

    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --op update -n node.startup -v automatic
    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --op update -n node.startup -v automatic
    systemctl enable iscsid.service
    systemctl restart iscsid.service

    Specify:
    - <IQN> — IQN of the first and second iSCSI targets;
    - <iscsi_target_ip_address_1>, <iscsi_target_ip_address_2> — IP addresses of the first and second iSCSI targets.
20. Optional: reboot the server.
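For reference, below is a minimal sketch of what the netplan configuration from step 3 might look like once the placeholders are filled in. The interface names (eno2, eno3), IP addresses, subnets, and gateways are hypothetical examples only; use the values shown for your disk in the Configuring network interfaces block of the control panel. The 50-cloud-init.yaml file usually already contains a network/ethernets section for your existing interfaces; the SAN interfaces are added alongside them.

```yaml
# Fragment of /etc/netplan/50-cloud-init.yaml — hypothetical values for illustration only.
network:
  version: 2
  ethernets:
    eno2:                        # first port of the 2 × 10 GbE card
      addresses:
        - 10.100.1.10/24         # Port IP address (first port)
      routes:
        - to: 10.100.2.0/24      # Destination subnetwork (first port)
          via: 10.100.1.1        # Next hop (gateway, first port)
    eno3:                        # second port of the 2 × 10 GbE card
      addresses:
        - 10.100.3.10/24         # Port IP address (second port)
      routes:
        - to: 10.100.4.0/24      # Destination subnetwork (second port)
          via: 10.100.3.1        # Next hop (gateway, second port)
```

After saving the file, the configuration is applied with netplan apply, as in step 5.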
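If you want to double-check the iSCSI setup from steps 13–19, the sketch below collects a few standard iscsiadm checks. It uses the same placeholders as the steps above; output formats vary slightly between open-iscsi versions.

```bash
# Minimal verification sketch for the iSCSI setup (placeholders as in the steps above).

# List the iSCSI interfaces and confirm each one is bound to the right network interface
# (the net_ifacename field should show <eth_name_1> / <eth_name_2>).
iscsiadm -m iface

# Show detailed session information, including the SCSI devices attached to each session.
iscsiadm -m session -P 3

# Print the stored node record and confirm CHAP and automatic startup are set
# (node.session.auth.authmethod = CHAP, node.startup = automatic).
iscsiadm -m node -T <IQN> -p <iscsi_target_ip_address_1> | grep -E 'node.startup|authmethod'
```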
Connect using a script (Ubuntu)

1. In the control panel, from the top menu, click Products and select Dedicated servers.
2. Go to the Network disks and storage section → the Network disks tab.
3. Open the network disk page.
4. In the Configuring network interfaces block, open the Ready configuration file tab.
5. Copy the parameters for the netplan configuration file. You will need to specify <eth_name_1> and <eth_name_2> — the names of the network interfaces on your server.
6. In the Configuring the iSCSI connection block, open the Ready-made script tab.
7. Copy the text of the iSCSI connection configuration script.
8. Connect to the server via SSH or through the KVM console.
9. Open the netplan configuration file in the vi text editor:

    vi /etc/netplan/50-cloud-init.yaml
10. Paste the parameters you copied in step 5. Specify <eth_name_1> and <eth_name_2> — the network interface names.
11. Exit the vi text editor and save the changes:

    :wq

12. Create a script file with the vi text editor:

    vi <file_name>

    Specify <file_name> — a file name with the .sh extension.
13. Switch to insert mode by pressing i.
14. Paste the script text you copied in step 7 into the file.
15. Press Esc.
16. Exit the vi text editor and save the changes:

    :wq

17. Make the script executable:

    chmod +x <file_name>

    Specify <file_name> — the name of the script file you created in step 12.
18. Run the script with arguments (see the example invocation below):

    ./<file_name> <eth_name_1> <eth_name_2>

    Specify:
    - <file_name> — the name of the script file you created in step 12;
    - <eth_name_1>, <eth_name_2> — the names of the network interfaces you specified in step 10.
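As a usage illustration for steps 12–18, assuming the script was saved as connect_disk.sh and the SAN network interfaces are named eno2 and eno3 (both names are hypothetical):

```bash
# Hypothetical file and interface names — replace with your own.
vi connect_disk.sh            # paste the script copied from the control panel, then :wq
chmod +x connect_disk.sh      # make the script executable
./connect_disk.sh eno2 eno3   # run it, passing the two SAN network interface names
```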
4. Check MPIO settings
MPIO (Multipath I/O) improves the fault tolerance of data transfer to the network drive. MPIO is configured by default.
Ubuntu
- Open the Device Mapper Multipath configuration file in the vi text editor:

    vi /etc/multipath.conf

- Make sure that the /etc/multipath.conf file contains only the following lines:

    defaults {
        user_friendly_names yes
    }

- Make sure that the bindings file contains the WWID of the block device:

    cat /etc/multipath/bindings

  The command output will display the WWID of the block device. For example:

    # Format:
    # alias wwid
    #
    mpatha 3600140530fab7e779fa41038a0a08f8e

- Make sure that the wwids file contains the WWID of the block device:

    cat /etc/multipath/wwids

  The command output will display the WWID of the block device. For example:

    # Valid WWIDs:
    /3600140530fab7e779fa41038a0a08f8e/

- Check the network drive connection and make sure that the policy parameter is set to service-time 0 (a quick scripted check is sketched after the example output):

    multipath -ll
The command output will display information about devices, paths, and current policy. For example:
mpatha (3600140530fab7e779fa41038a0a08f8e) dm-0 LIO-ORG,TCMU device
size=20G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=10 status=active
| `- 8:0:0:0 sdc 8:32 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
`- 9:0:0:0 sdd 8:48 active ready running
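As a quick scripted check (a sketch, not part of the official procedure), you can count the running paths reported by multipath; with both iSCSI sessions established, a network drive should normally show two of them:

```bash
# Count paths in the 'active ready running' state across all multipath devices.
# Two running paths are expected when both iSCSI sessions are up.
multipath -ll | grep -c 'active ready running'

# Show the multipath block device that will be formatted and mounted later
# (with user_friendly_names enabled it is typically /dev/mapper/mpatha).
ls -l /dev/mapper/
```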
5. Optional: Connect the network drive to another server
- Connect the network drive to the server in the control panel.
- Connect the network drive to the server in the server OS.
- Check the MPIO settings.
6. Prepare the network drive for operation
After connecting the network drive to the server, you can format it with the desired file system:

- A cluster file system (CFS) — a file system that allows multiple servers (nodes) to work simultaneously with the same data on shared storage. Examples of cluster file systems:
  - GFS2 (Global File System 2); for details, see GFS2 Overview in the official Red Hat documentation;
  - OCFS2 (Oracle Cluster File System 2); for details, see the official Oracle Linux documentation.
- Logical Volume Manager (LVM) — storage virtualization software designed for flexible management of physical storage devices. For details, see Configuring and managing logical volumes in the official Red Hat documentation.
- A standard file system, for example ext4 or XFS. Note that in read-write mode such a file system can be used on only one server at a time to avoid data corruption; use a cluster file system when multiple servers need shared access. A minimal formatting and mounting sketch for this case is shown after this list.
- VMFS (VMware File System) — a cluster file system used by VMware ESXi to store virtual machine files. It supports shared storage access by multiple ESXi hosts and automatically manages locks, preventing virtual machine files from being modified simultaneously, to ensure data integrity. For details, see VMware vSphere VMFS in the official VMware Storage documentation.
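For the single-server, standard file system case, the sketch below shows one way to format and mount the drive. It assumes the multipath device is /dev/mapper/mpatha (as in the MPIO example above) and uses an arbitrary mount point /mnt/netdisk; check the actual device alias in the multipath -ll output first, since formatting destroys any data on the disk.

```bash
# Format the multipath device with ext4 (destroys any existing data on the disk).
mkfs.ext4 /dev/mapper/mpatha

# Create a mount point and mount the drive.
mkdir -p /mnt/netdisk
mount /dev/mapper/mpatha /mnt/netdisk

# Optionally make the mount persistent across reboots; the _netdev option delays
# mounting until the network (and therefore the iSCSI session) is available.
echo '/dev/mapper/mpatha /mnt/netdisk ext4 _netdev 0 2' >> /etc/fstab
```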