Connect a network drive to the server
You can connect a network drive to one or more servers.
To connect a network drive to multiple servers, complete the configuration for each server the drive connects to.
- Create a SAN.
- Connect the network drive to the server in the control panel.
- Connect the network drive to the server in the server OS.
- Check the MPIO settings.
- Optional: connect the network drive to another server.
- Prepare the network drive for operation.
1. Create a SAN network
- In the control panel, from the top menu, click Products and select Dedicated servers.
- Go to the section Network disks and storage → tab Network disks.
- Open the disk page → tab Connecting to the server.
- Click on the link Create a SAN.
- Click Add SAN.
- Select an availability zone.
- Enter a subnet or keep the subnet generated by default. The subnet must belong to a private address range (10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16) and must not already be in use in your infrastructure (see the example after this list).
- Click Create a SAN.
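For example, a subnet such as 10.100.1.0/24 would satisfy these requirements, provided that range is not already used anywhere in your infrastructure; the value here is purely illustrative.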
2. Connect the network drive to the server in the control panel
- In the control panel, from the top menu, click Products and select Dedicated servers.
- Go to the section Network disks and storage → tab Network disks.
- Open the disk page → tab Connecting to the server.
- In the field Server click Select.
- Select the server to which the network drive will be connected. Network drives are available for connection to dedicated servers in the MSK-1 pool. You can connect a network drive to dedicated servers of a ready-made configuration with the tag You can connect network drives, and to dedicated servers of a custom configuration with an additional 2 × 10 GE network card and a connection to the 10 Gbps network disk SAN.
3. Connect the network drive to the server in the server OS
Ubuntu
1. Connect to the server via SSH or through the KVM console.
2. Open the netplan configuration file in the vi text editor:
vi /etc/netplan/50-cloud-init.yaml
3. Add IP addresses to the network interfaces connected to the SAN switch, set the MTU size, and add routes to reach the iSCSI targets:
<eth_name_1>:
  addresses:
    - <ip_address_1>
  mtu: 9000
  routes:
    - to: <destination_subnet_1>
      via: <next_hop_1>
<eth_name_2>:
  addresses:
    - <ip_address_2>
  mtu: 9000
  routes:
    - to: <destination_subnet_2>
      via: <next_hop_2>
Specify:
<eth_name_1> — the name of the first network interface. The first network interface is configured on the first port of the network card;
<eth_name_2> — the name of the second network interface. The second network interface is configured on the second port of the network card;
<ip_address_1> — the IP address of the first port of the network card. You can find it in the control panel: from the top menu, click Products → Dedicated servers → Network disks and storage → tab Network disks → disk page → section iSCSI initiator parameters → field IP address of port #1 of the network card;
<ip_address_2> — the IP address of the second port of the network card. You can find it in the control panel: from the top menu, click Products → Dedicated servers → Network disks and storage → tab Network disks → disk page → section iSCSI initiator parameters → field IP address of port #2 of the network card;
<destination_subnet_1> — the destination subnet for the first port of the network card. You can find it in the control panel: from the top menu, click Products → Dedicated servers → Network disks and storage → tab Network disks → disk page → section Static routes for connecting to iSCSI targets → column Destination subnetwork;
<destination_subnet_2> — the destination subnet for the second port of the network card. You can find it in the control panel: from the top menu, click Products → Dedicated servers → Network disks and storage → tab Network disks → disk page → section Static routes for connecting to iSCSI targets → column Destination subnetwork;
<next_hop_1> — the gateway for the first port of the network card. You can find it in the control panel: from the top menu, click Products → Dedicated servers → Network disks and storage → tab Network disks → disk page → section Static routes for connecting to iSCSI targets → column Next hop (gateway);
<next_hop_2> — the gateway for the second port of the network card. You can find it in the control panel: from the top menu, click Products → Dedicated servers → Network disks and storage → tab Network disks → disk page → section Static routes for connecting to iSCSI targets → column Next hop (gateway).
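For reference, a filled-in fragment might look like the following sketch. All values are illustrative (the interface names eth2 and eth3 and every address are made up, not taken from your control panel), and the address prefix length must match what the control panel shows for your SAN. In a real 50-cloud-init.yaml these stanzas sit under the ethernets: key:
eth2:
  addresses:
    - 10.100.2.11/24
  mtu: 9000
  routes:
    - to: 10.100.1.0/29
      via: 10.100.2.1
eth3:
  addresses:
    - 10.100.3.11/24
  mtu: 9000
  routes:
    - to: 10.100.1.4/29
      via: 10.100.3.1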
4. Exit the vi text editor, saving your changes:
:wq
5. Apply the configuration:
netplan apply
6. Print information about the network interfaces and verify that they are configured correctly:
ip a
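In addition to checking the addresses in the ip a output, you can optionally confirm that jumbo frames actually pass through the SAN switch by pinging each gateway with the don't-fragment flag and an 8972-byte payload (8972 bytes of ICMP data plus 28 bytes of IP and ICMP headers add up to the 9000-byte MTU). This check is a suggestion, not part of the required procedure:
ping -M do -s 8972 -c3 <next_hop_1>
ping -M do -s 8972 -c3 <next_hop_2>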
7. Optional: restart the server.
8. Check the speed of each network interface. It must be at least 10 Gbit/s:
ethtool <eth_name_1> | grep -i speed
ethtool <eth_name_2> | grep -i speed
Specify <eth_name_1> and <eth_name_2> — the names of the network interfaces configured in step 3.
9. If the speed is below 10 Gbps, file a ticket. If the speed is 10 Gbps or higher, go to step 10.
10. Verify that the iSCSI targets are available:
ping -c5 <iscsi_target_ip_address_1>
ping -c5 <iscsi_target_ip_address_2>
Specify:
<iscsi_target_ip_address_1> — the IP address of the first iSCSI target. You can find it in the control panel: from the top menu, click Products → Dedicated servers → Network disks and storage → tab Network disks → disk page → tab Connecting to the server → section Disk parameters for iSCSI connection → field IP address of iSCSI target 1;
<iscsi_target_ip_address_2> — the IP address of the second iSCSI target. You can find it in the control panel: from the top menu, click Products → Dedicated servers → Network disks and storage → tab Network disks → disk page → tab Connecting to the server → section Disk parameters for iSCSI connection → field IP address of iSCSI target 2.
11. Set the name of the iSCSI initiator:
vi /etc/iscsi/initiatorname.iscsi
InitiatorName=<initiator_name>
Specify <initiator_name> — the name of the iSCSI initiator. You can find it in the control panel: from the top menu, click Products → Dedicated servers → Network disks and storage → tab Network disks → disk page → section iSCSI initiator parameters → field Initiator's name.
12. Restart the iSCSI services:
systemctl restart iscsid.service
systemctl restart multipathd.service
13. Create the iSCSI interfaces:
iscsiadm -m iface -I <iscsi_eth_name_1> --op new
iscsiadm -m iface -I <iscsi_eth_name_2> --op new
Specify:
<iscsi_eth_name_1> — the name of the first iSCSI interface;
<iscsi_eth_name_2> — the name of the second iSCSI interface.
14. Bind the iSCSI interfaces to the network interfaces configured in step 3:
iscsiadm -m iface --interface <iscsi_eth_name_1> --op update -n iface.net_ifacename -v <eth_name_1>
iscsiadm -m iface --interface <iscsi_eth_name_2> --op update -n iface.net_ifacename -v <eth_name_2>
Specify:
<iscsi_eth_name_1> — the name of the first iSCSI interface created in step 13;
<iscsi_eth_name_2> — the name of the second iSCSI interface created in step 13;
<eth_name_1> — the name of the first network interface configured in step 3;
<eth_name_2> — the name of the second network interface configured in step 3.
15. Check the availability of the iSCSI targets through the iSCSI interfaces:
iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_1> --interface <iscsi_eth_name_1>
iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_2> --interface <iscsi_eth_name_2>
Specify:
<iscsi_target_ip_address_1> — the IP address of the first iSCSI target;
<iscsi_target_ip_address_2> — the IP address of the second iSCSI target;
<iscsi_eth_name_1> — the name of the first iSCSI interface created in step 13;
<iscsi_eth_name_2> — the name of the second iSCSI interface created in step 13.
A list of iSCSI targets will appear in the response. For example:
10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target
10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target
Here:
10.100.1.2:3260 — the IP address of the first iSCSI target;
iqn.2003-01.com.redhat.iscsi-gw:workshop-target — the IQN of the first iSCSI target. The IQN (iSCSI Qualified Name) is the full unique identifier of an iSCSI device;
10.100.1.6:3260 — the IP address of the second iSCSI target;
iqn.2003-01.com.redhat.iscsi-gw:workshop-target — the IQN of the second iSCSI target.
16. Configure CHAP authentication on the iSCSI initiator:
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.authmethod --value CHAP
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.authmethod --value CHAP
iscsiadm --mode node -T <IQN> --op update -n node.session.auth.username --value <username>
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.password --value <password>
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.password --value <password>
Specify:
<iscsi_target_ip_address_1> — the IP address of the first iSCSI target;
<iscsi_target_ip_address_2> — the IP address of the second iSCSI target;
<IQN> — the IQNs of the first and second iSCSI targets. You can find them in the control panel: from the top menu, click Products → Dedicated servers → Network disks and storage → tab Network disks → disk page → tab Connecting to the server → section Disk parameters for iSCSI connection → field Target Name;
<username> — the username for authorizing the iSCSI initiator. You can find it in the control panel: from the top menu, click Products → Dedicated servers → Network disks and storage → tab Network disks → disk page → tab Connecting to the server → section CHAP authentication → field Username;
<password> — the password for authorizing the iSCSI initiator. You can find it in the control panel: from the top menu, click Products → Dedicated servers → Network disks and storage → tab Network disks → disk page → tab Connecting to the server → section CHAP authentication → field Password.
17. Log in to the iSCSI targets through the iSCSI interfaces:
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --login --interface <iscsi_eth_name_1>
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --login --interface <iscsi_eth_name_2>
Specify:
<IQN> — the IQNs of the first and second iSCSI targets;
<iscsi_target_ip_address_1> — the IP address of the first iSCSI target;
<iscsi_target_ip_address_2> — the IP address of the second iSCSI target;
<iscsi_eth_name_1> — the name of the first iSCSI interface;
<iscsi_eth_name_2> — the name of the second iSCSI interface.
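After a successful login, the network drive appears in the OS as new block devices. As an optional sanity check (not part of the required procedure), you can list them:
lsblk
In the example outputs below these devices appear as sdc and sdd; the names on your server may differ.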
18. Verify that an iSCSI session has started for each iSCSI target:
iscsiadm -m session
Two active iSCSI sessions will appear in the response. For example:
tcp: [1] 10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)
tcp: [3] 10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)
Here:
[1] and [3] — the iSCSI session numbers.
19. To have the disks mount automatically on reboot, set the node.startup parameter of the iSCSI sessions to automatic:
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --op update -n node.startup -v automatic
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --op update -n node.startup -v automatic
systemctl enable iscsid.service
systemctl restart iscsid.service
Specify:
<IQN> — the IQNs of the first and second iSCSI targets;
<iscsi_target_ip_address_1> — the IP address of the first iSCSI target;
<iscsi_target_ip_address_2> — the IP address of the second iSCSI target.
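If you want to double-check that the setting was applied, you can print the stored node record and look for node.startup = automatic (an optional check using the same <IQN> and portal as above):
iscsiadm -m node -T <IQN> -p <iscsi_target_ip_address_1> | grep node.startup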
20. Optional: restart the server.
4. Check the MPIO settings
MPIO (Multipath I/O) improves the fault tolerance of data transfer to the network drive.
Ubuntu
1. Open the Device Mapper Multipath configuration file in the vi text editor:
vi /etc/multipath.conf
2. Make sure that the /etc/multipath.conf file contains only the following lines:
defaults {
    user_friendly_names yes
}
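If you had to edit /etc/multipath.conf to match the lines above, restart the multipath service so the change takes effect (this step is a suggestion and is only needed after an edit):
systemctl restart multipathd.service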
3. Make sure that the bindings file contains the WWID of the block device:
cat /etc/multipath/bindings
The command output will display the WWID of the block device. For example:
# alias wwid
#
mpatha 3600140530fab7e779fa41038a0a08f8e
4. Check the WWID of the block device:
cat /etc/multipath/wwids
Make sure that the wwids file contains the WWID of the block device. Example output:
# Valid WWIDs:
/3600140530fab7e779fa41038a0a08f8e/
5. Check the network drive connection:
multipath -ll
Make sure that the policy parameter is set to service-time 0. Example output:
mpatha (3600140530fab7e779fa41038a0a08f8e) dm-0 LIO-ORG,TCMU device
size=20G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=10 status=active
| `- 8:0:0:0 sdc 8:32 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  `- 9:0:0:0 sdd 8:48 active ready running
5. Optional: Connect the network drive to another server
- Connect the network drive to the server in the control panel.
- Connect the network drive to the server in the server OS.
- Check the MPIO settings.
6. Prepare the network drive for operation
After connecting the network drive to the server, you can format it with the desired file system:
- A cluster file system (CFS) is a file system that allows multiple servers (nodes) to work simultaneously with the same data on shared storage. Examples of cluster file systems:
  - GFS2 (Global File System 2); for details, see the GFS2 Overview article in Red Hat's official documentation;
  - OCFS2 (Oracle Cluster File System 2); for details, see the official Oracle Linux documentation.
- Logical Volume Manager (LVM) is storage virtualization software for flexible management of physical storage devices. For details, see the Configuring and managing logical volumes guide in Red Hat's official documentation;
- a standard file system, e.g. ext4 or XFS. Note that in read-write mode such a file system can be used on only one server at a time to avoid data corruption; for shared access from multiple servers, use a cluster file system. A formatting sketch for the single-server case follows this list;
- VMFS (VMware File System) is a cluster file system used by VMware ESXi to store virtual machine files. It supports shared storage access by multiple ESXi hosts and automatically manages locks, preventing virtual machine files from being modified simultaneously, to ensure data integrity. For details, see the VMware vSphere VMFS guide in VMware's official storage documentation.
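As a minimal sketch of the single-server case, formatting the multipath device with ext4 and mounting it might look like this. The device name /dev/mapper/mpatha matches the example multipath -ll output above, and the mount point /mnt/netdisk is an arbitrary example; substitute your actual values:
mkfs.ext4 /dev/mapper/mpatha
mkdir -p /mnt/netdisk
mount /dev/mapper/mpatha /mnt/netdisk
df -h /mnt/netdisk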