Connect a network disk to the server
- Create a SAN network.
- Connect the network disk to the server in the control panel.
- Connect the network disk to the server in the server OS.
- Check MPIO settings.
1. Create a SAN network
- In the control panel, go to Servers and hardware → Network disks and storage → the Network disks tab.
- Open the disk page → the Connecting to the server tab.
- Click the Create a SAN link.
- Click Add SAN.
- Select the disk location (availability zone).
- Enter a subnet or keep the one generated by default. The subnet must belong to a private address range (10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16) and must not already be in use in your infrastructure. For example, 192.168.100.0/24 qualifies if nothing else in your network uses it.
- Click Create a SAN.
2. Connect the network disk to the server in the control panel
- In the control panel, go to Servers and hardware → Network disks and storage → the Network disks tab.
- Open the disk page → the Connecting to the server tab.
- In the Server field, click Select.
- Select the server to which the network disk will be connected. You can only connect the network disk to a dedicated server in the MSK-1 pool with a 2 × 10 GE NIC and a 10 Gbps Network Disk SAN connection.
- Click Connect.
3. Connect the network disk to the server in the server OS
Ubuntu
- Connect to the server over SSH or through the KVM console.
- Open the netplan configuration file in the vi text editor:
vi /etc/netplan/50-cloud-init.yaml
- Add IP addresses to the network interfaces connected to the SAN switch, set the MTU size, and add routes to reach the iSCSI targets (a filled-in example follows the placeholder list below):
<eth_name_1>:
  addresses:
    - <ip_address_1>
  mtu: 9000
  routes:
    - to: <destination_subnet_1>
      via: <next_hop_1>
<eth_name_2>:
  addresses:
    - <ip_address_2>
  mtu: 9000
  routes:
    - to: <destination_subnet_2>
      via: <next_hop_2>
Specify:
<eth_name_1> — name of the first network interface. The first network interface is configured on the first port of the network card;
<eth_name_2> — name of the second network interface. The second network interface is configured on the second port of the network card;
<ip_address_1> — IP address of the first port of the network card. You can find it in the control panel under Servers and hardware → Network disks and storage → the Network disks tab → disk page → the iSCSI initiator parameters section → the IP address of port #1 of the network card field;
<ip_address_2> — IP address of the second port of the network card. You can find it in the control panel under Servers and hardware → Network disks and storage → the Network disks tab → disk page → the iSCSI initiator parameters section → the IP address of port #2 of the network card field;
<destination_subnet_1> — destination subnet for the first port of the network card. You can find it in the control panel under Servers and hardware → Network disks and storage → the Network disks tab → disk page → the Static routes for connecting to iSCSI targets section → the Destination subnetwork column;
<destination_subnet_2> — destination subnet for the second port of the network card. You can find it in the control panel under Servers and hardware → Network disks and storage → the Network disks tab → disk page → the Static routes for connecting to iSCSI targets section → the Destination subnetwork column;
<next_hop_1> — gateway for the first port of the network card. You can find it in the control panel under Servers and hardware → Network disks and storage → the Network disks tab → disk page → the Static routes for connecting to iSCSI targets section → the Next hop (gateway) column;
<next_hop_2> — gateway for the second port of the network card. You can find it in the control panel under Servers and hardware → Network disks and storage → the Network disks tab → disk page → the Static routes for connecting to iSCSI targets section → the Next hop (gateway) column.
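For illustration, a fully filled-in fragment might look like the following. All interface names, addresses, subnets, and gateways here are hypothetical; the real values come from the control panel. Note that the network and ethernets keys already exist in 50-cloud-init.yaml, and netplan expects each address with its prefix length:
network:
  version: 2
  ethernets:
    eno2:
      addresses:
        - 192.168.100.2/24
      mtu: 9000
      routes:
        - to: 10.100.1.0/30
          via: 192.168.100.1
    eno3:
      addresses:
        - 192.168.101.2/24
      mtu: 9000
      routes:
        - to: 10.100.1.4/30
          via: 192.168.101.1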
- Exit the vi text editor with your changes saved:
:wq
- Apply the configuration:
netplan apply
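If you are connected over SSH, netplan try (available on recent Ubuntu releases) is a safer alternative: it applies the configuration and rolls it back automatically after a timeout unless you confirm the change:
netplan try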
- Print the information about the network interfaces and verify that they are configured correctly:
ip a
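To narrow the output to the two SAN interfaces and check the address and MTU of each (placeholders as in step 3):
ip a show <eth_name_1> | grep -E 'inet |mtu'
ip a show <eth_name_2> | grep -E 'inet |mtu'
Each command should show the configured IP address and mtu 9000.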
- Optional: reboot the server.
- Check the speed of each network interface. It must be at least 10 Gbps:
ethtool <eth_name_1> | grep -i speed
ethtool <eth_name_2> | grep -i speed
Specify <eth_name_1> and <eth_name_2> — names of the network interfaces configured in step 3.
- If the speed is below 10 Gbps, file a ticket. If the speed is greater than or equal to 10 Gbps, go to step 10.
- Verify that the iSCSI targets are available:
ping -c5 <iscsi_target_ip_address_1>
ping -c5 <iscsi_target_ip_address_2>
Specify:
<iscsi_target_ip_address_1> — IP address of the first iSCSI target. You can find it in the control panel under Servers and hardware → Network disks and storage → the Network disks tab → disk page → the Connecting to the server tab → the Disk parameters for iSCSI connection section → the IP address of iSCSI target 1 field;
<iscsi_target_ip_address_2> — IP address of the second iSCSI target. You can find it in the control panel under Servers and hardware → Network disks and storage → the Network disks tab → disk page → the Connecting to the server tab → the Disk parameters for iSCSI connection section → the IP address of iSCSI target 2 field.
- Set the name of the iSCSI initiator:
vi /etc/iscsi/initiatorname.iscsi
InitiatorName=<initiator_name>
Specify <initiator_name> — name of the iSCSI initiator. You can find it in the control panel under Servers and hardware → Network disks and storage → the Network disks tab → disk page → the iSCSI initiator parameters section → the Initiator's name field.
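For illustration, after editing, the file would contain a single line like the following; the IQN here is hypothetical, and the real name comes from the control panel:
InitiatorName=iqn.1993-08.org.debian:01:0123456789ab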
- Restart the iscsid and multipathd services:
systemctl restart iscsid.service
systemctl restart multipathd.service
- Create iSCSI interfaces:
iscsiadm -m iface -I <iscsi_eth_name_1> --op new
iscsiadm -m iface -I <iscsi_eth_name_2> --op new
Specify:
<iscsi_eth_name_1> — name of the first iSCSI interface;
<iscsi_eth_name_2> — name of the second iSCSI interface.
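For example, with the hypothetical names iface-san0 and iface-san1 (any unique names work):
iscsiadm -m iface -I iface-san0 --op new
iscsiadm -m iface -I iface-san1 --op new
Running iscsiadm -m iface afterwards lists all iSCSI interfaces, so you can confirm that both were created.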
- Bind the iSCSI interfaces to the network interfaces configured in step 3:
iscsiadm -m iface --interface <iscsi_eth_name_1> --op update -n iface.net_ifacename -v <eth_name_1>
iscsiadm -m iface --interface <iscsi_eth_name_2> --op update -n iface.net_ifacename -v <eth_name_2>
Specify:
<iscsi_eth_name_1> — name of the first iSCSI interface you created in step 13;
<iscsi_eth_name_2> — name of the second iSCSI interface you created in step 13;
<eth_name_1> — name of the first network interface you configured in step 3;
<eth_name_2> — name of the second network interface you configured in step 3.
- Check the availability of the iSCSI targets through the iSCSI interfaces:
iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_1> --interface <iscsi_eth_name_1>
iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_2> --interface <iscsi_eth_name_2>
Specify:
<iscsi_target_ip_address_1> — IP address of the first iSCSI target;
<iscsi_target_ip_address_2> — IP address of the second iSCSI target;
<iscsi_eth_name_1> — name of the first iSCSI interface you created in step 13;
<iscsi_eth_name_2> — name of the second iSCSI interface you created in step 13.
A list of iSCSI targets will appear in the response. For example:
10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target
10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target
Here:
10.100.1.2:3260 — IP address and port of the first iSCSI target;
iqn.2003-01.com.redhat.iscsi-gw:workshop-target — IQN of the first iSCSI target. The IQN (iSCSI Qualified Name) is the full unique identifier of an iSCSI device;
10.100.1.6:3260 — IP address and port of the second iSCSI target;
iqn.2003-01.com.redhat.iscsi-gw:workshop-target — IQN of the second iSCSI target.
- Configure CHAP authentication on the iSCSI initiator:
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.authmethod --value CHAP
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.authmethod --value CHAP
iscsiadm --mode node -T <IQN> --op update -n node.session.auth.username --value <username>
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.password --value <password>
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.password --value <password>
Specify:
<iscsi_target_ip_address_1> — IP address of the first iSCSI target;
<iscsi_target_ip_address_2> — IP address of the second iSCSI target;
<IQN> — IQN of the first and second iSCSI targets. You can find it in the control panel under Servers and hardware → Network disks and storage → the Network disks tab → disk page → the Connecting to the server tab → the Disk parameters for iSCSI connection section → the Target Name field;
<username> — username for authorizing the iSCSI initiator. You can find it in the control panel under Servers and hardware → Network disks and storage → the Network disks tab → disk page → the Connecting to the server tab → the CHAP authentication section → the Username field;
<password> — password for authorizing the iSCSI initiator. You can find it in the control panel under Servers and hardware → Network disks and storage → the Network disks tab → disk page → the Connecting to the server tab → the CHAP authentication section → the Password field.
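For illustration, using the example target and portals from the discovery step above and hypothetical CHAP credentials (username chapuser, password chap-Secret-01):
iscsiadm --mode node -T iqn.2003-01.com.redhat.iscsi-gw:workshop-target -p 10.100.1.2 --op update -n node.session.auth.authmethod --value CHAP
iscsiadm --mode node -T iqn.2003-01.com.redhat.iscsi-gw:workshop-target -p 10.100.1.6 --op update -n node.session.auth.authmethod --value CHAP
iscsiadm --mode node -T iqn.2003-01.com.redhat.iscsi-gw:workshop-target --op update -n node.session.auth.username --value chapuser
iscsiadm --mode node -T iqn.2003-01.com.redhat.iscsi-gw:workshop-target -p 10.100.1.2 --op update -n node.session.auth.password --value chap-Secret-01
iscsiadm --mode node -T iqn.2003-01.com.redhat.iscsi-gw:workshop-target -p 10.100.1.6 --op update -n node.session.auth.password --value chap-Secret-01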
- Log in to the iSCSI targets through the iSCSI interfaces:
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --login --interface <iscsi_eth_name_1>
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --login --interface <iscsi_eth_name_2>
Specify:
<IQN> — IQN of the first and second iSCSI targets;
<iscsi_target_ip_address_1> — IP address of the first iSCSI target;
<iscsi_target_ip_address_2> — IP address of the second iSCSI target;
<iscsi_eth_name_1> — name of the first iSCSI interface;
<iscsi_eth_name_2> — name of the second iSCSI interface.
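After a successful login, each path appears as a separate SCSI block device, so the one network disk shows up twice until multipath aggregates the paths. A quick way to list the devices (names such as sdc and sdd will vary):
lsblk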
- Verify that the iSCSI session for each iSCSI target has started:
iscsiadm -m session
Two active iSCSI sessions will appear in the response. For example:
tcp: [1] 10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)
tcp: [3] 10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)
Here:
[1] and [3] — iSCSI session numbers.
- To have the disks mount automatically on reboot, set the node.startup parameter of the iSCSI sessions to automatic:
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --op update -n node.startup -v automatic
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --op update -n node.startup -v automatic
systemctl enable iscsid.service
systemctl restart iscsid.service
Specify:
<IQN> — IQN of the first and second iSCSI targets;
<iscsi_target_ip_address_1> — IP address of the first iSCSI target;
<iscsi_target_ip_address_2> — IP address of the second iSCSI target.
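To confirm the setting took effect, you can print a node record and check the node.startup value (placeholders as above); the output should include node.startup = automatic:
iscsiadm -m node -T <IQN> -p <iscsi_target_ip_address_1> | grep node.startup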
- Optional: reboot the server.
4. Check MPIO settings
MPIO (Multipath I/O) improves the fault tolerance of data transfer to the network disk.
MPIO is configured by default. Check that the settings are correct.
Ubuntu
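Before inspecting the configuration files, you can verify that the multipath daemon itself is running; this is a quick sanity check rather than part of the steps below:
systemctl status multipathd.service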
- Open the Device Mapper Multipath configuration file in the vi text editor:
vi /etc/multipath.conf
- Make sure that the /etc/multipath.conf file contains only the following lines:
defaults {
    user_friendly_names yes
}
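The user_friendly_names yes option makes multipath expose the disk under a short alias such as mpatha (used in the files and command output below) instead of a name derived from the WWID.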
- Open the bindings file in the vi text editor:
vi /etc/multipath/bindings
- Make sure that the WWID information of the block device is in the file. For example:
# Multipath bindings, Version : 1.0
# NOTE: this file is automatically maintained by the multipath program.
# You should not need to edit this file in normal circumstances.
#
# Format:
# alias wwid
#
mpatha 3600140530fab7e779fa41038a0a08f8e
- Open the wwids file in the vi text editor:
vi /etc/multipath/wwids
- Make sure that the WWID information of the block device is in the file. The format differs from bindings: each WWID is wrapped in slashes, and the WWID matches the one in bindings. For example:
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/3600140530fab7e779fa41038a0a08f8e/
- Check the network disk connection and make sure that the policy parameter is set to service-time 0:
multipath -ll
The command output will display information about devices, paths, and current policy. For example:
mpatha (3600140530fab7e779fa41038a0a08f8e) dm-0 LIO-ORG,TCMU device
size=20G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=10 status=active
| `- 8:0:0:0 sdc 8:32 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
`- 9:0:0:0 sdd 8:48 active ready running
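In this output, the network disk is visible to the OS as the single multipath device mpatha (/dev/mapper/mpatha), which aggregates the two paths sdc and sdd: one path group is active and the other is enabled as a standby, so I/O continues if either path fails.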