Connect the network disk to a dedicated server with Proxmox OS
Network disks are available for connection to dedicated servers in the MSK-1 pool. You can connect a network disk to dedicated servers of a ready-made configuration with a tag, as well as to dedicated servers of an arbitrary configuration with an additional 2 × 10 GE NIC + 10 Gbps Network Disk SAN connection.
You can connect the network disk to one or more servers.
- Create a SAN.
- Connect the network disk to the server.
- Connect the network disk to the server in the server OS.
- Configure MPIO.
- Add the disk to Proxmox VE.
- Optional: connect the network disk to another server.
1. Create a SAN network
- In the Control Panel, on the top menu, click Products and select Dedicated Servers.
- Go to Network Disks and Storage → Network Disks tab.
- Open the disk page → Server Connection tab.
- Click Create SAN.
- Click Add SAN.
- Select an availability zone.
- Enter a subnet or keep the subnet generated by default. The subnet must belong to a private address range (10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16) and must not already be in use in your infrastructure.
- Click Create SAN.
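If you script the SAN setup, the private-range requirement can be validated before creating the SAN. Below is a minimal sketch using Python's standard `ipaddress` module; the subnets in the example calls are hypothetical, not values from your control panel:

```python
import ipaddress

# RFC 1918 private ranges allowed for the SAN subnet
PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_valid_san_subnet(cidr: str) -> bool:
    """Return True if the subnet lies inside one of the private ranges."""
    subnet = ipaddress.ip_network(cidr, strict=True)
    return any(subnet.subnet_of(r) for r in PRIVATE_RANGES)

print(is_valid_san_subnet("192.168.10.0/24"))  # True
print(is_valid_san_subnet("8.8.8.0/24"))       # False
```

Checking that the subnet is not already in use in your infrastructure still has to be done against your own address inventory.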
2. Connect the network disk to the server
- In the Control Panel, on the top menu, click Products and select Dedicated Servers.
- Go to Network Disks and Storage → Network Disks tab.
- Open the disk page → Server Connection tab.
- In the Server field, click Select.
- Select the server to which the network disk will be connected.
3. Connect the network disk to the server in the server OS
You can connect a network disk to the server manually or using a ready-made script generated in the control panel. The script can be used only on Ubuntu.
- Open the configuration file `/etc/network/interfaces.d/01-san` with the `vi` text editor:

  ```bash
  vi /etc/network/interfaces.d/01-san
  ```
- On the network interfaces connected to the SAN switch, add IP addresses and routes to gain access to the iSCSI targets:

  ```bash
  auto <eth_name_1>
  iface <eth_name_1> inet static
  address <ip_address_1>
  up ip route add <destination_subnet_1> via <next_hop_1> dev <eth_name_1>

  auto <eth_name_2>
  iface <eth_name_2> inet static
  address <ip_address_2>
  up ip route add <destination_subnet_2> via <next_hop_2> dev <eth_name_2>
  ```

  Specify:

  - `<eth_name_1>` — name of the first network interface; it is configured on the first port of the network card;
  - `<eth_name_2>` — name of the second network interface; it is configured on the second port of the network card;
  - `<ip_address_1>` — IP address of the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Port IP address;
  - `<ip_address_2>` — IP address of the second port on the network card. You can view it in the same column of the control panel;
  - `<destination_subnet_1>` — destination subnet for the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Destination subnet;
  - `<destination_subnet_2>` — destination subnet for the second port on the network card. You can view it in the same column of the control panel;
  - `<next_hop_1>` — gateway for the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Next hop (gateway);
  - `<next_hop_2>` — gateway for the second port on the network card. You can view it in the same column of the control panel.
- Exit the `vi` text editor with your changes saved:

  ```bash
  :wq
  ```
- Apply the configuration by restarting the network:

  ```bash
  systemctl restart networking
  ```
- Print information about the network interfaces and verify that they are configured correctly:

  ```bash
  ip a
  ```
- Optional: reboot the server.
- Verify that the speed of each interface is at least 10 Gbit/s:

  ```bash
  ethtool <eth_name_1> | grep -i speed
  ethtool <eth_name_2> | grep -i speed
  ```

  Specify `<eth_name_1>` and `<eth_name_2>` — names of the network interfaces you configured in step 3.

- If the speed is below 10 Gbit/s, create a ticket.
- Verify that the iSCSI targets are available:

  ```bash
  ping -c5 <iscsi_target_ip_address_1>
  ping -c5 <iscsi_target_ip_address_2>
  ```

  Specify:

  - `<iscsi_target_ip_address_1>` — IP address of the first iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 1;
  - `<iscsi_target_ip_address_2>` — IP address of the second iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 2.
- Enter the name of the iSCSI initiator:

  ```bash
  vi /etc/iscsi/initiatorname.iscsi
  ```

  ```
  InitiatorName=<initiator_name>
  ```

  Specify `<initiator_name>` — name of the iSCSI initiator. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Initiator name.

- Restart iSCSI:

  ```bash
  systemctl restart iscsid.service
  ```
- Create iSCSI interfaces:

  ```bash
  iscsiadm -m iface -I <iscsi_eth_name_1> --op new
  iscsiadm -m iface -I <iscsi_eth_name_2> --op new
  ```

  Specify:

  - `<iscsi_eth_name_1>` — name of the first iSCSI interface;
  - `<iscsi_eth_name_2>` — name of the second iSCSI interface.
- Bind the iSCSI interfaces to the network interfaces you configured in step 3:

  ```bash
  iscsiadm -m iface --interface <iscsi_eth_name_1> --op update -n iface.net_ifacename -v <eth_name_1>
  iscsiadm -m iface --interface <iscsi_eth_name_2> --op update -n iface.net_ifacename -v <eth_name_2>
  ```

  Specify:

  - `<iscsi_eth_name_1>` — name of the first iSCSI interface you created in step 13;
  - `<iscsi_eth_name_2>` — name of the second iSCSI interface you created in step 13;
  - `<eth_name_1>` — name of the first network interface you configured in step 3;
  - `<eth_name_2>` — name of the second network interface you configured in step 3.
- Check the availability of the iSCSI targets through the iSCSI interfaces:

  ```bash
  iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_1> --interface <iscsi_eth_name_1>
  iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_2> --interface <iscsi_eth_name_2>
  ```

  Specify:

  - `<iscsi_target_ip_address_1>` — IP address of the first iSCSI target;
  - `<iscsi_target_ip_address_2>` — IP address of the second iSCSI target;
  - `<iscsi_eth_name_1>` — name of the first iSCSI interface you created in step 13;
  - `<iscsi_eth_name_2>` — name of the second iSCSI interface you created in step 13.

  A list of iSCSI targets will appear in the response. For example:

  ```
  10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target
  10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target
  ```

  Here:

  - `10.100.1.2:3260` — IP address of the first iSCSI target;
  - `iqn.2003-01.com.redhat.iscsi-gw:workshop-target` — IQN of the first iSCSI target. The IQN (iSCSI Qualified Name) is the full unique identifier of the iSCSI device;
  - `10.100.1.6:3260` — IP address of the second iSCSI target;
  - `iqn.2003-01.com.redhat.iscsi-gw:workshop-target` — IQN of the second iSCSI target.
- Configure CHAP authentication on the iSCSI initiator:

  ```bash
  iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.authmethod --value CHAP
  iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.authmethod --value CHAP
  iscsiadm --mode node -T <iqn> --op update -n node.session.auth.username --value <username>
  iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.password --value <password>
  iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.password --value <password>
  ```

  Specify:

  - `<iscsi_target_ip_address_1>` — IP address of the first iSCSI target;
  - `<iscsi_target_ip_address_2>` — IP address of the second iSCSI target;
  - `<iqn>` — IQNs of the first and second iSCSI targets. You can view them in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Target name;
  - `<username>` — username for authorization of the iSCSI initiator. You can view it in the control panel: in the same block → field Username;
  - `<password>` — password for authorization of the iSCSI initiator. You can view it in the control panel: in the same block → field Password.
- Log in to the iSCSI targets through the iSCSI interfaces:

  ```bash
  iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_1> --login --interface <iscsi_eth_name_1>
  iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_2> --login --interface <iscsi_eth_name_2>
  ```

  Specify:

  - `<iqn>` — IQNs of the first and second iSCSI targets;
  - `<iscsi_target_ip_address_1>` — IP address of the first iSCSI target;
  - `<iscsi_target_ip_address_2>` — IP address of the second iSCSI target;
  - `<iscsi_eth_name_1>` — name of the first iSCSI interface;
  - `<iscsi_eth_name_2>` — name of the second iSCSI interface.
- Verify that the iSCSI session for each iSCSI target has started:

  ```bash
  iscsiadm -m session
  ```

  Two active iSCSI sessions will appear in the response. For example:

  ```
  tcp: [1] 10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)
  tcp: [3] 10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)
  ```

  Here `[1]` and `[3]` are the iSCSI session numbers.
- Enable automatic disk mounting when the server restarts by setting the `node.startup` parameter to `automatic`:

  ```bash
  iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_1> --op update -n node.startup -v automatic
  iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_2> --op update -n node.startup -v automatic
  systemctl enable iscsid.service
  systemctl restart iscsid.service
  ```

  Specify:

  - `<iqn>` — IQNs of the first and second iSCSI targets;
  - `<iscsi_target_ip_address_1>` — IP address of the first iSCSI target;
  - `<iscsi_target_ip_address_2>` — IP address of the second iSCSI target.
- Optional: reboot the server.
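If you automate these checks, the `iscsiadm -m discovery` output shown above follows the fixed pattern `<ip>:<port>,<tpgt> <iqn>`. A minimal parsing sketch in Python; the sample lines are the example discovery output from this guide:

```python
def parse_discovery(output: str):
    """Parse `iscsiadm -m discovery` output into (address, port, iqn) tuples."""
    records = []
    for line in output.strip().splitlines():
        portal, iqn = line.split()          # e.g. "10.100.1.2:3260,1" and the IQN
        addr_port = portal.split(",")[0]    # drop the target portal group tag
        addr, port = addr_port.rsplit(":", 1)
        records.append((addr, int(port), iqn))
    return records

sample = """10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target
10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target"""

for addr, port, iqn in parse_discovery(sample):
    print(addr, port, iqn)
```

A script like this can feed the portal addresses and IQN into the login and `node.startup` commands above instead of copying them by hand.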
4. Configure MPIO
Multipath I/O (MPIO) is multi-path input/output that improves the fault tolerance of data transfer to the network disk.
- Update the list of packages:

  ```bash
  apt update
  apt upgrade
  ```

- Install multipath:

  ```bash
  apt install multipath-tools
  ```
- Open the `/etc/multipath.conf` configuration file with the `vi` text editor:

  ```bash
  vi /etc/multipath.conf
  ```
- Insert the parameters into the configuration file:

  ```
  defaults {
      user_friendly_names yes
      find_multipaths yes
  }

  blacklist {
  }
  ```
Exit the
vi
text editor with your changes saved::wq
- Apply the configuration by restarting multipath:

  ```bash
  systemctl restart multipathd
  ```
- Check the network disk connection and make sure that the `policy` parameter is set to `service-time 0`:

  ```bash
  multipath -ll
  ```

  The command output will display information about devices, paths, and the current policy. For example:

  ```
  mpatha (3600140530fab7e779fa41038a0a08f8e) dm-0 LIO-ORG,TCMU device
  size=20G features='0' hwhandler='1 alua' wp=rw
  |-+- policy='service-time 0' prio=10 status=active
  | `- 8:0:0:0 sdc 8:32 active ready running
  `-+- policy='service-time 0' prio=10 status=enabled
    `- 9:0:0:0 sdd 8:48 active ready running
  ```
- Make sure the `bindings` file has information about the WWID of the block device:

  ```bash
  cat /etc/multipath/bindings
  ```

  The command output will display the WWID of the block device. For example:

  ```
  # Format:
  # alias wwid
  #
  mpatha 3600140530fab7e779fa41038a0a08f8e
  ```
- Make sure the `wwids` file has information about the WWID of the block device:

  ```bash
  cat /etc/multipath/wwids
  ```

  The command output will display the WWID of the block device. For example:

  ```
  # Valid WWIDs:
  /3600140530fab7e779fa41038a0a08f8e/
  ```
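The policy check can also be scripted instead of reading the `multipath -ll` output by eye. A minimal sketch in Python; the sample string is the example output from this guide:

```python
def paths_use_service_time(output: str) -> bool:
    """Return True if every path group in `multipath -ll` output uses service-time 0."""
    policy_lines = [line for line in output.splitlines() if "policy=" in line]
    return bool(policy_lines) and all(
        "policy='service-time 0'" in line for line in policy_lines
    )

sample = """mpatha (3600140530fab7e779fa41038a0a08f8e) dm-0 LIO-ORG,TCMU device
size=20G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=10 status=active
| `- 8:0:0:0 sdc 8:32 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  `- 9:0:0:0 sdd 8:48 active ready running"""

print(paths_use_service_time(sample))  # True
```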
5. Add the disk to Proxmox VE
- In your browser, open the page:

  ```
  https://<ip_address>:8006
  ```

  Specify `<ip_address>` — public IP address of the server. You can copy it in the control panel: in the top menu, click Products → Dedicated Servers → server page → Operating System tab → in the IP field, click the copy icon.
- In the menu on the left, go to Datacenter → Storage.
- In the Add field, select iSCSI.
- In the ID field, enter the name of the connection.
- In the Portal field, enter the IP address of the iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target.
- In the Target field, select the IQN of the iSCSI target. The IQN (iSCSI Qualified Name) is the full unique identifier of the iSCSI device.
- If there is no iSCSI target IQN in the Target field, add it manually:

  7.1 Open the `/etc/pve/storage.cfg` configuration file with the `vi` text editor:

  ```bash
  vi /etc/pve/storage.cfg
  ```

  7.2 Add two connections:

  ```
  iscsi: <iscsi_target_name_1>
      portal <iscsi_target_ip_address_1>
      target <iqn>
      content none

  iscsi: <iscsi_target_name_2>
      portal <iscsi_target_ip_address_2>
      target <iqn>
      content none
  ```

  Specify:

  - `<iscsi_target_name_1>` — name of the first connection;
  - `<iscsi_target_ip_address_1>` — IP address of the first iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 1;
  - `<iscsi_target_name_2>` — name of the second connection;
  - `<iscsi_target_ip_address_2>` — IP address of the second iSCSI target. You can view it in the same block → field IP address of the iSCSI target 2;
  - `<iqn>` — IQN of the iSCSI target. You can view it in the same block → field Target name.

  7.3 Exit the `vi` text editor with the changes saved:

  ```bash
  :wq
  ```
- Check the Enabled checkbox.
- Check the Use LUNs directly checkbox.
- Click Add.
- In the menu on the left, go to Datacenter → Storage.
- Click Add and select LVM.
- In the ID field, enter the name of the volume.
- In the Base storage field, select the connection name you specified in step 4.
- In the Base volume field, select the network disk.
- In the Volume group field, enter the name of the volume group.
- Check the Enable checkbox.
- Check the Shared checkbox.
- Click Add.
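For illustration only, the two `/etc/pve/storage.cfg` connections described in step 7.2 might look like this when filled in. The connection names here are hypothetical, and the portal addresses and IQN are the example values used earlier in this guide, not values for your disk:

```
iscsi: san-target-1
    portal 10.100.1.2
    target iqn.2003-01.com.redhat.iscsi-gw:workshop-target
    content none

iscsi: san-target-2
    portal 10.100.1.6
    target iqn.2003-01.com.redhat.iscsi-gw:workshop-target
    content none
```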