Connect the network disk to a dedicated server with Proxmox OS

Network disks are available for connection to dedicated servers in the MSK-1 pool. You can connect a network disk to dedicated servers of a ready-made configuration with a tag, as well as to dedicated servers of an arbitrary configuration with an additional 2 × 10 GE NIC + 10 Gbps Network Disk SAN connection.

You can connect a network disk to one or more servers.

  1. Create a SAN.
  2. Connect the network disk to the server in the control panel.
  3. Connect the network disk to the server in the server OS.
  4. Configure MPIO.
  5. Add the disk to Proxmox VE.
  6. Optional: connect the network disk to another server.

1. Create a SAN

  1. In the Control Panel, on the top menu, click Products and select Dedicated Servers.
  2. Go to the Network Disks and Storage section → Network Disks tab.
  3. Open the disk page → Server Connection tab.
  4. Click Create SAN.
  5. Click Add SAN.
  6. Select an availability zone.
  7. Enter a subnet or leave the subnet that is generated by default. The subnet must belong to the private address range 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16 and must not already be in use in your infrastructure.
  8. Click Create SAN.

2. Connect the network disk to the server in the control panel

  1. In the Control Panel, on the top menu, click Products and select Dedicated Servers.
  2. Go to the Network Disks and Storage section → Network Disks tab.
  3. Open the disk page → Server Connection tab.
  4. In the Server field, click Select.
  5. Select the server to which the network disk will be connected.

3. Connect the network disk to the server in the server OS

You can connect a network disk to the server manually or using a ready-made script generated in the control panel. The script can be used only on Ubuntu.

  1. Connect to the server via SSH or via KVM console.

  2. Open the configuration file /etc/network/interfaces.d/01-san with the vi text editor:

    vi /etc/network/interfaces.d/01-san
  3. On the network interfaces connected to the SAN switch, assign IP addresses and add routes to reach the iSCSI targets:

    auto <eth_name_1>
    iface <eth_name_1> inet static
    address <ip_address_1>
    up ip route add <destination_subnet_1> via <next_hop_1> dev <eth_name_1>

    auto <eth_name_2>
    iface <eth_name_2> inet static
    address <ip_address_2>
    up ip route add <destination_subnet_2> via <next_hop_2> dev <eth_name_2>

    Specify:

    • <eth_name_1> — name of the first network interface; it is configured on the first port of the network card;
    • <eth_name_2> — name of the second network interface; it is configured on the second port of the network card;
    • <ip_address_1> and <ip_address_2> — IP addresses of the first and second ports of the network card. You can view them in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Port IP address;
    • <destination_subnet_1> and <destination_subnet_2> — destination subnets for the first and second ports of the network card: same block, column Destination Subnet;
    • <next_hop_1> and <next_hop_2> — gateways for the first and second ports of the network card: same block, column Next hop (gateway).
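
    For reference, a filled-in sketch of the file with hypothetical interface names, addresses, and routes (take the real values from the control panel):

    auto eth2
    iface eth2 inet static
    address 10.100.2.2/30
    up ip route add 10.100.1.0/30 via 10.100.2.1 dev eth2

    auto eth3
    iface eth3 inet static
    address 10.100.3.2/30
    up ip route add 10.100.1.4/30 via 10.100.3.1 dev eth3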
  4. Exit the vi text editor with your changes saved:

    :wq
  5. Apply the configuration by restarting the networking service:

    systemctl restart networking
  6. Print the information about the network interfaces and verify that they are configured correctly:

    ip a
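
    To check only the SAN interfaces, you can print them in brief format (the interface names here are hypothetical):

    ip -br addr show eth2
    ip -br addr show eth3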
  7. Optional: reboot the server.

  8. Verify that the speed of each interface is at least 10 Gbit/s:

    ethtool <eth_name_1> | grep -i speed
    ethtool <eth_name_2> | grep -i speed

    Specify <eth_name_1> and <eth_name_2> as the names of the network interfaces configured in step 3.
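
    On a correctly linked 10 GE port, the filtered output typically looks like this:

    Speed: 10000Mb/s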

  9. If the speed is below 10 Gbps, create a ticket.

  10. Verify that the iSCSI targets are available:

    ping -c5 <iscsi_target_ip_address_1>
    ping -c5 <iscsi_target_ip_address_2>

    Specify:

    • <iscsi_target_ip_address_1> — IP address of the first iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 1;
    • <iscsi_target_ip_address_2> — IP address of the second iSCSI target: same block, field IP address of the iSCSI target 2.
  11. Set the iSCSI initiator name in the /etc/iscsi/initiatorname.iscsi file:

    vi /etc/iscsi/initiatorname.iscsi
    InitiatorName=<initiator_name>

    Specify <initiator_name> — iSCSI initiator name. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Initiator name.
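
    A sketch of the resulting file; the IQN below is hypothetical, use the initiator name from the control panel:

    InitiatorName=iqn.1993-08.org.debian:01:abcdef123456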

  12. Restart iSCSI:

    systemctl restart iscsid.service
  13. Create iSCSI interfaces:

    iscsiadm -m iface -I <iscsi_eth_name_1> --op new
    iscsiadm -m iface -I <iscsi_eth_name_2> --op new

    Specify:

    • <iscsi_eth_name_1> — name of the first iSCSI interface;
    • <iscsi_eth_name_2> — name of the second iSCSI interface.
  14. Bind the iSCSI interfaces to the network interfaces you configured in step 3:

    iscsiadm -m iface --interface <iscsi_eth_name_1> --op update -n iface.net_ifacename -v <eth_name_1>
    iscsiadm -m iface --interface <iscsi_eth_name_2> --op update -n iface.net_ifacename -v <eth_name_2>

    Specify:

    • <iscsi_eth_name_1> — name of the first iSCSI interface you created in step 13;
    • <iscsi_eth_name_2> — name of the second iSCSI interface you created in step 13;
    • <eth_name_1> — the name of the first network interface you configured in step 3;
    • <eth_name_2> — the name of the second network interface you configured in step 3.
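
    Taken together, steps 13 and 14 might look like this with hypothetical names, where the iSCSI interfaces iface1 and iface2 are bound to the network interfaces eth2 and eth3:

    iscsiadm -m iface -I iface1 --op new
    iscsiadm -m iface -I iface2 --op new
    iscsiadm -m iface --interface iface1 --op update -n iface.net_ifacename -v eth2
    iscsiadm -m iface --interface iface2 --op update -n iface.net_ifacename -v eth3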
  15. Check the availability of the iSCSI targets through the iSCSI interfaces:

    iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_1> --interface <iscsi_eth_name_1>
    iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_2> --interface <iscsi_eth_name_2>

    Specify:

    • <iscsi_target_ip_address_1> — IP address of the first iSCSI target;
    • <iscsi_target_ip_address_2> — IP address of the second iSCSI target;
    • <iscsi_eth_name_1> — name of the first iSCSI interface you created in step 13;
    • <iscsi_eth_name_2> — name of the second iSCSI interface you created in step 13.

    A list of iSCSI targets will appear in the response. For example:

    10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target
    10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target

    Here:

    • 10.100.1.2:3260 — IP address and port of the first iSCSI target;
    • iqn.2003-01.com.redhat.iscsi-gw:workshop-target — IQN of the first iSCSI target. The IQN (iSCSI Qualified Name) is the full unique identifier of the iSCSI device;
    • 10.100.1.6:3260 — IP address and port of the second iSCSI target;
    • iqn.2003-01.com.redhat.iscsi-gw:workshop-target — IQN of the second iSCSI target.
  16. Configure CHAP authentication on the iSCSI initiator:

    iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.authmethod --value CHAP
    iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.authmethod --value CHAP
    iscsiadm --mode node -T <iqn> --op update -n node.session.auth.username --value <username>
    iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.password --value <password>
    iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.password --value <password>

    Specify:

    • <iscsi_target_ip_address_1> — IP address of the first iSCSI target;
    • <iscsi_target_ip_address_2> — IP address of the second iSCSI target;
    • <iqn> — IQN of the first and second iSCSI targets. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Target name;
    • <username> — username for authorizing the iSCSI initiator: same block, field Username;
    • <password> — password for authorizing the iSCSI initiator: same block, field Password.
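
    To confirm that the settings were written, you can print a node record and filter for the auth parameters (same placeholders as above):

    iscsiadm -m node -T <iqn> -p <iscsi_target_ip_address_1> | grep node.session.auth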
  17. Log in to the iSCSI targets through the iSCSI interfaces:

    iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_1> --login --interface <iscsi_eth_name_1>
    iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_2> --login --interface <iscsi_eth_name_2>

    Specify:

    • <iqn> — IQNs of the first and second iSCSI target;
    • <iscsi_target_ip_address_1> — IP address of the first iSCSI target;
    • <iscsi_target_ip_address_2> — IP address of the second iSCSI target;
    • <iscsi_eth_name_1> — name of the first iSCSI interface;
    • <iscsi_eth_name_2> — name of the second iSCSI interface.
  18. Verify that the iSCSI session for each iSCSI target has started:

    iscsiadm -m session

    Two active iSCSI sessions will appear in the response. For example:

    tcp: [1] 10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)
    tcp: [3] 10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)

    Here [1] and [3] are the iSCSI session numbers.
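
    If you need more detail, for example which block devices each session exposes, you can raise the print level; level 3 also lists the attached SCSI devices:

    iscsiadm -m session -P 3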

  19. Enable automatic connection to the iSCSI targets after a server restart by setting the node.startup parameter to automatic:

    iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_1> --op update -n node.startup -v automatic
    iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_2> --op update -n node.startup -v automatic
    systemctl enable iscsid.service
    systemctl restart iscsid.service

    Specify:

    • <iqn> — IQNs of the first and second iSCSI target;
    • <iscsi_target_ip_address_1> — IP address of the first iSCSI target;
    • <iscsi_target_ip_address_2> — IP address of the second iSCSI target.
  20. Optional: reboot the server.

4. Configure MPIO

Multipath I/O (MPIO) provides multiple I/O paths to the network disk to improve the fault tolerance of data transfer.

  1. Update the package lists and upgrade installed packages:

    apt update
    apt upgrade
  2. Install multipath-tools:

    apt install multipath-tools
  3. Open the /etc/multipath.conf configuration file with the vi text editor:

    vi /etc/multipath.conf
  4. Add the following parameters to the configuration file:

    defaults {
        user_friendly_names yes
        find_multipaths yes
    }

    blacklist {
    }
  5. Exit the vi text editor with your changes saved:

    :wq
  6. Apply the configuration by restarting multipath:

    systemctl restart multipathd
  7. Check the network disk connection and make sure that the policy parameter is set to service-time 0:

    multipath -ll

    The command output will display information about devices, paths, and current policy. For example:

    mpatha (3600140530fab7e779fa41038a0a08f8e) dm-0 LIO-ORG,TCMU device
    size=20G features='0' hwhandler='1 alua' wp=rw
    |-+- policy='service-time 0' prio=10 status=active
    | `- 8:0:0:0 sdc 8:32 active ready running
    `-+- policy='service-time 0' prio=10 status=enabled
      `- 9:0:0:0 sdd 8:48 active ready running
  8. Make sure the bindings file has information about the WWID of the block device:

    cat /etc/multipath/bindings

    The command output will display information about the WWID of the block device. For example:

    # Format:
    # alias wwid
    #
    mpatha 3600140530fab7e779fa41038a0a08f8e
  9. Make sure that the wwids file has information about the WWID of the block device:

    cat /etc/multipath/wwids

    The command output will display information about the WWID of the block device. For example:

    # Valid WWIDs:
    /3600140530fab7e779fa41038a0a08f8e/
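
You can additionally verify that both paths lead to a single multipath-mapped block device. A quick check with lsblk, assuming the device names from the example output above (mpatha, sdc, sdd):

    lsblk /dev/mapper/mpatha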

5. Add the disk to Proxmox VE

  1. In your browser, open the page:

    https://<ip_address>:8006

    Specify <ip_address> — public IP address of the server. You can copy it in the control panel: in the top menu, click Products → Dedicated Servers → server page → Operating System tab → copy the value in the IP field.

  2. In the menu on the left, go to Datacenter → Storage.

  3. Click Add and select iSCSI.

  4. In the ID field, enter the name of the connection.

  5. In the Portal field, enter the IP address of the iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target.

  6. In the Target field, select the IQN of the iSCSI target. The IQN (iSCSI Qualified Name) is the full unique identifier of the iSCSI device.

  7. If there is no iSCSI target IQN in the Target field, add it manually:

    7.1 Open the /etc/pve/storage.cfg configuration file with the vi text editor:

    vi /etc/pve/storage.cfg

    7.2 Add two connections:

    iscsi: <iscsi_target_name_1>
        portal <iscsi_target_ip_address_1>
        target <iqn>
        content none

    iscsi: <iscsi_target_name_2>
        portal <iscsi_target_ip_address_2>
        target <iqn>
        content none

    Specify:

    • <iscsi_target_name_1> — name of the first connection;
    • <iscsi_target_ip_address_1> — IP address of the first iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 1;
    • <iscsi_target_name_2> — name of the second connection;
    • <iscsi_target_ip_address_2> — IP address of the second iSCSI target: same block, field IP address of the iSCSI target 2;
    • <iqn> — IQN of the iSCSI targets: same block, field Target name.

    7.3 Exit the vi text editor with the changes saved:

    :wq
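
    A filled-in sketch of the two entries, using hypothetical connection names and the example target addresses and IQN shown in section 3:

    iscsi: san-target-1
        portal 10.100.1.2
        target iqn.2003-01.com.redhat.iscsi-gw:workshop-target
        content none

    iscsi: san-target-2
        portal 10.100.1.6
        target iqn.2003-01.com.redhat.iscsi-gw:workshop-target
        content none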
  8. Check the Enabled checkbox.

  9. Check the Use LUNs directly checkbox.

  10. Click Add.

  11. In the menu on the left, go to Datacenter → Storage.

  12. Click Add and select LVM.

  13. In the ID field, enter the name of the volume.

  14. In the Base storage field, select the connection name you specified in step 4.

  15. In the Base volume field, select the network drive.

  16. In the Volume group field, enter the name of the volume group.

  17. Check the Enable checkbox.

  18. Check the Shared checkbox.

  19. Click Add.
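
To verify the result from the server shell, you can list the storage status with Proxmox VE's storage manager; the iSCSI and LVM entries you added should be reported as active:

    pvesm status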

6. Optional: connect the network disk to another server

  1. Connect the network disk to the server in the control panel.
  2. Connect the network disk to the server in the server OS.
  3. Configure MPIO.
  4. Add the disk to Proxmox VE.