Connect a network drive to the server

A network disk is scalable external network block storage with triple data replication. Triple replication of disk volumes ensures high data integrity. Network disks are suitable for rapid scaling of server disk space.

Network disks are available for connection to dedicated servers in the MSK-1 pool. You can connect a network disk to dedicated servers of a ready-made configuration marked with the corresponding tag, as well as to dedicated servers of arbitrary configuration with an additional 2 × 10 GE network card and a 10 Gbps Network Disk SAN connection.

If you do not have a network disk, create one and create a SAN for the availability zone.

  1. Connect the network disk to the server in the control panel.
  2. Connect the network disk to the server in the server OS.
  3. Check the MPIO settings.

1. Connect the network disk to the server in the control panel

  1. In the control panel, in the top menu, click Products and select Dedicated Servers.
  2. Open the server page → Network Disks tab.
  3. Click Connect Network Disk.
  4. Select a network disk.
  5. Click Connect.

2. Connect the network disk to the server in the server OS

You can connect a network disk to the server manually or with a ready-made script that is generated in the control panel. The script can be used only on Ubuntu. The manual steps below are also written for Ubuntu.

  1. Connect to the server via SSH or via KVM console.

  2. Open the netplan utility configuration file with the vi text editor:

    vi /etc/netplan/50-cloud-init.yaml
  3. On the network interfaces connected to the SAN switch, add IP addresses and routes to reach the iSCSI targets (a filled-in example follows the list of placeholders below):

    <eth_name_1>:
      addresses:
        - <ip_address_1>
      routes:
        - to: <destination_subnet_1>
          via: <next_hop_1>
    <eth_name_2>:
      addresses:
        - <ip_address_2>
      routes:
        - to: <destination_subnet_2>
          via: <next_hop_2>

    Specify:

    • <eth_name_1> — name of the first network interface; it is configured on the first port of the network card;
    • <eth_name_2> — name of the second network interface; it is configured on the second port of the network card;
    • <ip_address_1> — IP address of the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Port IP address;
    • <ip_address_2> — IP address of the second port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Port IP address;
    • <destination_subnet_1> — destination subnet for the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Destination Subnet;
    • <destination_subnet_2> — destination subnet for the second port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Destination Subnet;
    • <next_hop_1> — gateway for the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Next hop (gateway);
    • <next_hop_2> — gateway for the second port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Next hop (gateway).
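
    A filled-in fragment might look like this; the interface names, addresses, subnets, and gateways below are hypothetical placeholders chosen to match the sample targets shown later, so substitute the values from your control panel:

    ens2f0:
      addresses:
        - 10.99.0.2/30
      routes:
        - to: 10.100.1.0/30
          via: 10.99.0.1
    ens2f1:
      addresses:
        - 10.99.0.6/30
      routes:
        - to: 10.100.1.4/30
          via: 10.99.0.5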
  4. Exit the vi text editor with your changes saved:

    :wq
  5. Apply the configuration:

    netplan apply
  6. Print the information about the network interfaces and verify that they are configured correctly:

    ip a
  7. Optional: reboot the server.

  8. Check the speed of each network interface. It must be at least 10 Gbps:

    ethtool <eth_name_1> | grep -i speed
    ethtool <eth_name_2> | grep -i speed

    Specify <eth_name_1> and <eth_name_2> as the names of the network interfaces configured in step 3.
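
    On a correctly connected 10 GE port, ethtool usually reports the link speed like this:

    Speed: 10000Mb/s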

  9. If the speed is below 10 Gbps, create a ticket.

  10. Verify that the iSCSI target is available:

    ping -c5 <iscsi_target_ip_address_1>
    ping -c5 <iscsi_target_ip_address_2>

    Specify:

    • <iscsi_target_ip_address_1> — IP address of the first iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 1;
    • <iscsi_target_ip_address_2> — IP address of the second iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 2.
  11. Enter the name of the iSCSI initiator:

    vi /etc/iscsi/initiatorname.iscsi
    InitiatorName=<initiator_name>

    Specify <initiator_name> — iSCSI initiator name. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Initiator name.
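
    For example, with a hypothetical initiator name the file would contain a single line:

    InitiatorName=iqn.2003-01.com.example:server-01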

  12. Restart iSCSI:

    systemctl restart iscsid.service
    systemctl restart multipathd.service
  13. Create iSCSI interfaces:

    iscsiadm -m iface -I <iscsi_eth_name_1> --op new
    iscsiadm -m iface -I <iscsi_eth_name_2> --op new

    Specify:

    • <iscsi_eth_name_1> — name of the first iSCSI interface;
    • <iscsi_eth_name_2> — name of the second iSCSI interface.
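
    For example, with hypothetical interface names iface0 and iface1:

    iscsiadm -m iface -I iface0 --op new
    iscsiadm -m iface -I iface1 --op new

    iscsiadm should confirm each creation with a message like New interface iface0 added.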
  14. Bind the iSCSI interfaces to the network interfaces you configured in step 3:

    iscsiadm -m iface --interface <iscsi_eth_name_1> --op update -n iface.net_ifacename -v <eth_name_1>
    iscsiadm -m iface --interface <iscsi_eth_name_2> --op update -n iface.net_ifacename -v <eth_name_2>

    Specify:

    • <iscsi_eth_name_1> — name of the first iSCSI interface you created in step 13;
    • <iscsi_eth_name_2> — name of the second iSCSI interface you created in step 13;
    • <eth_name_1> — the name of the first network interface you configured in step 3;
    • <eth_name_2> — the name of the second network interface you configured in step 3.
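
    To make sure the binding took effect, you can print the interface record; <iscsi_eth_name_1> here is the same iSCSI interface name as above:

    iscsiadm -m iface -I <iscsi_eth_name_1>

    The iface.net_ifacename field in the output should contain the name of the bound network interface.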
  15. Check the availability of the iSCSI target through the iSCSI interfaces:

    iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_1> --interface <iscsi_eth_name_1>
    iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_2> --interface <iscsi_eth_name_2>

    Specify:

    • <iscsi_target_ip_address_1> — IP address of the first iSCSI target;
    • <iscsi_target_ip_address_2> — IP address of the second iSCSI target;
    • <iscsi_eth_name_1> — name of the first iSCSI interface you created in step 13;
    • <iscsi_eth_name_2> — name of the second iSCSI interface you created in step 13.

    A list of iSCSI targets will appear in the response. For example:

    10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target
    10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target

    Here:

    • 10.100.1.2:3260 — IP address of the first iSCSI target;
    • iqn.2003-01.com.redhat.iscsi-gw:workshop-target — IQN of the first iSCSI target. The IQN (iSCSI Qualified Name) is the full unique identifier of the iSCSI device;
    • 10.100.1.6:3260 — IP address of the second iSCSI target;
    • iqn.2003-01.com.redhat.iscsi-gw:workshop-target — IQN of the second iSCSI target.
  16. Configure CHAP authentication on the iSCSI initiator:

    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.authmethod --value CHAP
    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.authmethod --value CHAP
    iscsiadm --mode node -T <IQN> --op update -n node.session.auth.username --value <username>
    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.password --value <password>
    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.password --value <password>

    Specify:

    • <iscsi_target_ip_address_1> — IP address of the first iSCSI target;
    • <iscsi_target_ip_address_2> — IP address of the second iSCSI target;
    • <IQN> — IQN of the first and second iSCSI targets. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Target name;
    • <username> — username for authorization of the iSCSI initiator. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Username;
    • <password> — password for authorization of the iSCSI initiator. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Password.
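
    To confirm that the settings were saved, you can print the node record and filter the authentication parameters; <IQN> and <iscsi_target_ip_address_1> are the same values as above:

    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> | grep node.session.auth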
  17. Log in to the iSCSI targets through the iSCSI interfaces:

    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --login --interface <iscsi_eth_name_1>
    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --login --interface <iscsi_eth_name_2>

    Specify:

    • <IQN> — IQN of the first and second iSCSI targets;
    • <iscsi_target_ip_address_1> — IP address of the first iSCSI target;
    • <iscsi_target_ip_address_2> — IP address of the second iSCSI target;
    • <iscsi_eth_name_1> — name of the first iSCSI interface;
    • <iscsi_eth_name_2> — name of the second iSCSI interface.
  18. Verify that the iSCSI session for each iSCSI target has started:

    iscsiadm -m session

    Two active iSCSI sessions will appear in the response. For example:

    tcp: [1] 10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)
    tcp: [3] 10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)

    Here [1] and [3] are the iSCSI session numbers.
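
    If you need more detail, for example which block devices are attached to each session, you can raise the print level:

    iscsiadm -m session -P 3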

  19. Enable automatic disk mount when the server restarts by setting the node.startup parameter to automatic:

    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --op update -n node.startup -v automatic
    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --op update -n node.startup -v automatic
    systemctl enable iscsid.service
    systemctl restart iscsid.service

    Specify:

    • <IQN> — IQN of the first and second iSCSI targets;
    • <iscsi_target_ip_address_1> — IP address of the first iSCSI target;
    • <iscsi_target_ip_address_2> — IP address of the second iSCSI target.
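
    To double-check, print the node record and make sure that node.startup is set to automatic:

    iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> | grep node.startup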
  20. Optional: reboot the server.

3. Check the MPIO settings

Multipath I/O (MPIO) is multi-path input/output that improves the fault tolerance of data transfer to a network disk.

In Ubuntu, MPIO is configured by default; you only need to check the settings.
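
Before checking the configuration files, you can make sure that the multipath daemon is running:

    systemctl is-active multipathd.service

The command should print active.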

  1. Open the configuration file of the Device Mapper Multipath utility with the vi text editor:

    vi /etc/multipath.conf
  2. Make sure that the /etc/multipath.conf file contains only the following lines:

    defaults {
    user_friendly_names yes
    }
  3. Make sure the bindings file has information about the WWID of the block device:

    cat /etc/multipath/bindings

    The command output will display information about the WWID of the block device. For example:

    # Format:
    # alias wwid
    #
    mpatha 3600140530fab7e779fa41038a0a08f8e
  4. Make sure that the wwids file has information about the WWID of the block device:

    cat /etc/multipath/wwids

    The command output will display information about the WWID of the block device. For example:

    # Valid WWIDs:
    /3600140530fab7e779fa41038a0a08f8e/
  5. Check the network disk connection and make sure that the policy parameter is set to service-time 0:

    multipath -ll

    The command output will display information about devices, paths, and current policy. For example:

    mpatha (3600140530fab7e779fa41038a0a08f8e) dm-0 LIO-ORG,TCMU device
    size=20G features='0' hwhandler='1 alua' wp=rw
    |-+- policy='service-time 0' prio=10 status=active
    | `- 8:0:0:0 sdc 8:32 active ready running
    `-+- policy='service-time 0' prio=10 status=enabled
      `- 9:0:0:0 sdd 8:48 active ready running
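
    As a final check, you can make sure the disk is visible to the system as a single multipath block device:

    lsblk

    In the output, both paths (sdc and sdd in the example above) should have the same child device, and the disk will be available as /dev/mapper/mpatha.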