
Connect a network disk to a dedicated Linux server

Network disks are available for connection to dedicated servers in the MSK-1 pool. You can connect a network disk to dedicated servers of:

  • a ready configuration with the You can connect network disks tag;
  • a custom configuration with the optional 2 × 10 GE NIC + 10 Gbps Network Disk SAN connection.

You can view information about server ports in the control panel: from the top menu, click Products → Dedicated Servers → Servers → server page → Ports tab.

To connect a network disk to a server, complete the following steps:

  1. Create a SAN.
  2. Connect the network disk to the server in the control panel.
  3. Connect the network disk to the server in the server OS.
  4. Configure MPIO.
  5. Optional: connect the network disk to another server.
  6. Prepare the network disk for operation.

1. Create a SAN

  1. In the Control Panel, on the top menu, click Products and select Dedicated Servers.
  2. Go to Network Disks and Storage → Network Disks tab.
  3. Open the SAN tab.
  4. Click Add SAN.
  5. Select an availability zone.
  6. Enter a subnet or leave the subnet that is generated by default. The subnet must belong to the private address range 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16 and must not already be in use in your infrastructure.
  7. Click Create SAN.

2. Connect the network disk to the server in the control panel

  1. In the Control Panel, on the top menu, click Products and select Dedicated Servers.

  2. Go to Network Disks and Storage → Network Disks tab.

  3. Open the disk page → Server Connection tab.

  4. In the Server field, click Select.

  5. Select the server to which the network disk will be connected.

  6. Click Connect.

  7. If you are connecting a network disk to a server with a private network, configure the network:

    7.1. Select a VLAN.

    7.2. Enter a CIDR. The subnet must belong to the private address range 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16 and must not already be in use in your infrastructure.

    7.3. Enter the Next hop 1 and Next hop 2 addresses from the selected private subnet.

    7.4. Click Customize.

3. Connect the network disk to the server in the server OS

You can connect a network disk to the server manually or with a ready-made script that is generated in the control panel. The script can be used only on Ubuntu.

You can connect the network disk over a SAN or over a private network.

The process of connecting a network disk in the server OS through a private subnet depends on the number of ports:

  • if the server has only one local port or MC-LAG is configured, use the instructions for a single port;

  • if the server has two local ports, use the instructions for two ports.

    1. Connect to the server via SSH or via the KVM console.

    2. Open the netplan utility configuration file with the vi text editor:

      vi /etc/netplan/50-cloud-init.yaml
    3. On the network interfaces connected to the SAN switch, add IP addresses and routes to reach the iSCSI targets:

      <eth_name_1>:
        addresses:
          - <ip_address_1>
        routes:
          - to: <destination_subnet_1>
            via: <next_hop_1>
      <eth_name_2>:
        addresses:
          - <ip_address_2>
        routes:
          - to: <destination_subnet_2>
            via: <next_hop_2>

      Specify:

      • <eth_name_1> — name of the first network interface; it is configured on the first port of the network card;
      • <ip_address_1> — IP address of the first port of the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Port IP address;
      • <destination_subnet_1> — destination subnet for the first port of the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Destination Subnet;
      • <next_hop_1> — gateway for the first port of the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Next hop (gateway);
      • <eth_name_2> — name of the second network interface; it is configured on the second port of the network card;
      • <ip_address_2> — IP address of the second port of the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Port IP address;
      • <destination_subnet_2> — destination subnet for the second port of the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Destination Subnet;
      • <next_hop_2> — gateway for the second port of the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Next hop (gateway).
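
      For reference, a filled-in fragment might look like the sketch below. All interface names, addresses, subnets, and gateways here are made-up examples; substitute the values shown in your control panel. The fragment goes under the existing ethernets section of the file:

      # Example only: replace every value with the data from the control panel
      eth2:
        addresses:
          - 10.100.2.2/30
        routes:
          - to: 10.100.1.0/30
            via: 10.100.2.1
      eth3:
        addresses:
          - 10.100.2.6/30
        routes:
          - to: 10.100.1.4/30
            via: 10.100.2.5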
    4. Exit the vi text editor with your changes saved:

      :wq
    5. Apply the configuration:

      netplan apply
    6. Print the information about the network interfaces and verify that they are configured correctly:

      ip a
    7. Optional: reboot the server.

    8. Verify that the speed of each interface is at least 10 Gbps:

      ethtool <eth_name_1> | grep -i speed
      ethtool <eth_name_2> | grep -i speed

      Specify <eth_name_1> and <eth_name_2> as the names of the network interfaces you configured in step 3.
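
      On a 10 GE link, ethtool reports a line such as Speed: 10000Mb/s. As a convenience, a short loop (a sketch; the interface names are the placeholders from step 3) checks both ports at once:

      for ifname in <eth_name_1> <eth_name_2>; do
          echo "== ${ifname} =="
          ethtool "${ifname}" | grep -i speed    # expect "Speed: 10000Mb/s" on a 10 GE link
      done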

    9. If the speed is below 10 Gbps, create a ticket.

    10. Verify that the iSCSI targets are available:

      ping -c5 <iscsi_target_ip_address_1>
      ping -c5 <iscsi_target_ip_address_2>

      Specify:

      • <iscsi_target_ip_address_1> — IP address of the first iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 1;
      • <iscsi_target_ip_address_2> — IP address of the second iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 2.
    11. Enter the name of the iSCSI initiator:

      vi /etc/iscsi/initiatorname.iscsi
      InitiatorName=<initiator_name>

      Specify <initiator_name> — the iSCSI initiator name. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Initiator name.
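
      To confirm that the initiator name was saved, you can print the file:

      cat /etc/iscsi/initiatorname.iscsi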

    12. Restart iSCSI:

      systemctl restart iscsid.service
      systemctl restart multipathd.service
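
      To make sure both services are running, you can check their state:

      systemctl is-active iscsid.service multipathd.service

      Both lines in the output should read active.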
    13. Create iSCSI interfaces:

      iscsiadm -m iface -I <iscsi_eth_name_1> --op new
      iscsiadm -m iface -I <iscsi_eth_name_2> --op new

      Specify:

      • <iscsi_eth_name_1> — name of the first iSCSI interface;
      • <iscsi_eth_name_2> — name of the second iSCSI interface.
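
      To verify that the interfaces were created, you can list them:

      iscsiadm -m iface

      The names of the new iSCSI interfaces should appear in the output.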
    14. Bind iSCSI interfaces to network interfaces:

      iscsiadm -m iface --interface <iscsi_eth_name_1> --op update -n iface.net_ifacename -v <eth_name_1>
      iscsiadm -m iface --interface <iscsi_eth_name_2> --op update -n iface.net_ifacename -v <eth_name_2>

      Specify:

      • <iscsi_eth_name_1> — name of the first iSCSI interface you created in step 13;
      • <eth_name_1> — the name of the first network interface you configured in step 3;
      • <iscsi_eth_name_2> — name of the second iSCSI interface you created in step 13;
      • <eth_name_2> — the name of the second network interface you configured in step 3.
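
      To check that each iSCSI interface is bound to the correct network interface, you can print its record:

      iscsiadm -m iface -I <iscsi_eth_name_1> | grep iface.net_ifacename
      iscsiadm -m iface -I <iscsi_eth_name_2> | grep iface.net_ifacename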
    15. Check the availability of the iSCSI target through the iSCSI interfaces:

      iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_1> --interface <iscsi_eth_name_1>
      iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_2> --interface <iscsi_eth_name_2>

      Specify:

      • <iscsi_target_ip_address_1> — IP address of the first iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 1;
      • <iscsi_eth_name_1> — name of the first iSCSI interface you created in step 13;
      • <iscsi_target_ip_address_2> — IP address of the second iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 2;
      • <iscsi_eth_name_2> — name of the second iSCSI interface you created in step 13.

      A list of iSCSI targets will appear in the response. For example:

      10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target
      10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target

      Here:

      • 10.100.1.2:3260 — IP address of the first iSCSI target;
      • iqn.2003-01.com.redhat.iscsi-gw:workshop-target — IQN of the first iSCSI target. The IQN (iSCSI Qualified Name) is the full unique identifier of the iSCSI device;
      • 10.100.1.6:3260 — IP address of the second iSCSI target;
      • iqn.2003-01.com.redhat.iscsi-gw:workshop-target — IQN of the second iSCSI target.
    16. Configure CHAP authentication on the iSCSI initiator:

      iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.authmethod --value CHAP
      iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.authmethod --value CHAP
      iscsiadm --mode node -T <iqn> --op update -n node.session.auth.username --value <username>
      iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.password --value <password>
      iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.password --value <password>

      Specify:

      • <iqn> — IQNs of the first and second iSCSI targets. You can view them in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Target name;
      • <iscsi_target_ip_address_1> — IP address of the first iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 1;
      • <iscsi_target_ip_address_2> — IP address of the second iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 2;
      • <username> — username for authorization of the iSCSI initiator. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Username;
      • <password> — password for authorization of the iSCSI initiator. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Password.
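
      To review the authentication settings written to the node records, you can print them:

      iscsiadm -m node -T <iqn> -p <iscsi_target_ip_address_1> | grep node.session.auth
      iscsiadm -m node -T <iqn> -p <iscsi_target_ip_address_2> | grep node.session.auth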
    17. Log in to the iSCSI targets through the iSCSI interfaces:

      iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_1> --login --interface <iscsi_eth_name_1>
      iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_2> --login --interface <iscsi_eth_name_2>

      Specify:

      • <iqn> — IQNs of the first and second iSCSI targets. You can view them in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Target name;
      • <iscsi_target_ip_address_1> — IP address of the first iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 1;
      • <iscsi_eth_name_1> — name of the first iSCSI interface you created in step 13;
      • <iscsi_target_ip_address_2> — IP address of the second iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 2;
      • <iscsi_eth_name_2> — name of the second iSCSI interface you created in step 13.
    18. Verify that the iSCSI session for each iSCSI target has started:

      iscsiadm -m session

      Two active iSCSI sessions will appear in the response. For example:

      tcp: [1] 10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)
      tcp: [3] 10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)

      Here [1] and [3] are the iSCSI session numbers.
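
      For more detail, such as which network interface each session uses and which block devices are attached, you can raise the print level:

      iscsiadm -m session -P 3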

    19. Enable automatic disk mount when the server restarts by setting the node.startup parameter to automatic:

      iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_1> --op update -n node.startup -v automatic
      iscsiadm --mode node -T <iqn> -p <iscsi_target_ip_address_2> --op update -n node.startup -v automatic
      systemctl enable iscsid.service
      systemctl restart iscsid.service

      Specify:

      • <iqn> — IQNs of the first and second iSCSI targets. You can view them in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Target name;
      • <iscsi_target_ip_address_1> — IP address of the first iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 1;
      • <iscsi_target_ip_address_2> — IP address of the second iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 2.
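
      To confirm the setting, you can check the node records:

      iscsiadm -m node -T <iqn> -p <iscsi_target_ip_address_1> | grep node.startup
      iscsiadm -m node -T <iqn> -p <iscsi_target_ip_address_2> | grep node.startup

      Each command should print node.startup = automatic.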
    20. Optional: reboot the server.

4. Configure MPIO

Multipath I/O (MPIO) provides multiple I/O paths to a network disk to improve the fault tolerance of data transfer.

In Ubuntu, MPIO is configured by default; verify the settings.

  1. Open the configuration file of the Device Mapper Multipath utility with the vi text editor:

    vi /etc/multipath.conf
  2. Make sure that the /etc/multipath.conf file contains only the following lines:

    defaults {
        user_friendly_names yes
    }
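
    If you edited /etc/multipath.conf, you can apply the changes by restarting the service, for example:

    systemctl restart multipathd.service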
  3. Make sure the bindings file has information about the WWID of the block device:

    cat /etc/multipath/bindings

    The response will display the WWID information of the block device. For example:

    # Format:
    # alias wwid
    #
    mpatha 3600140530fab7e779fa41038a0a08f8e
  4. Make sure that the wwids file has information about the WWID of the block device:

    cat /etc/multipath/wwids

    The response will display the WWID information of the block device. For example:

    # Valid WWIDs:
    /3600140530fab7e779fa41038a0a08f8e/
  5. Check the network disk connection and make sure that the policy parameter is set to service-time 0:

    multipath -ll

    The response displays information about devices, paths, and the current policy. For example:

    mpatha (3600140530fab7e779fa41038a0a08f8e) dm-0 LIO-ORG,TCMU device
    size=20G features='0' hwhandler='1 alua' wp=rw
    |-+- policy='service-time 0' prio=10 status=active
    | `- 8:0:0:0 sdc 8:32 active ready running
    `-+- policy='service-time 0' prio=10 status=enabled
      `- 9:0:0:0 sdd 8:48 active ready running

5. Optional: connect the network disk to another server

  1. Connect the network disk to the server in the control panel.
  2. Connect the network disk to the server in the server OS.
  3. Configure MPIO.

6. Prepare the network disk for operation

You can format the network disk that you connected to the server with the desired file system:

  • A Cluster File System (CFS) is a file system that allows multiple servers (nodes) to work with the same data on shared storage simultaneously. Examples of cluster file systems:

    • GFS2 (Global File System 2); for more details, see the GFS2 Overview article in the official Red Hat documentation;
    • OCFS2 (Oracle Cluster File System 2); for more details, see the official Oracle Linux documentation.
  • Logical Volume Manager (LVM) is storage virtualization software designed for flexible management of physical storage devices. For more details, see the Configuring and managing logical volumes instructions in the official Red Hat documentation;

  • A standard file system such as ext4 or XFS. Note that in read-write mode such a file system can be used on only one server at a time to avoid data corruption; cluster file systems are recommended when multiple servers need shared access (see the example after this list);

  • VMFS (VMware File System) is a clustered file system used by VMware ESXi to store virtual machine files. It supports storage sharing among multiple ESXi hosts. VMFS automatically manages locks, preventing simultaneous changes to virtual machine files, which ensures data integrity. Learn more in the VMware vSphere VMFS manual in the official VMware Storage documentation.
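
For a single server, a minimal sketch of formatting and mounting the disk is shown below. It assumes the multipath alias is mpatha, as in the multipath -ll output above, and uses ext4; the mount point is an arbitrary example.

  # Format the multipath device with ext4 (this destroys any data already on the disk)
  mkfs.ext4 /dev/mapper/mpatha

  # Create a mount point and mount the file system
  mkdir -p /mnt/netdisk
  mount /dev/mapper/mpatha /mnt/netdisk

  # Check that the file system is mounted
  df -h /mnt/netdisk

  # To mount the disk automatically at boot, add an /etc/fstab entry with the _netdev option.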