Connect the storage LUN to the server

Once the LUN is connected, the storage will be available on the server as an unpartitioned disk area.

Data exchange between the storage LUN and the server is performed over the iSCSI protocol using two independent network interfaces. The LUN acts as an iSCSI target connected to the SAN switch, and the server acts as an iSCSI initiator.

Read more about iSCSI connectivity in Selectel's blog article iSCSI: How the protocol for network storage organization works.

You must connect the storage LUN separately to each server.

  1. Make sure you have requested that the storage LUN be connected to this server.
  2. Connect to the server.
  3. Install the iSCSI initiator. If Windows is installed on the server, go to step 4.
  4. Retrieve the iSCSI initiator information.
  5. Request parameters for connecting the storage LUN to the server.
  6. Configure the iSCSI connection.
  7. Configure MPIO.

Check the request to connect the storage LUN to the server

Check that, when ordering the service, you requested in the ticket that the storage LUN be connected to this server.

If you have not requested the connection to this server, file a ticket. In the ticket, specify the UUID or IP address of the server. You can find it in the control panel under Servers and hardware → Servers → the server page → the Operating system tab → the IP field.

Connect to the server
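
Connect to the server over SSH. A minimal illustrative example; replace <server_ip> with the server's IP address and use an account that can run the commands below as root:

    ssh root@<server_ip>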

Install the iSCSI initiator

apt-get update && apt-get install open-iscsi multipath-tools
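
The command above assumes a Debian or Ubuntu based system. A quick, illustrative way to confirm that both packages are installed:

    dpkg -l open-iscsi multipath-tools
    iscsiadm --version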

Retrieve the iSCSI initiator information

cat /etc/iscsi/initiatorname.iscsi
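
The file contains the initiator's IQN (iSCSI Qualified Name). An illustrative example of the output; the actual value on your server will differ:

    InitiatorName=iqn.1993-08.org.debian:01:60f3f3bbed3a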

Request parameters for connecting the storage LUN to the server

Create a ticket. In the ticket, specify the iSCSI initiator information that you retrieved in the previous step. Request the network settings for the iSCSI targets and the CHAP authentication settings:

  • IP addresses of the iSCSI targets that are connected to the SAN switch;
  • IP addresses to be configured on the servers to connect to the iSCSI targets;
  • user name (login) and password for CHAP authentication — the same pair is used for all servers.

Wait for a Selectel employee to respond to this ticket.

Configure the iSCSI connection

All iSCSI connection settings are stored in the iSCSI initiator database in the /var/lib/iscsi directory.
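
To see what the initiator has stored at any point during the configuration, you can list that directory (illustrative; its subdirectories are filled in as you complete the steps below):

    ls -R /var/lib/iscsi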

  1. Open the netplan configuration file in the vi text editor:

    vi /etc/netplan/01-netcfg.yaml
  2. Configure two network interfaces on the server. Add IP addresses to the network interfaces connected to the SAN switch to access the iSCSI targets. Nest the entries under the ethernets section of the file:

        <eth_name_1>:
          addresses: [<ip_address_1>/<mask_1>]
        <eth_name_2>:
          addresses: [<ip_address_2>/<mask_2>]

    Specify:

    • <eth_name_1> — name of the first network interface;
    • <eth_name_2> — name of the second network interface;
    • <ip_address_1> — IP address of the first server network adapter in the segment for iSCSI. You can see it in the ticket;
    • <mask_1> — subnet mask of the first server network adapter in the segment for iSCSI. You can see it in the ticket;
    • <ip_address_2> — IP address of the second server network adapter in the segment for iSCSI. You can see it in the ticket;
    • <mask_2> — subnet mask of the second server network adapter in the segment for iSCSI. You can see it in the ticket.
  3. Press the Esc key.

  4. Exit the vi text editor with your changes saved:

    :wq
  5. Apply the configuration:

    netplan apply
  6. Optional: reboot the server.

  7. Check the speed of each interface. It must be at least 10 Gbps:

    ethtool <eth_name_1> | grep -i speed
    ethtool <eth_name_2> | grep -i speed

    Specify <eth_name_1> and <eth_name_2> — the names of the network interfaces you configured in step 2.

  8. If the speed is below 10 Gbps, create a ticket. If the speed is greater than or equal to 10 Gbps, go to step 9.

  9. Verify that the iSCSI targets are available:

    ping -c5 <ip_address_1>
    ping -c5 <ip_address_2>

    Specify:

    • <ip_address_1> — IP address of the first iSCSI target. You can see it in the ticket;
    • <ip_address_2> — IP address of the second iSCSI target. You can see it in the ticket.
  10. Create iSCSI interfaces:

    iscsiadm -m iface -I <iscsi_eth_name_1> --op new
    iscsiadm -m iface -I <iscsi_eth_name_2> --op new

    Specify:

    • <iscsi_eth_name_1> — name of the first iSCSI interface;
    • <iscsi_eth_name_2> — name of the second iSCSI interface.
  11. Bind the iSCSI interfaces to the network interfaces configured in step 2:

    iscsiadm -m iface --interface <iscsi_eth_name_1> --op update -n iface.net_ifacename -v <eth_name_1>
    iscsiadm -m iface --interface <iscsi_eth_name_2> --op update -n iface.net_ifacename -v <eth_name_2>

    Specify:

    • <iscsi_eth_name_1> — name of the first iSCSI interface;
    • <iscsi_eth_name_2> — name of the second iSCSI interface;
    • <eth_name_1> — the name of the first network interface you configured in step 2;
    • <eth_name_2> — the name of the second network interface you configured in step 2.
  12. Check the availability of the iSCSI target through the iSCSI interfaces:

    iscsiadm -m discovery -t sendtargets -p <ip_address_1> --interface <iscsi_eth_name_1>
    iscsiadm -m discovery -t sendtargets -p <ip_address_2> --interface <iscsi_eth_name_2>

    Specify:

    • <ip_address_1> — IP address of the first iSCSI target;
    • <ip_address_2> — IP address of the second iSCSI target;
    • <iscsi_eth_name_1> — name of the first iSCSI interface;
    • <iscsi_eth_name_2> — name of the second iSCSI interface.

    A list of iSCSI targets will appear in the response.

    For example:

    203.0.113.101:3260,1 iqn.2006-08.com.huawei:oceanstor:2100d859825625ee::20000:203.0.113.101
    203.0.113.102:3260,11 iqn.2006-08.com.huawei:oceanstor:2100d859825625ee::1020000:203.0.113.102

    Here:

    • 203.0.113.101:3260 — IP address and port of the first iSCSI target;
    • iqn.2006-08.com.huawei:oceanstor:2100d859825625ee::20000:203.0.113.101 — IQN of the first iSCSI target;
    • 203.0.113.102:3260 — IP address and port of the second iSCSI target;
    • iqn.2006-08.com.huawei:oceanstor:2100d859825625ee::1020000:203.0.113.102 — IQN of the second iSCSI target.
  13. Copy the IQN of each iSCSI target. The IQN (iSCSI Qualified Name) is the full unique identifier of the iSCSI device.

  14. Configure CHAP authentication on the iSCSI initiator (a scripted version of steps 14 and 15 is sketched after this list):

    iscsiadm --mode node -T <IQN_1> -p <ip_address_1> --op update -n node.session.auth.authmethod --value CHAP
    iscsiadm --mode node -T <IQN_2> -p <ip_address_2> --op update -n node.session.auth.authmethod --value CHAP
    iscsiadm --mode node -T <IQN_1> --op update -n node.session.auth.username --value <username>
    iscsiadm --mode node -T <IQN_2> --op update -n node.session.auth.username --value <username>
    iscsiadm --mode node -T <IQN_1> -p <ip_address_1> --op update -n node.session.auth.password --value <password>
    iscsiadm --mode node -T <IQN_2> -p <ip_address_2> --op update -n node.session.auth.password --value <password>

    Specify:

    • <IQN_1> — IQN of the first iSCSI target;
    • <IQN_2> — IQN of the second iSCSI target;
    • <ip_address_1> — IP address of the first iSCSI target;
    • <ip_address_2> — IP address of the second iSCSI target;
    • <username> — user name (login) for CHAP authentication. You can see it in the ticket;
    • <password> — password for CHAP authentication. You can see it in the ticket.
  15. Log in to the iSCSI targets through the iSCSI interfaces:

    iscsiadm --mode node -T <IQN_1> -p <ip_address_1> --login --interface <iscsi_eth_name_1>
    iscsiadm --mode node -T <IQN_2> -p <ip_address_2> --login --interface <iscsi_eth_name_2>

    Specify:

    • <IQN_1> — IQN of the first iSCSI target;
    • <IQN_2> — IQN of the second iSCSI target;
    • <ip_address_1> — IP address of the first iSCSI target;
    • <ip_address_2> — IP address of the second iSCSI target;
    • <iscsi_eth_name_1> — name of the first iSCSI interface;
    • <iscsi_eth_name_2> — name of the second iSCSI interface.
  16. Verify that the iSCSI session for each iSCSI target has started:

    iscsiadm -m session

    Two active iSCSI sessions will appear in the response. For example:

    tcp: [5] 203.0.113.101:3260,1 iqn.2006-08.com.huawei:oceanstor:2100d859825625ee::20000:203.0.113.101 (non-flash)
    tcp: [6] 203.0.113.102:3260,11 iqn.2006-08.com.huawei:oceanstor:2100d859825625ee::1020000:203.0.113.102 (non-flash)

    Here, [5] and [6] are the iSCSI session numbers.

  17. Duplicate an iSCSI session for each iSCSI target:

    iscsiadm -m session -r <session_number_1> --op new
    iscsiadm -m session -r <session_number_2> --op new

    Specify <session_number_1> and <session_number_2> — the iSCSI session numbers that you obtained in step 16.

  18. Verify that the iSCSI sessions are duplicated:

    iscsiadm -m session

    Four active iSCSI sessions will appear in the response.

  19. Make sure the settings are applied automatically when the server restarts:

    iscsiadm -m node --loginall=automatic
    systemctl enable iscsi.service
    systemctl enable iscsid.service
  20. For each target, set up two iSCSI sessions that start automatically when the server reboots:

    iscsiadm --mode node -T <IQN_1> -p <ip_address_1> --op update -n node.session.nr_sessions --value <number_of_sessions>
    iscsiadm --mode node -T <IQN_2> -p <ip_address_2> --op update -n node.session.nr_sessions --value <number_of_sessions>

    Specify:

    • <IQN_1> — IQN of the first iSCSI target;
    • <IQN_2> — IQN of the second iSCSI target;
    • <ip_address_1> — IP address of the first iSCSI target;
    • <ip_address_2> — IP address of the second iSCSI target;
    • <number_of_sessions> — number of iSCSI sessions that will be started automatically when the server reboots; in this case, 2.
  21. Optional: reboot the server.
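
If you prefer to script the per-target configuration, the iscsiadm calls from steps 14 and 15 can be driven from a small loop. This is only a sketch, assuming a bash shell; substitute the IQNs, iSCSI target IP addresses, iSCSI interface names, and CHAP credentials from the ticket for the placeholders:

    # Each input line: <IQN> <target_IP_address> <iSCSI_interface_name>
    printf '%s\n' \
        "<IQN_1> <ip_address_1> <iscsi_eth_name_1>" \
        "<IQN_2> <ip_address_2> <iscsi_eth_name_2>" |
    while read -r iqn portal iface; do
        # CHAP settings for this target (step 14)
        iscsiadm --mode node -T "$iqn" -p "$portal" --op update -n node.session.auth.authmethod --value CHAP
        iscsiadm --mode node -T "$iqn" -p "$portal" --op update -n node.session.auth.username --value "<username>"
        iscsiadm --mode node -T "$iqn" -p "$portal" --op update -n node.session.auth.password --value "<password>"
        # Log in through the bound iSCSI interface (step 15)
        iscsiadm --mode node -T "$iqn" -p "$portal" --login --interface "$iface"
    done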

Configure MPIO

MPIO (Multipath I/O) combines multiple I/O paths between the server and the storage LUN into a single device.

  1. Open the Device Mapper Multipath configuration file in the vi text editor:

    vi /etc/multipath.conf
  2. Add the devices section. For the LUN of the Huawei OceanStor Dorado 5000 V6 storage system, we recommend using the parameter values from the example.

    Example of a devices section:

    devices {
        device {
            vendor "HUAWEI"
            product "XSG1"
            path_grouping_policy multibus
            path_checker tur
            prio const
            path_selector "service-time 0"
            failback immediate
            dev_loss_tmo 30
            fast_io_fail_tmo 5
            no_path_retry 15
        }
    }
  3. Exit the vi text editor with your changes saved:

    :wq
  4. Activate and start the service:

    systemctl enable --now multipathd.service
    systemctl status multipathd
  5. Check the availability of the storage LUN:

    multipath -ll

    The response displays the connection topology of the multipath device.
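
After the paths are confirmed, the LUN is available on the server as a single multipath block device under /dev/mapper. An illustrative way to locate it; the device name on your server will differ:

    ls -l /dev/mapper/
    lsblk -o NAME,SIZE,TYPE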