Connect a network disk to the server
A network disk is scalable external network block storage with triple data replication. Triple replication of disk volumes provides high data integrity. It is suitable for rapid scaling of server disk space.
Network disks are available for connection to dedicated servers in the MSK-1 pool. You can connect network disks to dedicated servers of a ready configuration that carry the corresponding tag, as well as to dedicated servers of arbitrary configuration with an additional 2 × 10 GE network card and a 10 Gbps Network Disk SAN connection.
If you do not have a network disk, create one and create a SAN for the availability zone.
- Connect the network disk to the server in the control panel.
- Connect the network disk to the server in the server OS.
- Check the MPIO settings.
1. Connect the network disk to the server in the control panel
- In the Control Panel, on the top menu, click Products and select Dedicated Servers.
- Open the server page → Network Disks tab.
- Click Connect Network Disk.
- Select a network drive.
- Click .
2. Connect the network disk to the server in the server OS
You can connect a network disk to the server manually or with a ready-made script generated in the control panel. The script can be used only on Ubuntu.
Connect manually
Connect using a script
Ubuntu
Windows
- Open the netplan utility configuration file with the vi text editor:

  vi /etc/netplan/50-cloud-init.yaml

- On the network interfaces connected to the SAN switch, add IP addresses and write routes to gain access to the iSCSI targets:

  <eth_name_1>:
    addresses:
      - <ip_address_1>
    routes:
      - to: <destination_subnet_1>
        via: <next_hop_1>
  <eth_name_2>:
    addresses:
      - <ip_address_2>
    routes:
      - to: <destination_subnet_2>
        via: <next_hop_2>

  Specify:

  - <eth_name_1> — name of the first network interface; it is configured on the first port of the network card;
  - <eth_name_2> — name of the second network interface; it is configured on the second port of the network card;
  - <ip_address_1> — IP address of the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Port IP address;
  - <ip_address_2> — IP address of the second port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Port IP address;
  - <destination_subnet_1> — destination subnet for the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Destination Subnet;
  - <destination_subnet_2> — destination subnet for the second port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Destination Subnet;
  - <next_hop_1> — gateway for the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Next hop (gateway);
  - <next_hop_2> — gateway for the second port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Next hop (gateway).
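For reference, a filled-in example of the netplan fragment above. All interface names, addresses, subnets, and gateways here are hypothetical; take the real values from the control panel as described in the list above.

```yaml
# Hypothetical example of /etc/netplan/50-cloud-init.yaml additions.
# eth2/eth3 and all addresses are placeholders, not values from your panel.
network:
  version: 2
  ethernets:
    eth2:
      addresses:
        - 10.100.2.10/24
      routes:
        - to: 10.100.1.0/24
          via: 10.100.2.1
    eth3:
      addresses:
        - 10.100.3.10/24
      routes:
        - to: 10.100.1.0/24
          via: 10.100.3.1
```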
- Exit the vi text editor with your changes saved:

  :wq

- Apply the configuration:

  netplan apply

- Print the information about the network interfaces and verify that they are configured correctly:

  ip a

- Optional: reboot the server.

- Check the speed of each network interface. It must be at least 10 Gbit/s:

  ethtool <eth_name_1> | grep -i speed
  ethtool <eth_name_2> | grep -i speed

  Specify <eth_name_1> and <eth_name_2> — the names of the network interfaces configured in step 3.

- If the speed is below 10 Gbps, create a ticket.
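The speed check can also be scripted. A minimal sketch that parses sample ethtool output; the here-document below stands in for the real ethtool command, and eth0 plus its output are hypothetical:

```shell
# Parse the link speed (in Mb/s) out of sample `ethtool` output.
# The here-document stands in for: ethtool <eth_name_1>
speed=$(grep -i speed <<'EOF' | tr -dc '0-9'
Settings for eth0:
        Speed: 10000Mb/s
        Duplex: Full
EOF
)
echo "$speed"  # 10000
# 10 Gbit/s corresponds to 10000 Mb/s
[ "$speed" -ge 10000 ] && echo "link speed is sufficient"
```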
- Verify that the iSCSI targets are available:

  ping -c5 <iscsi_target_ip_address_1>
  ping -c5 <iscsi_target_ip_address_2>

  Specify:

  - <iscsi_target_ip_address_1> — IP address of the first iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 1;
  - <iscsi_target_ip_address_2> — IP address of the second iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 2.

- Enter the name of the iSCSI initiator:

  vi /etc/iscsi/initiatorname.iscsi
  InitiatorName=<initiator_name>

  Specify <initiator_name> — name of the iSCSI initiator. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Initiator name.

- Restart iSCSI:

  systemctl restart iscsid.service
  systemctl restart multipathd.service

- Create iSCSI interfaces:

  iscsiadm -m iface -I <iscsi_eth_name_1> --op new
  iscsiadm -m iface -I <iscsi_eth_name_2> --op new

  Specify:

  - <iscsi_eth_name_1> — name of the first iSCSI interface;
  - <iscsi_eth_name_2> — name of the second iSCSI interface.

- Bind the iSCSI interfaces to the network interfaces you configured in step 3:

  iscsiadm -m iface --interface <iscsi_eth_name_1> --op update -n iface.net_ifacename -v <eth_name_1>
  iscsiadm -m iface --interface <iscsi_eth_name_2> --op update -n iface.net_ifacename -v <eth_name_2>

  Specify:

  - <iscsi_eth_name_1> — name of the first iSCSI interface you created in step 12;
  - <iscsi_eth_name_2> — name of the second iSCSI interface you created in step 12;
  - <eth_name_1> — name of the first network interface you configured in step 3;
  - <eth_name_2> — name of the second network interface you configured in step 3.

- Check the availability of the iSCSI targets through the iSCSI interfaces:

  iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_1> --interface <iscsi_eth_name_1>
  iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_2> --interface <iscsi_eth_name_2>

  Specify:

  - <iscsi_target_ip_address_1> — IP address of the first iSCSI target;
  - <iscsi_target_ip_address_2> — IP address of the second iSCSI target;
  - <iscsi_eth_name_1> — name of the first iSCSI interface you created in step 13;
  - <iscsi_eth_name_2> — name of the second iSCSI interface you created in step 13.

  A list of iSCSI targets will appear in the response. For example:

  10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target
  10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target

  Here:

  - 10.100.1.2:3260 — IP address of the first iSCSI target;
  - iqn.2003-01.com.redhat.iscsi-gw:workshop-target — IQN of the first iSCSI target. The IQN (iSCSI Qualified Name) is the full unique identifier of the iSCSI device;
  - 10.100.1.6:3260 — IP address of the second iSCSI target;
  - iqn.2003-01.com.redhat.iscsi-gw:workshop-target — IQN of the second iSCSI target.
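The discovery output can be post-processed to pull out the IQN, for example to paste it into the later iscsiadm commands. A sketch over sample output (the addresses and IQN are the hypothetical example values used throughout this guide):

```shell
# Extract the unique IQNs from sample `iscsiadm -m discovery` output.
# The here-document stands in for the real discovery command output.
iqns=$(awk '{print $2}' <<'EOF' | sort -u
10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target
10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target
EOF
)
echo "$iqns"  # iqn.2003-01.com.redhat.iscsi-gw:workshop-target
```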
- Configure CHAP authentication on the iSCSI initiator:

  iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.authmethod --value CHAP
  iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.authmethod --value CHAP
  iscsiadm --mode node -T <IQN> --op update -n node.session.auth.username --value <username>
  iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.password --value <password>
  iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.password --value <password>

  Specify:

  - <iscsi_target_ip_address_1> — IP address of the first iSCSI target;
  - <iscsi_target_ip_address_2> — IP address of the second iSCSI target;
  - <IQN> — IQN of the first or second iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Target name;
  - <username> — username for authorization of the iSCSI initiator. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Username;
  - <password> — password for authorization of the iSCSI initiator. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Password.
- Log in to the iSCSI targets through the iSCSI interfaces:

  iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --login --interface <iscsi_eth_name_1>
  iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --login --interface <iscsi_eth_name_2>

  Specify:

  - <IQN> — IQN of the first or second iSCSI target;
  - <iscsi_target_ip_address_1> — IP address of the first iSCSI target;
  - <iscsi_target_ip_address_2> — IP address of the second iSCSI target;
  - <iscsi_eth_name_1> — name of the first iSCSI interface;
  - <iscsi_eth_name_2> — name of the second iSCSI interface.
- Verify that an iSCSI session has started for each iSCSI target:

  iscsiadm -m session

  Two active iSCSI sessions will appear in the response. For example:

  tcp: [1] 10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)
  tcp: [3] 10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)

  Here [1] and [3] are the iSCSI session numbers.
- Enable automatic disk mounting when the server restarts by setting the node.startup parameter to automatic:

  iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --op update -n node.startup -v automatic
  iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --op update -n node.startup -v automatic
  systemctl enable iscsid.service
  systemctl restart iscsid.service

  Specify:

  - <IQN> — IQN of the first or second iSCSI target;
  - <iscsi_target_ip_address_1> — IP address of the first iSCSI target;
  - <iscsi_target_ip_address_2> — IP address of the second iSCSI target.

- Optional: reboot the server.
If your server is running Hyper-V, the network disk will not work. This is because the disk over an iSCSI connection does not support SCSI-3 Persistent Reservations required for Hyper-V to run in Failover Cluster mode.
- Run PowerShell as an administrator.

- Print the list of network interfaces:

  Get-NetIPInterface

- On the network interfaces connected to the SAN switch, add IP addresses:

  New-NetIPAddress -InterfaceAlias "<eth_name_1>" -IPAddress <ip_address_1> -PrefixLength <mask_1> -DefaultGateway <next_hop_1>
  New-NetIPAddress -InterfaceAlias "<eth_name_2>" -IPAddress <ip_address_2> -PrefixLength <mask_2> -DefaultGateway <next_hop_2>

  Specify:

  - <eth_name_1> — name of the first network interface you obtained in step 3;
  - <ip_address_1> — IP address of the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Port IP address;
  - <mask_1> — destination subnet mask for the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Static routes for connecting to iSCSI targets → column Destination Subnet;
  - <next_hop_1> — gateway for the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Next hop (gateway);
  - <eth_name_2> — name of the second network interface you obtained in step 3;
  - <ip_address_2> — IP address of the second port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Port IP address;
  - <mask_2> — destination subnet mask for the second port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Static routes for connecting to iSCSI targets → column Destination Subnet;
  - <next_hop_2> — gateway for the second port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Next hop (gateway).
- Write static routes to gain access to the iSCSI targets:

  route add <destination_subnet_1> mask <mask_1> <next_hop_1> -p
  route add <destination_subnet_2> mask <mask_2> <next_hop_2> -p

  Specify:

  - <destination_subnet_1> — destination subnet for the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Static routes for connecting to iSCSI targets → column Destination Subnet;
  - <mask_1> — destination subnet mask for the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Static routes for connecting to iSCSI targets → column Destination Subnet;
  - <next_hop_1> — gateway for the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Next hop (gateway);
  - <destination_subnet_2> — destination subnet for the second port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Static routes for connecting to iSCSI targets → column Destination Subnet;
  - <mask_2> — destination subnet mask for the second port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Static routes for connecting to iSCSI targets → column Destination Subnet;
  - <next_hop_2> — gateway for the second port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Next hop (gateway).
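Note that New-NetIPAddress takes the subnet mask as a prefix length while route add takes the dotted-decimal form; both encode the same mask. A shell sketch of the conversion (the prefix 29 is an arbitrary example, not a value from your configuration):

```shell
# Convert a CIDR prefix length to a dotted-decimal subnet mask.
prefix=29
mask=""
for i in 0 1 2 3; do
  bits=$(( prefix - i * 8 ))               # mask bits remaining for this octet
  if [ "$bits" -ge 8 ]; then
    octet=255
  elif [ "$bits" -le 0 ]; then
    octet=0
  else
    octet=$(( 256 - (1 << (8 - bits)) ))   # partially filled octet
  fi
  mask="${mask}${octet}"
  if [ "$i" -lt 3 ]; then mask="${mask}."; fi
done
echo "$mask"  # 255.255.255.248
```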
- Verify that the static routes defined in step 5 have been applied:

  route print -4

- Verify that the speed of each interface is at least 10 Gbit/s:

  Get-NetAdapter | Where-Object { $_.Name -eq "<eth_name_1>" } | Select-Object -Property Name,LinkSpeed
  Get-NetAdapter | Where-Object { $_.Name -eq "<eth_name_2>" } | Select-Object -Property Name,LinkSpeed

  Specify <eth_name_1> and <eth_name_2> — the names of the network interfaces configured in step 4.

- If the speed is below 10 Gbps, create a ticket.
- Verify that the iSCSI targets are available:

  ping <iscsi_target_ip_address_1>
  ping <iscsi_target_ip_address_2>

  Specify:

  - <iscsi_target_ip_address_1> — IP address of the first iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 1;
  - <iscsi_target_ip_address_2> — IP address of the second iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 2.

- Print information about the Microsoft iSCSI Initiator Service:

  Get-Service MSiSCSI

  The response will display information about the status of the service. For example:

  Status   Name      DisplayName
  ------   ----      -----------
  Running  MSiSCSI   Microsoft iSCSI Initiator Service

  Here, the Status field displays the current status of the service.

- If the Microsoft iSCSI Initiator Service is in the Stopped status, start it:

  Start-Service MSiSCSI

- Enable automatic startup of the Microsoft iSCSI Initiator Service:

  Set-Service -Name MSiSCSI -StartupType Automatic
- Set the name of the iSCSI initiator:

  iscsicli NodeName "<initiator_name>"

  Specify <initiator_name> — name of the iSCSI initiator. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Initiator name.

- Connect the iSCSI target portals:

  New-IscsiTargetPortal -TargetPortalAddress <iscsi_target_ip_address_1> -TargetPortalPortNumber 3260 -InitiatorPortalAddress <ip_address_1>
  New-IscsiTargetPortal -TargetPortalAddress <iscsi_target_ip_address_2> -TargetPortalPortNumber 3260 -InitiatorPortalAddress <ip_address_2>

  Specify:

  - <iscsi_target_ip_address_1> — IP address of the first iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 1;
  - <ip_address_1> — IP address of the first port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Port IP address;
  - <iscsi_target_ip_address_2> — IP address of the second iSCSI target. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field IP address of the iSCSI target 2;
  - <ip_address_2> — IP address of the second port on the network card. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring network interfaces → column Port IP address.
- Configure authentication on the iSCSI target through the iSCSI interfaces:

  $iusr = "<username>"
  $ipasswd = "<password>"
  $sts = Get-IscsiTarget | Select-Object -ExpandProperty NodeAddress
  foreach ($st in $sts) {
      # The last ":"-separated field of the NodeAddress is the portal address
      $tpaddr = ($st -split ":")[-1]
      Connect-IscsiTarget -NodeAddress $st -TargetPortalAddress $tpaddr -TargetPortalPortNumber 3260 -IsPersistent $true -AuthenticationType ONEWAYCHAP -ChapUsername $iusr -ChapSecret $ipasswd
  }

  Specify:

  - <username> — username for authorization of the iSCSI initiator. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Username;
  - <password> — password for authorization of the iSCSI initiator. You can view it in the control panel: in the top menu, click Products → Dedicated Servers → section Network Disks and Storage → tab Network Disks → disk page → block Configuring an iSCSI connection → field Password.

- Print the list of iSCSI targets:

  Get-IscsiTarget

  A list of iSCSI targets will appear in the response. For example:

  IsConnected NodeAddress                                            PSComputerName
  ----------- -----------                                            --------------
  True        iqn.2001-07.com.ceph:user-target-99999:203.0.113.101
  True        iqn.2001-07.com.ceph:user-target-0398327:203.0.113.102

- Ensure that IsConnected is set to True for each iSCSI target.
- Check that the network disk appears in the list of available disks:

  Get-Disk | Select-Object Number, FriendlyName, SerialNumber, BusType, OperationalStatus

  A list of disks will appear in the response. For example:

  Number FriendlyName             SerialNumber        BusType OperationalStatus
  ------ ------------             ------------        ------- -----------------
  0      Samsung SSD 860 EVO      Z3AZNF0N123456      SATA    Online
  1      WDC WD2003FZEX-00Z4SA0   WD-1234567890       SATA    Online
  2      Virtual iSCSI Disk       0001-9A8B-CD0E1234  iSCSI   Online
  3      SanDisk Ultra USB        4C531001230506      USB     Online

  Here:

  - BusType — disk type;
  - 2 — network disk number;
  - OperationalStatus — status of the network disk, Offline or Online.
- If the status of the network disk is Offline, change it to Online:

  Set-Disk -Number <block_storage_number> -IsOffline $false

  Specify <block_storage_number> — the network disk number you obtained in step 18.

- Initialize the network disk:

  Initialize-Disk -Number <block_storage_number> -PartitionStyle GPT

  Specify <block_storage_number> — the network disk number you obtained in step 18.

- If you are connecting the network disk to the server for the first time, create and format a partition on it:

  21.1 Create a partition on the network disk:

    New-Partition -DiskNumber <block_storage_number> -UseMaximumSize -AssignDriveLetter

  Specify <block_storage_number> — the network disk number you obtained in step 18.

  21.2 Format the network disk partition to the desired file system:

  - If you are connecting the network disk to only one server, format the partition to the NTFS file system:

    Format-Volume -DriveLetter <volume_letter> -FileSystem NTFS -NewFileSystemLabel "<label>"

    Specify:

    - <volume_letter> — volume letter;
    - <label> — label of the file system (volume).

  - If you are connecting a single network disk to two or more servers, use the ReFS file system together with CSV (Cluster Shared Volumes); see the Resilient File System (ReFS) overview in the Microsoft documentation for more information.
- In the Control Panel, on the top menu, click Products and select Dedicated Servers.

- Go to Network Disks and Storage → tab Network Disks.

- Open the network disk page.

- In the block Configuring network interfaces, open the tab Ready Configuration File.

- Copy the parameters for the netplan utility configuration file. In the parameters you will need to specify:

  - <eth_name_1> — name of the first network interface; it is configured on the first port of the network card;
  - <eth_name_2> — name of the second network interface; it is configured on the second port of the network card.
- In the block Configuring an iSCSI connection, open the tab Ready Script.

- Copy the text of the iSCSI connection configuration script.

- Open the netplan utility configuration file with the vi text editor:

  vi /etc/netplan/50-cloud-init.yaml

- Paste the parameters you copied in step 5. Specify:

  - <eth_name_1> — name of the first network interface; it is configured on the first port of the network card;
  - <eth_name_2> — name of the second network interface; it is configured on the second port of the network card.

- Exit the vi text editor with your changes saved:

  :wq
- Create a script file with the vi text editor:

  vi <file_name>

  Specify <file_name> — name of the script file with the .sh extension.

- Switch to insert mode by pressing i.

- Paste the script text you copied in step 7 into the file.

- Press Esc.

- Exit the vi text editor with your changes saved:

  :wq

- Make the script executable:

  chmod +x <file_name>

  Specify <file_name> — name of the script file you created in step 12.

- Run the script with arguments:

  ./<file_name> <eth_name_1> <eth_name_2>

  Specify:

  - <file_name> — name of the script file you created in step 12;
  - <eth_name_1>, <eth_name_2> — names of the network interfaces on the network card ports that you specified in step 10.
3. Configure MPIO
MultiPath-IO (MPIO) is multipath I/O that improves the fault tolerance of data transfer to a network disk.
Ubuntu
Windows
In Ubuntu, MPIO is configured by default; check the settings.
- Open the configuration file of the Device Mapper Multipath utility with the vi text editor:

  vi /etc/multipath.conf

- Make sure that the /etc/multipath.conf file contains only the following lines:

  defaults {
      user_friendly_names yes
  }

- Make sure the bindings file has information about the WWID of the block device:

  cat /etc/multipath/bindings

  The command output will display information about the WWID of the block device. For example:

  # Format:
  # alias wwid
  #
  mpatha 3600140530fab7e779fa41038a0a08f8e

- Make sure that the wwids file has information about the WWID of the block device:

  cat /etc/multipath/wwids

  The command output will display information about the WWID of the block device. For example:

  # Valid WWIDs:
  /3600140530fab7e779fa41038a0a08f8e/
- Check the network disk connection and make sure that the policy parameter is set to service-time 0:

  multipath -ll

  The command output will display information about devices, paths, and the current policy. For example:

  mpatha (3600140530fab7e779fa41038a0a08f8e) dm-0 LIO-ORG,TCMU device
  size=20G features='0' hwhandler='1 alua' wp=rw
  |-+- policy='service-time 0' prio=10 status=active
  | `- 8:0:0:0 sdc 8:32 active ready running
  `-+- policy='service-time 0' prio=10 status=enabled
    `- 9:0:0:0 sdd 8:48 active ready running
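To verify both paths non-interactively, you can count the lines reporting a healthy path. A sketch over sample multipath -ll output (the here-document stands in for the real command; the WWID and device names are the example values above):

```shell
# Count healthy paths in sample `multipath -ll` output.
paths=$(grep -c 'active ready running' <<'EOF'
mpatha (3600140530fab7e779fa41038a0a08f8e) dm-0 LIO-ORG,TCMU device
size=20G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=10 status=active
| `- 8:0:0:0 sdc 8:32 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  `- 9:0:0:0 sdd 8:48 active ready running
EOF
)
echo "$paths"  # 2
```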
- Disable iSCSI sessions:

  $session = Get-IscsiSession

- Install the MPIO components:

  Install-WindowsFeature Multipath-IO

- Turn on MPIO:

  Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO

- Get a list of devices that support MPIO:

  mpclaim.exe -e

  The command output will display devices that support MPIO. For example:

  "Target H/W Identifier " Bus Type MPIO-ed ALUA Support
  -------------------------------------------------------------------------------
  "LIO-ORG TCMU device " iSCSI NO Implicit Only

  Here LIO-ORG TCMU device is the network disk ID.

- Enable MPIO support for the network disk:

  mpclaim.exe -r -i -d "<block_storage_device>"

  Specify <block_storage_device> — the network disk ID you obtained in step 4. Note that the ID must be entered with spaces.
- Check the MPIO status:

  Get-MPIOAvailableHW

  The command output will display the MPIO status for the network disk. For example:

  VendorId ProductId   IsMultipathed IsSPC3Supported BusType
  -------- ---------   ------------- --------------- -------
  LIO-ORG  TCMU device True          True            iSCSI

  Here, the IsMultipathed field displays the MPIO status.

- Ensure that the MPIO device path verification mechanism is enabled:

  (Get-MPIOSetting).PathVerificationState

  The command output will display the status of the path verification mechanism. For example:

  Enabled

- If the path verification mechanism is in the Disabled status, enable it:

  Set-MPIOSetting -NewPathVerificationState Enabled

- Associate the volumes on the network disk with logical partitions in the server OS:

  iscsicli.exe BindPersistentDevices

- Allow the server OS to access the contents of the network disk volumes:

  iscsicli.exe BindPersistentVolumes

- Make sure that the network disk is registered as a persistent device in the server OS configuration:

  iscsicli.exe ReportPersistentDevices

  The response will show information about the network disk as a persistent device. For example:

  Persistent Volumes
  "D:\"

  Here, D:\ is a volume on the network disk.