Connect a network drive to the server
A network disk is scalable external network block storage with triple data replication. Triple replication of disk volumes ensures high data integrity. Network disks are suitable for rapidly scaling server disk space.
Network disks are available for connection to dedicated servers in the MSK-1 pool. You can connect network disks to dedicated servers of a ready-made configuration with the corresponding tag, as well as to dedicated servers of a custom configuration with an additional 2 × 10 GE network card and a 10 Gbps connection to the SAN network of network disks.
If you do not have a network disk, create one and create a SAN network for the availability zone.
- Connect the network drive to the server in the control panel.
- Connect the network drive to the server in the server OS.
- Check the MPIO settings.
1. Connect the network drive to the server in the control panel
- In the control panel, from the top menu, click Products and select Dedicated servers.
- Open the server page → the Network disks tab.
- Click Connect a network drive.
- Select a network drive.
- Click .
2. Connect the network disk to the server in the server OS
You can connect a network disk to the server manually or using a ready-made script generated in the control panel. The script can be used only on Ubuntu.
Connect manually
Connect using a script
Ubuntu
Windows
- Connect to the server via SSH or through the KVM console.
- Open the netplan configuration file in the vi text editor:
vi /etc/netplan/50-cloud-init.yaml
- On the network interfaces connected to the SAN switch, add IP addresses and write routes to gain access to the iSCSI targets:
<eth_name_1>:
  addresses:
    - <ip_address_1>
  routes:
    - to: <destination_subnet_1>
      via: <next_hop_1>
<eth_name_2>:
  addresses:
    - <ip_address_2>
  routes:
    - to: <destination_subnet_2>
      via: <next_hop_2>
Specify:
<eth_name_1> — name of the first network interface; it is configured on the first port of the network card;
<eth_name_2> — name of the second network interface; it is configured on the second port of the network card;
<ip_address_1> — IP address of the first port on the network card. You can find it in the control panel: from the top menu, click Products → Dedicated servers → the Network disks and storage section → the Network disks tab → the disk page → the Configuring network interfaces block → the Port IP address column;
<ip_address_2> — IP address of the second port on the network card, in the same column;
<destination_subnet_1> — destination subnet for the first port on the network card, in the Destination subnetwork column of the same block;
<destination_subnet_2> — destination subnet for the second port on the network card, in the same column;
<next_hop_1> — gateway for the first port on the network card, in the Next hop (gateway) column of the same block;
<next_hop_2> — gateway for the second port on the network card, in the same column.
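For illustration, the edited 50-cloud-init.yaml might look like the sketch below. The interface names, addresses, subnets, and gateways are hypothetical placeholders; take the real values from the control panel, and note that the top-level network and ethernets keys are typically already present in the file:

```yaml
network:
  version: 2
  ethernets:
    # first port of the SAN network card (hypothetical name and addresses)
    eth2:
      addresses:
        - 10.100.0.11/29
      routes:
        - to: 10.100.1.0/29
          via: 10.100.0.9
    # second port of the SAN network card (hypothetical name and addresses)
    eth3:
      addresses:
        - 10.100.0.19/29
      routes:
        - to: 10.100.1.4/29
          via: 10.100.0.17
```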
- Exit the vi text editor and save the changes:
:wq
- Apply the configuration:
netplan apply
- Print information about the network interfaces and verify that they are configured correctly:
ip a
- Optional: reboot the server.
- Check the speed of each network interface. It must be at least 10 Gbit/s:
ethtool <eth_name_1> | grep -i speed
ethtool <eth_name_2> | grep -i speed
Specify <eth_name_1> and <eth_name_2> — names of the network interfaces you configured in step 3.
- If the speed is below 10 Gbit/s, file a ticket.
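If you want to check the speed non-interactively, a small sketch like the following can parse the Speed line printed by ethtool and compare it against the 10 Gbit/s minimum. The sample input line and the interface name in the comment are hypothetical:

```shell
#!/bin/sh
# Parse the numeric link speed (in Mb/s) out of the "Speed: ..." line
# printed by `ethtool <iface>`.
parse_speed_mbps() {
    sed -n 's/.*Speed: *\([0-9][0-9]*\)Mb\/s.*/\1/p'
}

# Succeed only if the link speed is at least 10 Gbit/s (10000 Mb/s).
speed_ok() {
    mbps=$(parse_speed_mbps)
    [ -n "$mbps" ] && [ "$mbps" -ge 10000 ]
}

# Demonstration on a captured sample line; on a real server you would run:
#   ethtool eth2 | speed_ok && echo "link OK"
printf 'Speed: 10000Mb/s\n' | speed_ok && echo "link OK"
```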
- Verify that the iSCSI targets are available:
ping -c5 <iscsi_target_ip_address_1>
ping -c5 <iscsi_target_ip_address_2>
Specify:
<iscsi_target_ip_address_1> — IP address of the first iSCSI target. You can find it in the control panel: from the top menu, click Products → Dedicated servers → the Network disks and storage section → the Network disks tab → the disk page → the Configuring the iSCSI connection block → the IP address of iSCSI target 1 field;
<iscsi_target_ip_address_2> — IP address of the second iSCSI target, in the IP address of iSCSI target 2 field of the same block.
- Set the name of the iSCSI initiator:
vi /etc/iscsi/initiatorname.iscsi
InitiatorName=<initiator_name>
Specify <initiator_name> — name of the iSCSI initiator. You can find it in the control panel: from the top menu, click Products → Dedicated servers → the Network disks and storage section → the Network disks tab → the disk page → the Configuring the iSCSI connection block → the Initiator name field.
- Restart the iSCSI services:
systemctl restart iscsid.service
systemctl restart multipathd.service
- Create the iSCSI interfaces:
iscsiadm -m iface -I <iscsi_eth_name_1> --op new
iscsiadm -m iface -I <iscsi_eth_name_2> --op new
Specify:
<iscsi_eth_name_1> — name of the first iSCSI interface;
<iscsi_eth_name_2> — name of the second iSCSI interface.
- Bind the iSCSI interfaces to the network interfaces you configured in step 3:
iscsiadm -m iface --interface <iscsi_eth_name_1> --op update -n iface.net_ifacename -v <eth_name_1>
iscsiadm -m iface --interface <iscsi_eth_name_2> --op update -n iface.net_ifacename -v <eth_name_2>
Specify:
<iscsi_eth_name_1> — name of the first iSCSI interface you created in step 13;
<iscsi_eth_name_2> — name of the second iSCSI interface you created in step 13;
<eth_name_1> — name of the first network interface you configured in step 3;
<eth_name_2> — name of the second network interface you configured in step 3.
- Check the availability of the iSCSI targets through the iSCSI interfaces:
iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_1> --interface <iscsi_eth_name_1>
iscsiadm -m discovery -t sendtargets -p <iscsi_target_ip_address_2> --interface <iscsi_eth_name_2>
Specify:
<iscsi_target_ip_address_1> — IP address of the first iSCSI target;
<iscsi_target_ip_address_2> — IP address of the second iSCSI target;
<iscsi_eth_name_1> — name of the first iSCSI interface you created in step 13;
<iscsi_eth_name_2> — name of the second iSCSI interface you created in step 13.
A list of iSCSI targets will appear in the response. For example:
10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target
10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target
Here:
10.100.1.2:3260 — IP address and port of the first iSCSI target;
iqn.2003-01.com.redhat.iscsi-gw:workshop-target — IQN of the first iSCSI target. The IQN (iSCSI Qualified Name) is the full unique identifier of an iSCSI device;
10.100.1.6:3260 — IP address and port of the second iSCSI target, followed by its IQN.
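Each line of the sendtargets output has the form `<ip>:<port>,<tpgt> <iqn>`. If you need the portal address or IQN in a script, a small sketch like this can split them out; the sample line is taken from the example output above:

```shell
#!/bin/sh
# Split one line of `iscsiadm -m discovery -t sendtargets ...` output,
# which has the form "<ip>:<port>,<tpgt> <iqn>", into portal and IQN.
portal_of() { awk '{ sub(/,.*/, "", $1); print $1 }'; }
iqn_of()    { awk '{ print $2 }'; }

line="10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target"
echo "$line" | portal_of   # 10.100.1.2:3260
echo "$line" | iqn_of      # iqn.2003-01.com.redhat.iscsi-gw:workshop-target
```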
- Configure CHAP authentication on the iSCSI initiator:
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.authmethod --value CHAP
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.authmethod --value CHAP
iscsiadm --mode node -T <IQN> --op update -n node.session.auth.username --value <username>
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --op update -n node.session.auth.password --value <password>
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --op update -n node.session.auth.password --value <password>
Specify:
<iscsi_target_ip_address_1> — IP address of the first iSCSI target;
<iscsi_target_ip_address_2> — IP address of the second iSCSI target;
<IQN> — IQN of the first and second iSCSI targets. You can find it in the control panel: from the top menu, click Products → Dedicated servers → the Network disks and storage section → the Network disks tab → the disk page → the Configuring the iSCSI connection block → the Target name field;
<username> — user name for authorizing the iSCSI initiator, in the Username field of the same block;
<password> — password for authorizing the iSCSI initiator, in the Password field of the same block.
- Log in to the iSCSI targets through the iSCSI interfaces:
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --login --interface <iscsi_eth_name_1>
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --login --interface <iscsi_eth_name_2>
Specify:
<IQN> — IQN of the first and second iSCSI targets;
<iscsi_target_ip_address_1> — IP address of the first iSCSI target;
<iscsi_target_ip_address_2> — IP address of the second iSCSI target;
<iscsi_eth_name_1> — name of the first iSCSI interface;
<iscsi_eth_name_2> — name of the second iSCSI interface.
- Verify that an iSCSI session has started for each iSCSI target:
iscsiadm -m session
Two active iSCSI sessions will appear in the response. For example:
tcp: [1] 10.100.1.2:3260,1 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)
tcp: [3] 10.100.1.6:3260,2 iqn.2003-01.com.redhat.iscsi-gw:workshop-target (non-flash)
Here [1] and [3] — iSCSI session numbers.
- Enable automatic disk mounting on server restart by setting the node.startup parameter to automatic:
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_1> --op update -n node.startup -v automatic
iscsiadm --mode node -T <IQN> -p <iscsi_target_ip_address_2> --op update -n node.startup -v automatic
systemctl enable iscsid.service
systemctl restart iscsid.service
Specify:
<IQN> — IQN of the first and second iSCSI targets;
<iscsi_target_ip_address_1> — IP address of the first iSCSI target;
<iscsi_target_ip_address_2> — IP address of the second iSCSI target.
- Optional: reboot the server.
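Taken together, the manual iscsiadm commands above follow the same pattern for each target. As a sanity aid, here is a dry-run sketch that only prints the commands it would run; the IQN, IP addresses, and interface names are hypothetical placeholders, and the real flow also binds each iSCSI interface to a network interface as described in the binding step:

```shell
#!/bin/sh
# Dry-run sketch of the per-target iSCSI setup sequence above.
# All values are hypothetical placeholders; replace `echo` with the
# real iscsiadm invocation on an actual server.
IQN="iqn.2003-01.com.redhat.iscsi-gw:workshop-target"
TARGET_IP_1="10.100.1.2"; TARGET_IP_2="10.100.1.6"
IFACE_1="iscsi_eth1";     IFACE_2="iscsi_eth2"

run() { echo "iscsiadm $*"; }   # prints the command instead of running it

for i in 1 2; do
    eval ip=\$TARGET_IP_$i iface=\$IFACE_$i
    run -m iface -I "$iface" --op new
    run -m discovery -t sendtargets -p "$ip" --interface "$iface"
    run --mode node -T "$IQN" -p "$ip" --op update \
        -n node.session.auth.authmethod --value CHAP
    run --mode node -T "$IQN" -p "$ip" --login --interface "$iface"
    run --mode node -T "$IQN" -p "$ip" --op update -n node.startup -v automatic
done
```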
If your server is running Hyper-V, the network disk will not work: a disk connected over iSCSI does not support the SCSI-3 Persistent Reservations required for Hyper-V to run in Failover Cluster mode.
- Connect to the server via SSH or through the KVM console.
- Run PowerShell as an administrator.
- Print the list of network interfaces:
Get-NetIPInterface
- On the network interfaces connected to the SAN switch, add IP addresses:
New-NetIPAddress -InterfaceAlias "<eth_name_1>" -IPAddress <ip_address_1> -PrefixLength <mask_1> -DefaultGateway <next_hop_1>
New-NetIPAddress -InterfaceAlias "<eth_name_2>" -IPAddress <ip_address_2> -PrefixLength <mask_2> -DefaultGateway <next_hop_2>
Specify:
<eth_name_1> — name of the first network interface you obtained in step 3;
<ip_address_1> — IP address of the first port on the network card. You can find it in the control panel: from the top menu, click Products → Dedicated servers → the Network disks and storage section → the Network disks tab → the disk page → the Configuring network interfaces block → the Port IP address column;
<mask_1> — destination subnet mask for the first port on the network card, in the Static routes for connecting to iSCSI targets block, Destination subnetwork column;
<next_hop_1> — gateway for the first port on the network card, in the Configuring network interfaces block, Next hop (gateway) column;
<eth_name_2> — name of the second network interface you obtained in step 3;
<ip_address_2> — IP address of the second port on the network card, in the Port IP address column;
<mask_2> — destination subnet mask for the second port on the network card, in the Destination subnetwork column;
<next_hop_2> — gateway for the second port on the network card, in the Next hop (gateway) column.
- Write static routes to gain access to the iSCSI targets:
route add <destination_subnet_1> mask <mask_1> <next_hop_1> -p
route add <destination_subnet_2> mask <mask_2> <next_hop_2> -p
Specify:
<destination_subnet_1> — destination subnet for the first port on the network card. You can find it in the control panel: from the top menu, click Products → Dedicated servers → the Network disks and storage section → the Network disks tab → the disk page → the Static routes for connecting to iSCSI targets block → the Destination subnetwork column;
<mask_1> — destination subnet mask for the first port on the network card, in the same column;
<next_hop_1> — gateway for the first port on the network card, in the Configuring network interfaces block, Next hop (gateway) column;
<destination_subnet_2> — destination subnet for the second port on the network card;
<mask_2> — destination subnet mask for the second port on the network card;
<next_hop_2> — gateway for the second port on the network card.
- Verify that the static routes defined in step 5 have been applied:
route print -4
- Verify that the speed of each network interface is at least 10 Gbit/s:
Get-NetAdapter | Where-Object { $_.Name -eq "<eth_name_1>" } | Select-Object -Property Name,LinkSpeed
Get-NetAdapter | Where-Object { $_.Name -eq "<eth_name_2>" } | Select-Object -Property Name,LinkSpeed
Specify <eth_name_1> and <eth_name_2> — names of the network interfaces you configured in step 4.
- If the speed is below 10 Gbit/s, file a ticket.
- Verify that the iSCSI targets are available:
ping <iscsi_target_ip_address_1>
ping <iscsi_target_ip_address_2>
Specify:
<iscsi_target_ip_address_1> — IP address of the first iSCSI target. You can find it in the control panel: from the top menu, click Products → Dedicated servers → the Network disks and storage section → the Network disks tab → the disk page → the Configuring the iSCSI connection block → the IP address of iSCSI target 1 field;
<iscsi_target_ip_address_2> — IP address of the second iSCSI target, in the IP address of iSCSI target 2 field of the same block.
- Print information about the Microsoft iSCSI Initiator Service:
Get-Service MSiSCSI
The response will display the status of the service. For example:
Status   Name               DisplayName
------   ----               -----------
Running  MSiSCSI            Microsoft iSCSI Initiator Service
Here the Status field displays the current status of the service.
- If the Microsoft iSCSI Initiator Service has the Stopped status, start it:
Start-Service MSiSCSI
- Enable automatic startup of the Microsoft iSCSI Initiator Service:
Set-Service -Name MSiSCSI -StartupType Automatic
- Set the name of the iSCSI initiator:
iscsicli NodeName "<initiator_name>"
Specify <initiator_name> — name of the iSCSI initiator. You can find it in the control panel: from the top menu, click Products → Dedicated servers → the Network disks and storage section → the Network disks tab → the disk page → the Configuring the iSCSI connection block → the Initiator name field.
- Connect the iSCSI target portals:
New-IscsiTargetPortal -TargetPortalAddress <iscsi_target_ip_address_1> -TargetPortalPortNumber 3260 -InitiatorPortalAddress <ip_address_1>
New-IscsiTargetPortal -TargetPortalAddress <iscsi_target_ip_address_2> -TargetPortalPortNumber 3260 -InitiatorPortalAddress <ip_address_2>
Specify:
<iscsi_target_ip_address_1> — IP address of the first iSCSI target. You can find it in the control panel: from the top menu, click Products → Dedicated servers → the Network disks and storage section → the Network disks tab → the disk page → the Configuring the iSCSI connection block → the IP address of iSCSI target 1 field;
<ip_address_1> — IP address of the first port on the network card, in the Configuring network interfaces block, Port IP address column;
<iscsi_target_ip_address_2> — IP address of the second iSCSI target, in the IP address of iSCSI target 2 field;
<ip_address_2> — IP address of the second port on the network card, in the Port IP address column.
- Configure authentication on the iSCSI targets through the iSCSI interfaces:
$iusr="<username>"
$ipasswd="<password>"
$sts=$(Get-IscsiTarget | Select-Object -ExpandProperty NodeAddress)
foreach ($st in $sts) {
    $tpaddr=($st -split ":")[-1]
    Connect-IscsiTarget -NodeAddress $st -TargetPortalAddress $tpaddr -TargetPortalPortNumber 3260 -IsPersistent $true -AuthenticationType ONEWAYCHAP -ChapUsername $iusr -ChapSecret $ipasswd
}
Specify:
<username> — user name for authorizing the iSCSI initiator. You can find it in the control panel: from the top menu, click Products → Dedicated servers → the Network disks and storage section → the Network disks tab → the disk page → the Configuring the iSCSI connection block → the Username field;
<password> — password for authorizing the iSCSI initiator, in the Password field of the same block.
- Print the list of iSCSI targets:
Get-IscsiTarget
A list of iSCSI targets will appear in the response. For example:
IsConnected NodeAddress                                             PSComputerName
----------- -----------                                             --------------
True        iqn.2001-07.com.ceph:user-target-99999:203.0.113.101
True        iqn.2001-07.com.ceph:user-target-0398327:203.0.113.102
- Make sure that the IsConnected parameter is set to True for each iSCSI target.
- Check that the network disk appears in the list of available disks:
Get-Disk | Select-Object Number, FriendlyName, SerialNumber, BusType, OperationalStatus
A list of disks will appear in the response. For example:
Number FriendlyName           SerialNumber       BusType OperationalStatus
------ ------------           ------------       ------- -----------------
0      Samsung SSD 860 EVO    Z3AZNF0N123456     SATA    Online
1      WDC WD2003FZEX-00Z4SA0 WD-1234567890      SATA    Online
2      Virtual iSCSI Disk     0001-9A8B-CD0E1234 iSCSI   Online
3      SanDisk Ultra USB      4C531001230506     USB     Online
Here:
BusType — disk type; iSCSI indicates the network disk;
Number — disk number; in this example the network disk has number 2;
OperationalStatus — status of the network disk, Offline or Online.
- If the network disk has the Offline status, set it to Online:
Set-Disk -Number <block_storage_number> -IsOffline $false
Specify <block_storage_number> — number of the network disk you obtained in step 18.
- Initialize the network disk:
Initialize-Disk -Number <block_storage_number> -PartitionStyle GPT
Specify <block_storage_number> — number of the network disk you obtained in step 18.
- If you are connecting the network disk to the server for the first time, create and format a partition on the network disk:
21.1 Create a partition on the network disk:
New-Partition -DiskNumber <block_storage_number> -UseMaximumSize -AssignDriveLetter
Specify <block_storage_number> — number of the network disk you obtained in step 18.
21.2 Format the network disk partition to the desired file system:
- If you are connecting the network disk to only one server, format the partition to the NTFS file system:
Format-Volume -DriveLetter <volume_letter> -FileSystem NTFS -NewFileSystemLabel "<label>"
Specify:
<volume_letter> — letter of the volume;
<label> — file system (volume) label.
- If you connect one network disk to two or more servers, use the ReFS file system together with CSV (Cluster Shared Volumes); for details, see the Resilient File System (ReFS) overview in the Microsoft documentation.
- In the control panel, from the top menu, click Products and select Dedicated servers.
- Go to the Network disks and storage section → the Network disks tab.
- Open the network disk page.
- In the Configuring network interfaces block, open the Ready configuration file tab.
- Copy the parameters for the netplan configuration file. In the parameters you will need to specify:
<eth_name_1> — name of the first network interface; it is configured on the first port of the network card;
<eth_name_2> — name of the second network interface; it is configured on the second port of the network card.
- In the Configuring the iSCSI connection block, open the Ready-made script tab.
- Copy the text of the iSCSI connection configuration script.
- Connect to the server via SSH or through the KVM console.
- Open the netplan configuration file in the vi text editor:
vi /etc/netplan/50-cloud-init.yaml
- Paste the parameters you copied in step 5. Specify:
<eth_name_1> — name of the first network interface; it is configured on the first port of the network card;
<eth_name_2> — name of the second network interface; it is configured on the second port of the network card.
- Exit the vi text editor and save the changes:
:wq
- Create a script file with the vi text editor:
vi <file_name>
Specify <file_name> — filename with the .sh extension.
- Switch to insert mode by pressing i.
- Paste the script text you copied in step 7 into the file.
- Press Esc.
- Exit the vi text editor and save the changes:
:wq
- Make the script executable:
chmod +x <file_name>
Specify <file_name> — name of the script file you specified in step 12.
- Run the script with arguments:
./<file_name> <eth_name_1> <eth_name_2>
Specify:
<file_name> — name of the script file you specified in step 12;
<eth_name_1>, <eth_name_2> — names of the network interfaces on the ports of the network card that you specified in step 10.
3. Configure MPIO
MultiPath I/O (MPIO) is multi-path input/output that improves the fault tolerance of data transfer to a network disk.
Ubuntu
Windows
In Ubuntu, MPIO is configured by default; check the settings.
- Open the Device Mapper Multipath configuration file in the vi text editor:
vi /etc/multipath.conf
- Make sure that the /etc/multipath.conf file contains only the following lines:
defaults {
    user_friendly_names yes
}
- Make sure that the bindings file contains the WWID of the block device:
cat /etc/multipath/bindings
The command output will display the WWID of the block device. For example:
# Format:
# alias wwid
#
mpatha 3600140530fab7e779fa41038a0a08f8e
- Make sure that the wwids file contains the WWID of the block device:
cat /etc/multipath/wwids
The command output will display the WWID of the block device. For example:
# Valid WWIDs:
/3600140530fab7e779fa41038a0a08f8e/
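To confirm that the two files agree, a sketch like the following can compare the WWID recorded in bindings with the entries in wwids. The demonstration builds sample files mirroring the examples above in hypothetical temporary paths:

```shell
#!/bin/sh
# Check that the WWID from the bindings file also appears in the wwids file.
# bindings lines look like:  mpatha 3600140...   (comments start with #)
# wwids lines look like:     /3600140.../
wwid_from_bindings() { awk '!/^#/ && NF >= 2 { print $2 }' "$1"; }
wwid_in_wwids()      { grep -q "/$1/" "$2"; }

# Demonstration with sample files matching the examples above.
bindings=$(mktemp); wwids=$(mktemp)
printf '# alias wwid\nmpatha 3600140530fab7e779fa41038a0a08f8e\n' > "$bindings"
printf '# Valid WWIDs:\n/3600140530fab7e779fa41038a0a08f8e/\n' > "$wwids"

w=$(wwid_from_bindings "$bindings")
wwid_in_wwids "$w" "$wwids" && echo "WWIDs match: $w"
rm -f "$bindings" "$wwids"
```

On a real server you would pass /etc/multipath/bindings and /etc/multipath/wwids instead of the sample files.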
- Check the network disk connection and make sure that the policy parameter is set to service-time 0:
multipath -ll
The command output will display information about devices, paths, and the current policy. For example:
mpatha (3600140530fab7e779fa41038a0a08f8e) dm-0 LIO-ORG,TCMU device
size=20G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=10 status=active
| `- 8:0:0:0 sdc 8:32 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
`- 9:0:0:0 sdd 8:48 active ready running
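This policy check can also be scripted. The sketch below scans multipath -ll output and succeeds only if every path group uses the service-time 0 policy; the demonstration input reproduces the policy lines from the example above:

```shell
#!/bin/sh
# Succeed only if every policy='...' entry in `multipath -ll` output
# is exactly policy='service-time 0'.
policy_ok() {
    ! grep -o "policy='[^']*'" | grep -qv "policy='service-time 0'"
}

# Demonstration on two sample path-group lines; on a real server:
#   multipath -ll | policy_ok && echo "policy OK"
printf "policy='service-time 0' prio=10 status=active\npolicy='service-time 0' prio=10 status=enabled\n" \
    | policy_ok && echo "policy OK"
```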
- Disable the iSCSI sessions:
$session = Get-IscsiSession
- Install the MPIO components:
Install-WindowsFeature Multipath-IO
- Turn on MPIO:
Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO
- Get a list of devices that support MPIO:
mpclaim.exe -e
The command output will display devices that support MPIO. For example:
"Target H/W Identifier   "  Bus Type     MPIO-ed      ALUA Support
-------------------------------------------------------------------------------
"LIO-ORG TCMU device     "  iSCSI        NO           Implicit Only
Here LIO-ORG TCMU device — network disk ID.
- Enable MPIO support for the network disk:
mpclaim.exe -r -i -d "<block_storage_device>"
Specify <block_storage_device> — the network disk ID you obtained in step 4. Note that the ID must be entered with the spaces preserved.
- Check the MPIO status:
Get-MPIOAvailableHW
The command output will display the MPIO status for the network disk. For example:
VendorId ProductId   IsMultipathed IsSPC3Supported BusType
-------- ---------   ------------- --------------- -------
LIO-ORG  TCMU device True          True            iSCSI
Here the IsMultipathed field displays the MPIO status.
- Make sure that the MPIO path verification mechanism is enabled:
(Get-MPIOSetting).PathVerificationState
The command output will display the status of the MPIO path verification mechanism. For example:
Enabled
- If the MPIO path verification mechanism has the Disabled status, enable it:
Set-MPIOSetting -NewPathVerificationState Enabled
- Associate the volumes on the network disk with logical partitions in the server OS:
iscsicli.exe BindPersistentDevices
- Allow the server OS to access the contents of the network disk volumes:
iscsicli.exe BindPersistentVolumes
- Make sure that the network disk is registered as a persistent device in the server OS configuration:
iscsicli.exe ReportPersistentDevices
The response will show information about the network disk as a persistent device. For example:
Persistent Volumes
"D:\"
Here D:\ — volume on the network disk.