Connect VMware to iSCSI

Connecting VMware Hosts to Synology iSCSI

To ensure the quality of the network storage, including its reliability and performance, we recommend dedicating two or more physical network interfaces to iSCSI traffic between your VMware host and the Synology NAS, especially when 1GbE network ports are used.

We also recommend using the software iSCSI adapter rather than a hardware iSCSI HBA, because compatibility has been tested with the software adapter. When there are multiple network ports, you need to configure iSCSI MPIO (Multipath I/O) to achieve load balancing and high availability for the iSCSI storage network.

Note that each network interface must have its own IP address and link aggregation must not be configured. There are two methods for configuring iSCSI MPIO: port binding, or separate network subnets.
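As a rough sketch on the ESXi side, you can enable the software iSCSI adapter and verify that each VMkernel interface has its own IP address from the ESXi Shell (the adapter name returned by the list command, such as vmhba64, is only an example):

# Enable the software iSCSI adapter (a prerequisite for both methods below)
esxcli iscsi software set --enabled=true

# Confirm the adapter is present and note its name (e.g. vmhba64)
esxcli iscsi adapter list

# Verify that each VMkernel interface used for iSCSI has its own IP address
# (one IP per interface, no link aggregation)
esxcli network ip interface ipv4 get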

Method 1: Port Binding

This is the simplest way to configure MPIO. The prerequisites are as follows: the software iSCSI adapter is used; all VMkernel ports used for iSCSI and the Synology NAS network ports are on the same local network and the same IP subnet; and all VMkernel ports are on the same vSwitch.

You can refer to the tutorial "How to Use Port Binding to Configure Multipathing in VMware for Synology NAS" to do this. For more detailed information about port binding, see the resources below and the sketch that follows them.

"Considerations for using software iSCSI port binding in ESX/ESXi"; "Configuring software iSCSI port binding with multiple NICs in one vSwitch for VMware ESXi 5.x and 6.0.x"; and "Multipathing Configuration for Software iSCSI Using Port Binding".
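A minimal sketch of the port-binding commands in the ESXi Shell, assuming the software iSCSI adapter is vmhba64, the iSCSI VMkernel interfaces are vmk1 and vmk2, and their port groups are named iSCSI-1 and iSCSI-2 (all of these names are placeholders; your environment will differ):

# Each iSCSI port group should use exactly one active uplink and no standby uplinks,
# e.g. iSCSI-1 -> vmnic1, iSCSI-2 -> vmnic2
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-1 --active-uplinks=vmnic1
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-2 --active-uplinks=vmnic2

# Bind both VMkernel interfaces to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

# Verify the bindings
esxcli iscsi networkportal list --adapter=vmhba64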

Method 2: Separate Network Subnets

This method must be used in the following scenarios: the iSCSI target network ports are on different local networks and IP subnets, or the VMkernel ports used for iSCSI are on different vSwitches. You can refer to the tutorial "How to Use iSCSI Targets on VMware ESXi Servers with Multipath Support" to do this.

Two physical switches are recommended for connectivity between ESXi hosts and Synology storage arrays, so that a single switch failure cannot cause an outage of the virtual infrastructure.
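A rough sketch of this method from the ESXi Shell, assuming the software iSCSI adapter is vmhba64 and the Synology iSCSI interfaces are 10.10.1.5 and 10.10.2.5 on the two subnets (the addresses and the device identifier below are placeholders):

# Add a send-target (dynamic discovery) address on each subnet
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.10.1.5:3260
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.10.2.5:3260

# Rescan the adapter so both paths to the LUN are discovered
esxcli storage core adapter rescan --adapter=vmhba64

# List the paths, then switch the device to the Round Robin path selection policy
esxcli storage core path list
esxcli storage nmp device set --device=naa.6001405xxxxxxxxx --psp=VMW_PSP_RR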

If Jumbo Frames are used, you must set the correct MTU size end to end, including the ESXi VMkernel ports used for iSCSI traffic, all physical switch ports the VMkernel NICs are connected to, and the Synology iSCSI network interfaces. After configuring MPIO, you can check network utilization in Resource Monitor on Synology DSM, or the ESXi host's performance charts in vSphere Client.
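For the Jumbo Frames part, a minimal sketch (vSwitch1, vmk1, and 192.168.10.5 are placeholder names for the iSCSI vSwitch, the VMkernel interface, and a Synology iSCSI interface) looks like this:

# Set MTU 9000 on the vSwitch and on the iSCSI VMkernel interface
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Test the end-to-end path with an 8972-byte payload and the don't-fragment bit set
# (8972 bytes of payload + 28 bytes of IP/ICMP headers = 9000);
# a failure means some device in the path still uses a smaller MTU
vmkping -d -s 8972 192.168.10.5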

Managing iSCSI Performance

In DSM 6.0, you can see the performance of each iSCSI LUN in Resource Monitor, including throughput, IOPS, and latency. Below is an example of what you will see for an active iSCSI LUN. In general, the bottleneck of iSCSI storage is disk I/O when hard disk drives are installed, because disk latency is usually much higher than network latency.

If the network latency is unusually high, something might be wrong with the network, and you can check the network environment (such as network cables and Ethernet switches) to identify the problem. For random IOPS workloads, the rule of thumb is that average latency should stay below 20 ms when no backup tasks are in progress.

If your latency stays above 20 ms for a long time, consider increasing performance by adding disks or an SSD cache to the volume hosting the iSCSI LUNs. In addition, you can view your workload pattern, including block size and queue depth, which gives you insight into the workload of your virtualized environment.
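On the ESXi side, a quick sanity check of path usage and storage latency might look like the sketch below (the device identifier is a placeholder):

# Confirm that all expected paths to the Synology LUN are active
esxcli storage core path list --device=naa.6001405xxxxxxxxx

# Capture a few esxtop samples in batch mode for offline analysis
# (the DAVG/KAVG/GAVG columns report device, kernel, and guest latency in ms)
esxtop -b -d 5 -n 12 > iscsi-perf.csv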
