Configuring your networking correctly when deploying vSAN is crucial, which is why I’ve chosen to focus on vSAN networking specifically as it relates to cluster settings.  There are several things you need to do when configuring vSAN, and if you mess up the networking portion, it will give you headaches and create hours of troubleshooting for you to enjoy.  I’ll do my best to lay out the right way to configure networking from the start so you can avoid any issues that might rear their heads down the road.  Plus, if you don’t configure it correctly in the beginning, it’s possible you won’t even get off the starting line at all, so let’s make sure you have what you need to get it done from the start.

Check the VMware Hardware Compatibility Guide, and check it again!

The VMware Hardware Compatibility Guide (HCG) will really help you understand which physical parts of your architecture will play nicely with ESXi and vSAN.  I was recently at a customer site where they were experiencing issues with their vSAN Health Service.  They kept getting errors that some of their hardware wasn’t compatible according to VMware’s HCG.  When I looked at it, it turned out they were running an out-of-date release of their storage controller, and that was causing the problems.  Check the HCG, and make sure you aren’t using hardware that’s incompatible or out of date; it will save you a lot of heartburn in the future.

Physical Networking Requirements

Having the right networking backend to support vSAN is crucial to the success of the deployment and the health of your vSAN cluster.  Earlier on, I implored you to check the HCG; please do so… I can’t reiterate this enough.  Read the last section again…  OK, now that I’m off my HCG soapbox, it’s time to consider what kind of physical networking requirements we are looking at to properly configure vSAN.

Place your hosts in the same subnet

It’s very helpful to have your hosts in the same subnet.  What will that get you?  I thought you’d ask… By placing your hosts in the same subnet, you avoid routing vSAN traffic between hosts, which gives you the best networking performance you can get.  I can’t tell you it’s absolutely necessary, but I can tell you it’s something you want to do to make life easier and to get better performance all around.
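
If you want to sanity-check this before you enable vSAN, here’s a minimal Python sketch (standard library only) that verifies a set of vSAN VMkernel IP/prefix pairs all land in the same subnet.  The host names and addresses below are made-up placeholders; swap in your own.

```python
import ipaddress

# Hypothetical vSAN VMkernel addresses (host -> "IP/prefix"); replace with your own.
vsan_vmk_ips = {
    "esxi01": "192.168.50.11/24",
    "esxi02": "192.168.50.12/24",
    "esxi03": "192.168.50.13/24",
}

# Derive the network each interface belongs to.
networks = {host: ipaddress.ip_interface(addr).network
            for host, addr in vsan_vmk_ips.items()}

if len(set(networks.values())) == 1:
    print(f"All vSAN VMkernel interfaces share subnet {next(iter(networks.values()))}")
else:
    print("vSAN VMkernel interfaces span multiple subnets:")
    for host, net in networks.items():
        print(f"  {host}: {net}")
```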

Enable IP Multicast on the Physical Switches

The last customer I worked with was really reluctant to configure multicast on the network; however, it’s perfectly OK to do so and is actually beneficial for vSAN traffic.  The reason multicast should be enabled on the physical switch is that the hosts need to exchange vSAN metadata.  If you wish, you can configure IGMP snooping on the physical switch as well, so that multicast messages are only delivered to the physical switch ports that are connected to your vSAN hosts.  As a side note, if you have multiple vSAN clusters residing on the same subnet, you can simply give each cluster its own multicast address.
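
If you want a quick way to prove that multicast actually makes it across the vSAN subnet, the sketch below joins a multicast group on one machine and prints whatever arrives; send a test datagram to the same group from another machine on the subnet.  The group address and port here are made-up test values, not necessarily what your vSAN cluster uses, so treat this as a generic multicast reachability check rather than a vSAN-specific tool.

```python
import socket
import struct

# Hypothetical test group/port; substitute whatever group you want to test with.
MCAST_GROUP = "239.1.2.3"
MCAST_PORT = 23451

# Create a UDP socket bound to the multicast port.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# Join the multicast group on all interfaces.
mreq = struct.pack("4s4s", socket.inet_aton(MCAST_GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

print(f"Listening for multicast on {MCAST_GROUP}:{MCAST_PORT} ...")
data, sender = sock.recvfrom(1500)
print(f"Received {len(data)} bytes from {sender[0]}")
```

From a second host you can fire a test datagram at the same group with something like `socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(b"ping", (MCAST_GROUP, MCAST_PORT))`.  If the listener never sees it, IGMP snooping or the switch’s multicast configuration is a good place to start looking.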

Dedicate Network Bandwidth on a Physical NIC

Ideally, you should dedicate a minimum of 1 Gbps of bandwidth to vSAN traffic on each host in the cluster.  There are several ways you can do this:

  • Dedicate 1 GbE NICs for a hybrid host configuration
  • Use a dedicated or shared 10 GbE physical NIC for all-flash configurations
  • Use a dedicated or shared 10 GbE physical NIC for hybrid configurations
  • Direct all vSAN traffic onto a 10 GbE physical NIC that also handles other system traffic, and use vSphere Network I/O Control (NIOC) on a dvSwitch to reserve bandwidth for vSAN (see the sketch below)
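
If you go the NIOC route, it helps to confirm what the dvSwitch is currently allocating to vSAN system traffic.  Here’s a rough pyVmomi sketch that just reads the NIOC allocation for each traffic class, including vsan.  The vCenter address, credentials, and switch name are placeholders I made up, and the property names reflect my reading of the vSphere API, so double-check the output against the Web Client.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details; replace with your own.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Find the dvSwitch by name (hypothetical switch name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "dvSwitch-vSAN")
view.Destroy()

# Print the NIOC allocation for each system traffic class, including vsan.
for res in dvs.config.infrastructureTrafficResourceConfig or []:
    alloc = res.allocationInfo
    print(f"{res.key}: shares={alloc.shares.level} ({alloc.shares.shares}), "
          f"reservation={alloc.reservation} Mbps, limit={alloc.limit} Mbps")

Disconnect(si)
```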

Configure a Port Group on a dvSwitch

Take your physical NIC and assign it to the port group as an active uplink.  I’m talking in singular terms here, but best practice would be to team your NICs so you have some high availability built in.  To do this, you can configure LACP on the physical switch and configure the corresponding LAGs in vCenter networking.  Make sure you select a teaming algorithm that matches how the physical NICs are connected back to the physical switch.  Lastly, you can (though it’s not a requirement) VLAN tag all vSAN traffic by enabling it on the virtual switch.
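
While you’re at it, you can double-check what the distributed port group ended up with for teaming and VLAN tagging.  The sketch below is another read-only pyVmomi check; the port group name is a made-up placeholder and the property paths are my best reading of the API for a VMware dvSwitch port group, so treat it as a starting point rather than gospel.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details; replace with your own.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Find the distributed port group by name (hypothetical name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
pg = next(p for p in view.view if p.name == "vSAN-PG")
view.Destroy()

port_cfg = pg.config.defaultPortConfig  # VMwareDVSPortSetting on a VMware dvSwitch
teaming = port_cfg.uplinkTeamingPolicy
print("Teaming policy:", teaming.policy.value)
print("Active uplinks:", teaming.uplinkPortOrder.activeUplinkPort)
print("Standby uplinks:", teaming.uplinkPortOrder.standbyUplinkPort)

vlan = port_cfg.vlan
if isinstance(vlan, vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec):
    print("VLAN ID:", vlan.vlanId)

Disconnect(si)
```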

Configuring vSAN VMkernel Networking

Let’s shift gears and focus on the virtual networking for VMkernel connectivity between hosts.  The following section will give you a step-by-step guide to configuring VMkernel networking for your vSAN cluster.  As a side note, I’m only including the steps you need to configure VMkernel networking on a distributed virtual switch (dvSwitch).  If you find it necessary to use only a standard switch, or your licensing only supports standard switching, see the document at this link.

  1. Open and log into your vSphere Web Client (yes, the web client folks, get used to it!)
  2. Navigate to the Networking view and find the dvSwitch that you want to change.
  3. Select Manage host networking and click Next.
  4. Click on Attached hosts and select from the hosts that are associated with the dvSwitch.
  5. Click Next.
  6. Select Manage VMkernel adapters and click Next.
  7. Click New adapter.
  8. In the Select target device page of the Add Networking wizard, select an existing dvPort Group, and click Next.
  9. In the Port properties page, select Virtual SAN traffic, configure the settings for the VMkernel adapter, and enter appropriate values for these required fields:

Network label – Enter a label. For example, Virtual SAN.

VLAN ID – If you are using VLANs to separate vSAN traffic, enter the relevant VLAN ID.

IP Settings – Enter the desired IPv4 network details (DHCP, if applicable).

TCP/IP stack – Select the relevant TCP/IP stack.
Note: In vSphere 6.x, the network service is called Virtual SAN instead of Virtual SAN Traffic.

10. Click Next to review your settings, then click Finish.  You are now ready to enable your vSAN cluster!
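
Once the wizard completes, it doesn’t hurt to confirm from the API side that every host really does have a VMkernel adapter tagged for vSAN traffic.  Below is a rough pyVmomi sketch that asks each host which vmk carries vSAN; the vCenter address and credentials are placeholders, and this is a read-only sanity check, not an official validation tool.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details; replace with your own.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    # Ask the host which VMkernel adapters are tagged for vSAN traffic.
    net_config = host.configManager.virtualNicManager.QueryNetConfig("vsan")
    selected = set(net_config.selectedVnic or [])
    for vnic in net_config.candidateVnic or []:
        if vnic.key in selected:
            print(f"{host.name}: {vnic.device} ({vnic.spec.ip.ipAddress}) carries vSAN traffic")
view.Destroy()

Disconnect(si)
```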

The image below will give you an idea of how vSAN VMkernel traffic flows and what it is used for.

[Image: vSAN VMkernel traffic flow]

(This image is a capture from Duncan Epping/Cormac Hogan’s Essential vSAN book)

vSAN needs to communicate via specific ports on every host in the cluster.  To ensure that there is no interruption of vSAN cluster communications, there are certain ports that need to be allowed through the firewall.  Below is a chart of the services that communicate between hosts in the cluster, the traffic direction, the communicating nodes, the transport protocol used, and the ports used.  Be sure these ports aren’t blocked on the firewall; if they are, you will have issues with all of these vSAN services.

[Table: vSAN inter-host services with traffic direction, communicating nodes, transport protocol, and ports]
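
As a quick way to spot a blocked port before it bites you, here’s a small Python sketch that attempts TCP connections to a list of host and port pairs.  The host names are placeholders and the port list is only an example; fill both in from the chart above, and keep in mind that the UDP-based services can’t be verified with a simple connect test like this.

```python
import socket

# Hypothetical ESXi hosts; replace with your own.
HOSTS = ["esxi01.lab.local", "esxi02.lab.local", "esxi03.lab.local"]

# Example TCP ports to test; take the authoritative list from the chart above.
TCP_PORTS = [2233, 8080]

for host in HOSTS:
    for port in TCP_PORTS:
        try:
            # Attempt a TCP connection with a short timeout.
            with socket.create_connection((host, port), timeout=3):
                print(f"{host}:{port} reachable")
        except OSError as err:
            print(f"{host}:{port} NOT reachable ({err})")
```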

Where to Next?

Later this week, as another post for vSAN Week on vDestination, you will get a post about the vSAN Health Service and what it does.  You will need to do some configuration there and check to make sure all is green and ready to go.  Stay tuned!

Greg W Stuart
Greg is the owner and editor of vDestination.com. He's been a VMware vExpert every year since 2011. Greg enjoys spending time with his wife and 3 kids. He works as a Sr. Consultant at VMware and resides in Northern Virginia, 15 minutes west of Washington DC.
