We have been planning a few large, high-density virtual desktop deployments for some banks lately and had to think carefully about how to manage the networking scalably. During the process we crystallised our thoughts on where Cisco are heading with FEX ports and why, and we like the look of it: it's very elegant. It's a long post, but hang in there; there is a lot of background that is important.
Before we look at how UCS and VMFex are changing the way we look at networks, it's worth looking back and reminding ourselves how virtualisation changed the network when VMware took over the data centre.
The challenges
Virtualisation brought a new set of challenges. Servers no longer connect to physical network ports. MAC addresses move from one rack to another. Connectivity can't be managed end to end because vSwitches are a black box to the network team. A set of best practices has evolved to deal with most of these challenges, but even today we see virtualised environments suffering from spanning tree errors, and network admins unable to get the data they need to troubleshoot an issue with a guest, because every time it vMotions it lands on a new port and leaves its counters and other state information behind.
The solutions so far
VMware started down the road to improved virtual networking with the distributed vSwitch. This takes the idea of a vSwitch and extends it across physical hosts: one switch that a VM can remain connected to as it moves from host to host. Cisco followed hot on their heels with the Nexus 1000V, a fully fledged Cisco switch that runs as a virtual entity across your hosts. Now your network admin has end-to-end management, the server guy no longer has to worry about all those network terms, and VMs stay plugged into the same port no matter what host they are on. The 1000V has been around for a few years and is a mature product, but VMFex takes a slightly different approach.
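If you haven't played with distributed port groups, here's a minimal sketch of what "plugging a VM into a port that follows it" looks like in practice, using the open-source pyVmomi library. The vCenter address, credentials, VM name and port group name are all placeholders, and error handling is omitted; treat it as an illustration, not a production script.

```python
# Sketch: move an existing VM's first NIC onto a distributed vSwitch port
# group with pyVmomi. "vcenter.example.com", "web-vm-01" and
# "Prod-DVPortGroup" are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find(vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

vm = find(vim.VirtualMachine, "web-vm-01")
pg = find(vim.dvs.DistributedVirtualPortgroup, "Prod-DVPortGroup")

# Point the VM's first Ethernet card at the distributed port group; the
# port identity now lives on the distributed switch, not on any one host.
nic = next(d for d in vm.config.hardware.device
           if isinstance(d, vim.vm.device.VirtualEthernetCard))
nic.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo(
    port=vim.dvs.PortConnection(
        portgroupKey=pg.key,
        switchUuid=pg.config.distributedVirtualSwitch.uuid))
change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=nic)
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))

Disconnect(si)
```

Once the reconfigure task completes, a vMotion no longer lands the guest on a fresh port, which is exactly the problem described in the challenges section above.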
VMFex – centralising the virtual network
On our trip down memory lane we talked about extending network capacity without creating administrative overhead by adding port capacity to existing switches. This is the kind of expansion we all love – more stuff without more work! This is the heart of UCS across the board, which I’ll talk about in another blog, and VMFex is a key part of it.
In Cisco parlance a FEX is a fabric extender. Like the line cards and modules we installed in switches of old (and not so old), a FEX adds ports to an existing switch while still being managed and controlled by that parent switch. More ports, no more management points; only this time it's not all in one massive chassis. A FEX is a separate unit that can be deployed in another rack, so you can have a physically distributed switch.
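The key property is easier to see in a toy model than in prose. This little Python sketch is ours, for illustration only, not Cisco code; the interface names just mimic Nexus conventions. The FEX contributes ports, but the parent switch owns every one of them.

```python
# Toy model of the fabric extender idea: the parent switch's port table
# simply grows when a FEX is attached. Names mimic Nexus conventions
# (Eth<fex-id>/1/<n>), but this is an illustration, not Cisco code.
from dataclasses import dataclass, field

@dataclass
class ParentSwitch:
    name: str
    ports: dict = field(default_factory=dict)  # port name -> VLAN

    def add_local_ports(self, count, vlan=1):
        for i in range(1, count + 1):
            self.ports[f"Eth1/{i}"] = vlan

    def attach_fex(self, fex_id, count, vlan=1):
        # The FEX may sit in another rack, but its ports simply appear
        # in the parent's port table: more ports, one management point.
        for i in range(1, count + 1):
            self.ports[f"Eth{fex_id}/1/{i}"] = vlan

parent = ParentSwitch("N5K-A")
parent.add_local_ports(32)
parent.attach_fex(fex_id=101, count=48)  # e.g. a 48-port FEX two racks away
print(f"{parent.name} manages {len(parent.ports)} ports")  # 80 ports, one switch
```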
Which, if you've hung in this far, brings us to the VMFex. This is a virtualised FEX that can be installed into VMware in place of a distributed vSwitch. It has all the benefits of the 1000V: it's a true network device, and VMs plug into a port that follows them around. In essence we truly model the physical world from a functionality and management point of view. Only now, the port that your VM plugs into is a port on the upstream physical switch, or more accurately the UCS Fabric Interconnect.
Let's take a moment to consider this. Say you deploy two UCS chassis, eight blades each, all running ESX. We set up VMFex and start pumping in a few hundred VMs. From a management point of view, each one of those VMs is patched directly into the Fabric Interconnect at the top of the rack. The logical topology is exactly the same as if you had one massive switch with a whole bunch of servers plugged into it. Once a guest is connected, it's plugged into a consistent port that remains with it for its lifetime.
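Extending the toy model from earlier (again, our illustration, not Cisco code), the behaviour described above looks like this: a VM's virtual Ethernet port is allocated once on the Fabric Interconnect and survives vMotion between blades.

```python
# Toy illustration of VMFex behaviour: each VM's vNIC is backed by a vEth
# port on the Fabric Interconnect, allocated once and kept for the life
# of the guest. Ours for illustration, not Cisco code.
from dataclasses import dataclass, field

@dataclass
class FabricInterconnect:
    veths: dict = field(default_factory=dict)  # VM name -> vEth port
    next_veth: int = 1

    def connect(self, vm, blade):
        if vm not in self.veths:             # first connection allocates a port
            self.veths[vm] = f"vEth{self.next_veth}"
            self.next_veth += 1
        # a vMotion re-runs this with a new blade, but the port is unchanged
        print(f"{vm} on {blade} -> {self.veths[vm]}")

fi = FabricInterconnect()
fi.connect("desktop-042", "chassis1/blade3")  # desktop-042 ... -> vEth1
fi.connect("desktop-042", "chassis2/blade7")  # after vMotion: still vEth1
```

That persistent port is what gives the network admin back their counters and troubleshooting data, no matter where the guest runs.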
Shut up and take my money
Before you come rushing into the office demanding we deploy this for you, there are, as always, some key limitations. There are two ways the physical network can be presented up to the guests: using the vSphere emulated path, or using DirectPath I/O. With the former you're limited to 56 ports, and therefore 56 guests. With DirectPath I/O this doubles, but critically you lose the ability to oversubscribe memory on your host, which will stop most people enabling it. Clearly, this isn't yet the solution for VDI and other high-consolidation environments.
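To see why those numbers bite, here's a quick back-of-envelope comparison. The port limits are the ones above (assuming, as the post implies, that they apply per host); the 150-desktops-per-host target is a hypothetical high-density VDI figure of ours.

```python
# Back-of-envelope check: per-host VMFex port limits vs a hypothetical
# high-density VDI target. The limits come from the post; the 150-desktop
# target is an assumed figure for illustration.
limits = {"emulated": 56, "DirectPath I/O": 112}
vdi_target = 150  # hypothetical desktops per host

for mode, max_guests in limits.items():
    print(f"{mode}: {max_guests} guests max, "
          f"{vdi_target - max_guests} short of a {vdi_target}-desktop host")
```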
The VMFex is an exciting innovation and it, along with the entire UCS stack, will continue to get better. Watch this space.