It's Raining VMs

Anshul Sadana, SVP Customer Engineering, Arista Networks

There is something magical about having your apps provisioned within seconds in your data center or a collection of data centers, a.k.a. the cloud. We have come a long way, from waiting months for the internal IT team to buy and install a server, to a fully automated provisioning system where a user can log in and self-provision. While hypervisor virtualization options have existed for many years, automating the entire stack has taken a decade; the fully automated world of cloud provisioning is not the product of one new technology but of a long evolution. Server virtualization was the first building block: most enterprises use VMware ESX, while the large public cloud providers built their own stacks.

"Virtualization at the server level, network, services and hybrid cloud infrastructure are necessary building blocks for an all automated cloud-like solution"

It was circa 2009 that the world recognized the need to scale the data center into a multi-tenant, enterprise-class data center. Hosting providers were using simple VLAN-based segmentation of tenants and were limited to roughly 4,000 tenants per zone, the maximum number of VLAN IDs a standard 802.1Q network supports. In addition, connecting multiple racks together for the same tenant required Layer 2 adjacency between those racks, constraining the network to a pair of large aggregation switches fighting to survive the world of Spanning Tree to avoid loops. By 2012, a new standard called VXLAN had been designed. VXLAN lets IT teams build the network as a resilient, N-way redundant fabric using standard routing protocols such as OSPF or BGP. Application traffic runs in an overlay, allowing tenants to be spread across racks and providing scale beyond a single rack or VLAN. This lets large enterprises mimic the cloud and deliver a large, scalable cluster of machines running the right business apps, virtualized or physical. However, for fully automated provisioning, virtual machines are certainly the better approach, as they force an abstraction away from any hard binding to a specific server.
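The scale gap between VLANs and VXLAN comes straight from the header formats: an 802.1Q VLAN tag carries a 12-bit ID, while the VXLAN header (RFC 7348) carries a 24-bit VXLAN Network Identifier (VNI). A minimal sketch of both, using only the Python standard library (the helper name `vxlan_header` is illustrative, not from any real library):

```python
import struct

VLAN_ID_BITS = 12    # 802.1Q VLAN tag
VXLAN_VNI_BITS = 24  # VXLAN Network Identifier, per RFC 7348

# Usable VLAN IDs exclude the reserved values 0 and 4095.
usable_vlans = (1 << VLAN_ID_BITS) - 2   # 4094
vxlan_segments = 1 << VXLAN_VNI_BITS     # 16777216 (~16.7 million)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte with the 'I' bit set
    (VNI is valid), 3 reserved bytes, 24-bit VNI, 1 reserved byte.
    In a real packet this header sits inside a UDP datagram
    (destination port 4789), followed by the inner Ethernet frame."""
    flags = 0x08  # 'I' flag
    return struct.pack("!B3xI", flags, vni << 8)

hdr = vxlan_header(5000)
assert len(hdr) == 8
```

The jump from ~4,000 to ~16.7 million segments is what allows a tenant identifier to be allocated per customer rather than rationed per zone.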

Today, compute and storage are connected using overlay techniques; however, we have not reached the end point. The world of the automated cloud, public or private, continues to evolve at a rapid pace. First, VXLAN can run over any IP network, but the end points need to support VXLAN. These end points are the hypervisor/vSwitch, or the physical top-of-rack switch for non-virtualized or legacy assets. In addition, the MAC learning mechanism needs adjustment to work over a Layer 3 network; this requires additional intelligence in the switches and the controller, which synchronize the network tables when a new VM spins up or vMotion moves one. Then there is the integration into the upstream stack that ties a VM to the actual business application, including billing systems in public clouds. This is the necessary software stack: VXLAN-capable hypervisors, physical switches, and a controller to tie them together. Now all initial setup procedures can be fully automated in software, and you start living in the cloud! While large cloud customers have built their own overlay techniques and virtualization stacks, most enterprises have relied on VMware for server virtualization. Enterprises are applying DevOps techniques tied to a controller or APIs to get to these internal cloud builds. In addition to provisioning, special attention is being given to automation frameworks for monitoring and maintaining the infrastructure. The new requirement in the cloud is no downtime - ever!
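The controller's role in synchronizing network tables can be sketched as a shared MAC-to-VTEP mapping, updated when a VM spins up or vMotion moves it. This is a minimal illustration only; class and method names are hypothetical, and a real controller would push each update out to the tunnel end points rather than hold the table centrally:

```python
from collections import defaultdict

class OverlayController:
    """Sketch of a controller keeping MAC-to-VTEP mappings in sync,
    so end points can forward over the Layer 3 underlay instead of
    relying on classic Layer 2 flood-and-learn."""

    def __init__(self):
        # per-VNI table: tenant MAC address -> IP of the VTEP hosting it
        self.mac_table = defaultdict(dict)
        self.vteps = set()

    def register_vtep(self, vtep_ip):
        self.vteps.add(vtep_ip)

    def vm_started(self, vni, mac, vtep_ip):
        """Called when a VM spins up, or lands after a vMotion move.
        A real controller would now distribute this entry to every
        other VTEP participating in this VNI."""
        self.mac_table[vni][mac] = vtep_ip

    def lookup(self, vni, mac):
        return self.mac_table[vni].get(mac)

ctrl = OverlayController()
ctrl.register_vtep("10.0.0.1")
ctrl.register_vtep("10.0.0.2")
ctrl.vm_started(vni=5000, mac="00:aa:bb:cc:dd:01", vtep_ip="10.0.0.1")
# vMotion moves the VM to the rack behind the other VTEP:
ctrl.vm_started(vni=5000, mac="00:aa:bb:cc:dd:01", vtep_ip="10.0.0.2")
assert ctrl.lookup(5000, "00:aa:bb:cc:dd:01") == "10.0.0.2"
```

The key point the sketch captures is that the mapping is updated by an event from the virtualization layer, not learned by flooding frames across the fabric.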

There is strong momentum towards a more common stack that can be leveraged across large public clouds, service providers and enterprises. That fully automated, largely open-source effort is being baked right now. It will likely be a few years before this stack becomes a turn-key solution for large enterprises, but the underlying mechanisms of tenant separation using VXLAN and standardized APIs for automation (used by several companies to develop the rest of the stack) are the essence of this solution.

One other area evolving in this space is Network Function Virtualization, or NFV. While servers have been virtualized and multi-tenancy achieved using VXLAN and overlays, there is an unmet need to insert other services, such as load balancers or security applications, on a per-tenant basis. Large enterprises today deploy these services at higher layers of the network, closer to the Internet. For traffic flow within the data center, security is typically enforced by routing traffic between different zones through firewalls. However, these setups are static and need manual changes to provision new services. With NFV concepts, dynamic insertion of services becomes feasible, though this area is still evolving and not yet mature. Lastly, interconnecting in-house infrastructure with the public cloud is becoming a hot topic. This makes complete sense: use in-house infrastructure and apps based on your local storage and compliance needs, and go to the cloud for DR and bursty demand. Such interconnections are complex, as each cloud provider uses a different tunneling technique to carry data through the Internet and maintain resiliency across various failure conditions.
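Per-tenant service insertion can be pictured as an ordered chain of virtual functions that traffic traverses, which the controller can modify dynamically instead of recabling or re-routing by hand. The sketch below is purely illustrative (the class and the toy firewall/load-balancer functions are invented for this example, not any NFV product's API):

```python
class TenantServiceChain:
    """Toy model of per-tenant dynamic service insertion: each tenant
    has an ordered list of service functions its traffic passes through."""

    def __init__(self):
        self.chains = {}  # tenant -> ordered list of service functions

    def insert(self, tenant, service):
        # Dynamic insertion: append a service without touching other tenants.
        self.chains.setdefault(tenant, []).append(service)

    def process(self, tenant, packet):
        for service in self.chains.get(tenant, []):
            packet = service(packet)
            if packet is None:  # a security function dropped the packet
                return None
        return packet

def firewall(pkt):
    # Drop telnet as a stand-in for a per-tenant security policy.
    return None if pkt.get("port") == 23 else pkt

def load_balancer(pkt):
    backends = ["10.1.0.11", "10.1.0.12"]
    pkt["dst"] = backends[hash(pkt["src"]) % len(backends)]
    return pkt

chains = TenantServiceChain()
chains.insert("tenant-a", firewall)
chains.insert("tenant-a", load_balancer)
assert chains.process("tenant-a", {"src": "192.0.2.1", "port": 23}) is None
```

Tenants with no chain configured simply pass traffic through untouched, which mirrors how service insertion is an opt-in, per-tenant decision rather than a static network-wide choke point.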

In summary, the cloud is growing fast. There are two unique characteristics that come with it - it’s cheap and it stays up all the time! There are several automation techniques that have been developed to stitch the infrastructure and apps together. Virtualization at the server level, network, services and hybrid cloud infrastructure are necessary building blocks for an all automated cloud-like solution. Our teams can leverage these new technologies and deliver a user experience that everyone now expects in an always-on world.
