Data Centre networking technologies to become cloudy (part I)
Over the last few years, server virtualisation has enabled greater agility and flexibility in Data Centres through the consolidation of resources and sophisticated management capabilities.
Server virtualisation vendors quickly realised that long-term differentiation lies primarily in the management and automation capabilities of their products rather than in pure virtualisation features, which over time become mere “order qualifiers”.
Unfortunately the same cannot be said for Data Centre (DC) networking technologies for the following reasons:
• Networks remain complex to manage due to “hop by hop” configuration requirements, which hinder automation efforts and increase the probability of configuration mistakes
• Multi-tenancy capabilities are based on isolated legacy concepts such as Virtual LANs (VLANs) and Virtual Routing and Forwarding (VRF) instances, which can be difficult to manage, especially at scale.
• Loop protection mechanisms have been around for two decades and, despite many enhancements over the years, cannot meet the scale and adaptability demanded by cloud deployments.
• Networking vendors have traditionally focused on speeds and feeds rather than integrating native and sophisticated management capabilities into their products. This was left to third-party companies who relied on crude methods such as telnet- or SSH-based CLIs for automation (a CLI can hardly be considered an API in the cloud era!).
• Inconsistent networking features and management capabilities across physical and virtual switches, leading to increased management overhead and complicated troubleshooting.
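The “hop by hop” problem above can be sketched in a few lines: provisioning a single tenant VLAN the traditional way means re-expressing the same intent on every switch in the path, so effort and the error surface grow linearly with the number of devices. A minimal illustration (the IOS-like CLI syntax, switch names, and port names are hypothetical):

```python
# Hypothetical sketch of hop-by-hop provisioning: the same tenant VLAN
# must be configured device by device, with no fabric-wide abstraction.

def vlan_config(vlan_id, name, trunk_port):
    """CLI lines (IOS-like, illustrative only) needed on ONE switch."""
    return [
        f"vlan {vlan_id}",
        f" name {name}",
        f"interface {trunk_port}",
        f" switchport trunk allowed vlan add {vlan_id}",
    ]

def provision_everywhere(switches, vlan_id, name):
    """Hop-by-hop model: the intent is duplicated on every device."""
    return {sw: vlan_config(vlan_id, name, "Ethernet1/1") for sw in switches}

configs = provision_everywhere([f"leaf-{i}" for i in range(1, 49)], 100, "tenant-a")
# 48 devices to touch for one VLAN; a single missed switch or typo breaks
# connectivity somewhere, which is exactly what a Fabric aims to eliminate.
print(len(configs))  # 48
```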
Data Centre Network Fabrics
To address these shortcomings, a number of initiatives have emerged over the last 2 years from both the open source community as well as commercial vendors. This first post focuses on developments from mainstream vendors who have been investing heavily in the concept of Data Centre (DC) Network Fabrics.
A number of definitions have emerged of late, but in simple terms DC Fabrics can be defined as the pooling of network devices, both physical and virtual, into a coherent system exhibiting the following desirable characteristics:
• Simplified Management: The entire Fabric can be managed as a single logical entity where advanced RESTful APIs are used to programmatically provision network services internally within the Fabric and from upper orchestration layers.
• Dynamic scalability: Adding or removing physical and virtual switches should be transparent, with newly added switches becoming fully aware of the fabric and its services. The topology should remain loop free with all paths available and active, without relying on legacy loop prevention protocols such as Spanning Tree (STP).
• Continuous Operations: The Fabric in its entirety (all features across all devices) can cope with both planned and unplanned downtime in a non-disruptive manner, enabling the 24×7 continuous operations essential for Cloud services.
• Virtualisation Aware: The fabric must be able to detect and configure virtual switches automatically while enforcing network policies even after virtual machine mobility takes place.
• Any to Any connectivity: The Fabric should be optimised for any-to-any connectivity to provide direct and low-latency communication. This reflects changing data centre traffic patterns: the adoption of VM mobility and I/O convergence means increased east-west traffic.
• Service Aware: The Fabric has to adhere to a service integrity model by understanding and protecting the relationships between the different provisioned elements that make up the “end user” service, even across multiple devices. In a Fabric, you should not be able to delete a VRF with active interfaces assigned to it, or delete a Layer 3 interface on one switch while hosts connected to other switches are using it as a default gateway!
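The service-integrity point can be made concrete with a toy model. The sketch below (all class and method names are hypothetical, not any vendor's actual API) shows a fabric controller that tracks relationships between provisioned elements across the whole fabric and refuses to delete a VRF that still has active interfaces, rather than blindly executing the per-device command:

```python
# Toy model of a service-aware fabric controller: it keeps a fabric-wide
# view of the relationships between provisioned elements and rejects
# operations that would break an active service. Illustrative names only.

class ServiceIntegrityError(Exception):
    pass

class FabricController:
    def __init__(self):
        # VRF name -> set of (switch, interface) assignments across the fabric
        self.vrfs = {}

    def create_vrf(self, name):
        self.vrfs.setdefault(name, set())

    def assign_interface(self, vrf, switch, interface):
        # Interfaces may live on any switch; the controller tracks the
        # fabric-wide relationship, not a per-box view.
        self.vrfs[vrf].add((switch, interface))

    def delete_vrf(self, vrf):
        if self.vrfs.get(vrf):
            raise ServiceIntegrityError(
                f"VRF {vrf} still has {len(self.vrfs[vrf])} active interface(s)")
        self.vrfs.pop(vrf, None)

fabric = FabricController()
fabric.create_vrf("tenant-a")
fabric.assign_interface("tenant-a", "leaf-1", "Ethernet1/1")
fabric.assign_interface("tenant-a", "leaf-7", "Ethernet1/2")
try:
    fabric.delete_vrf("tenant-a")  # refused: interfaces on two switches use it
except ServiceIntegrityError as err:
    print(err)
```

The same pattern generalises to the default-gateway example in the text: the controller would refuse to remove a Layer 3 interface on one switch while it knows hosts attached to other switches depend on it.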
More holistic Data Centre automation
It is certainly early days, so the degree of maturity will vary from one vendor to another and depending on which characteristic is under consideration. In addition, the majority of offerings so far are proprietary and will remain so for the foreseeable future, making it hard to benchmark the various implementations.
This is, however, consistent with the early phases of a new product development cycle, where the rate of innovation is at its highest and competing vendors attempt to establish their solutions and protocols as the industry standard.
The lack of standards should not, however, distract from the potential benefits of DC Fabrics, as they are likely to reduce complexity and cost, increase availability, and pave the way for more holistic DC automation. If you are looking at upgrading your ageing DC network infrastructure or building a new private cloud, you should begin by understanding DC Fabric offerings from different vendors and assessing their applicability to your particular environment.
When engaging vendors, the focus should be on all of the Fabric characteristics covered earlier, including how these vendors can assist you in integrating your shiny new Fabric with what is already in place. It is advisable, however, to consider building in parallel rather than integrating, to maximise the benefits of the new technology and to avoid operating at the lowest common denominator.