By Bill Koss
April 21, 2014 06:00 AM EDT
[This post is penned by Plexxi executive Bill Koss]
Two months ago I read an interesting blog post by Greg Ferro titled “Cheap Network Equipment Makes a Better Data Centre.” At Plexxi, I lead our sales team, and I found the post interesting because I think there is a lot of misinformation in the market regarding the cost of procuring networks. Every week, it seems, I read a report or article on networking margins, port prices, market share, SDN, bare metal, leaf/spine, open source, Linux, and so on. When I speak to customers, there is an equal amount of confusion, and in my view it perpetuates the status quo in favor of the incumbents.
As a level set, when a potential client asks what Plexxi does, I tell them we build Ethernet switches based on merchant silicon; we use Linux as the operating system, a photonic interconnect fabric, and a distributed controller architecture. Our switches speak Ethernet and IP, just like all Ethernet switches, but it is the controller and photonic fabric that make our switches different. Together, we believe, these technologies result in a system with transformative scale, performance, and efficiency advantages compared to legacy network architectures.
Recently we provided a proposal to a potential client for their network. As with Greg Ferro’s project, we provided an all-in proposal that included cables, software, transceivers, switches, controllers, accessories, support costs, and the fabric interconnect ports. There were two design options, based on the size of the interconnect fabric, which most people refer to as the spine or core. I have condensed some of the details for readability, but I think this is a transparent description of the proposal regarding (i) the cost of building a Plexxi network and (ii) how that cost differs between two fabric options, much like the blog post from Greg. Here is the summary table of the network design options:
| # 10G Client Ports | $ Per 10G Client Port | Fabric Size | Total 1 Year Cost |
In a Plexxi network, we use a controller architecture. Our controller computes efficient photonic forwarding topologies. This type of architecture, often referred to as SDN, provides a number of benefits, beginning with a fluid pool of capacity within the fabric. The capacity in our photonic fabric can be allocated, reassigned, and reconfigured. Without providing a long technical description of how our controller operates, an important concept to understand is that we use 100% of the fabric: we do not implement spanning tree or block links. We compute forwarding topologies using multi-commodity flow, graph-theory math; that is one of the jobs of Plexxi Control. A Plexxi fabric can be used inside your data center, and because the fabric is photonic it can be extended to campus and metro area designs. Here are a few points regarding our controller and fabric architecture:
- The photonic fabric is managed as a pool of capacity
- Applications and workloads that are important can be assigned fabric capacity
- Capacities can change and evolve, and forwarding topologies can be diurnal
- Controller-based fabrics are deterministic, as opposed to distributed-state fabrics, which take time to converge and may or may not arrive at an optimal design
- Controller-based fabrics can centrally compute optimal paths and provide fast convergence by pre-computing failure recovery states
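To make the last point concrete, here is a minimal sketch of what centrally pre-computing failure recovery states can look like. This is not Plexxi Control (which uses multi-commodity flow optimization); it is an illustrative stand-in that uses plain BFS shortest paths on a hypothetical four-switch topology, computing a primary path plus a backup path for each link the primary path crosses, so that a link failure can trigger an already-computed alternative rather than a fresh convergence cycle:

```python
from collections import deque

def shortest_path(adj, src, dst, failed=frozenset()):
    """BFS shortest path over an adjacency map, skipping failed links."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and frozenset((u, v)) not in failed:
                prev[v] = u
                q.append(v)
    return None  # destination unreachable

def precompute_recovery(adj, src, dst):
    """Primary path plus a pre-computed backup for each link on it."""
    primary = shortest_path(adj, src, dst)
    backups = {}
    for u, v in zip(primary, primary[1:]):
        link = frozenset((u, v))
        backups[link] = shortest_path(adj, src, dst, failed={link})
    return primary, backups

# Hypothetical four-switch ring topology
adj = {
    "sw1": ["sw2", "sw4"],
    "sw2": ["sw1", "sw3"],
    "sw3": ["sw2", "sw4"],
    "sw4": ["sw3", "sw1"],
}
primary, backups = precompute_recovery(adj, "sw1", "sw3")
# primary: sw1 -> sw2 -> sw3
# backup if link sw1-sw2 fails: sw1 -> sw4 -> sw3 (already computed)
```

The point of the sketch is the ordering of work: the controller does the path math before any failure occurs, so recovery is a table swap rather than a network-wide recomputation.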
The use of a controller with a photonic fabric provides a number of scaling benefits. The most obvious benefit is a linear cost curve in terms of price per client port. The following chart shows the 10G client port cost scaling from 400 to 3,400 ports.
The linearity of the Plexxi architecture in client port cost can also be seen in power and cooling. In a Plexxi architecture, network performance benefits when the controller keeps packets in the photonic portion of the network as much as possible, limiting silicon switch hops and the latency they incur. Uniform latency and uniform power consumption per client port are benefits of the Plexxi architecture:
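The linearity claim is easy to see arithmetically. In this architecture fabric capacity ships inside each switch, so total cost is just switch count times switch cost, and the per-port figure stays roughly flat as the network grows. The sketch below uses entirely hypothetical numbers (a $30,000 switch with 32 client ports) purely to show the shape of the curve, not Plexxi pricing:

```python
def plexxi_style_cost(ports, cost_per_switch=30_000, ports_per_switch=32):
    """Total cost when fabric capacity ships with each switch (linear)."""
    switches = -(-ports // ports_per_switch)  # ceiling division
    return switches * cost_per_switch

# Per-port cost barely moves across the 400-3,400 port range from the chart
for n in (400, 1400, 2400, 3400):
    per_port = plexxi_style_cost(n) / n
    print(f"{n:>5} ports: ${per_port:,.0f} per 10G client port")
```

Contrast this with a multi-tier design, where crossing a scale threshold forces an additional spine tier and the per-port cost curve steps upward instead of staying flat.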
A question we often get is whether a Plexxi network requires a greenfield or can be deployed in a brownfield. The answer is that there are no greenfields. Plexxi networks have been deployed in a number of variations. We have had clients deploy Plexxi in the spine while leaving the legacy ToR and server connections in place. We have had clients deploy Plexxi as a replacement for their leaf/spine network, thus collapsing a two-tier or three-tier network to a single tier. We have had clients deploy Plexxi between data centers, providing a single-hop, load-balanced fabric between sites. Typically, Plexxi customers connect our switches to legacy routers and switches using 10G or 40G ports. We also have a handful of customers who have extended the Plexxi fabric via DWDM connections to legacy optical transport platforms.
Another question I am often asked is whether our photonic fabric is proprietary. The way to think about our photonic fabric is to compare it to the fabric modules found in traditional core and spine switches. What we have done at Plexxi is take the backplane capacity found in the spine/core switches of multi-tier networks and distribute that capacity into each switch. When you add port capacity, you add fabric capacity. Five years ago this type of network design was not possible; only with the advent of the modern controller architecture, coupled with low-cost, multi-path photonic interconnects, has it become possible. The design objective of a Plexxi network is to manage the network as a resource pool that can be correlated with the needs of compute and storage. We believe networking is entering the era of plenty, and that networks built with rich path diversity are the building blocks of the new networks. We believe this because that has been the direction of compute and storage: both have already entered the era of plenty, and it is time for the network to follow. The era of plenty for networking will be built on a controller architecture because a controller architecture, combined with photonics, merchant silicon, and Linux, is the best means to deliver the following benefits:
- Simplicity: Single tier photonic network
- High Utilization: Load balanced L2 fabric
- Controller Architecture: Unified view of network fabric
- Uniform Low Latency: Direct connectivity
- Faster Failure Handling: Pre-computed forwarding topologies that converge rapidly to a target optimum
- Elastic Network Capacity: Large-scale computation and path optimization through Controller enables fluidity of network capacity
- Reduced Cabling: Simplified network element deployment and insertion