Cisco Silicon One Breaks the 51.2 Tbps Barrier


In December 2019, we made a bold announcement about how we would forever change the economics of the internet and drive innovation at a pace nobody had ever seen before. These were bold claims, and not surprisingly, many people took a wait-and-see attitude. Since then, we have continued to innovate at an increasingly fast pace, leading the industry with innovative solutions that meet our customers' needs.

Today, just three and a half years after launching Cisco Silicon One™, we are proud to announce our fourth-generation set of devices, the Cisco Silicon One G200 and Cisco Silicon One G202, which we are sampling to customers now. Typically, new generations are released every 18 to 24 months, so this demonstrates a pace of innovation that is two times faster than normal silicon development.

The Cisco Silicon One G200 offers the benefits of our unified architecture and focuses specifically on enhanced Ethernet-based artificial intelligence/machine learning (AI/ML) and web-scale spine deployments. The Cisco Silicon One G200 is a 5 nm, 51.2 Tbps, 512 x 112 Gbps serializer-deserializer (SerDes) device. It is a uniquely programmable, deterministic, low-latency device with advanced visibility and control, making it the ideal choice for web-scale networks.

The Cisco Silicon One G202 brings similar benefits to customers who still want to use 50G SerDes for connecting optics to the switch. It is a 5 nm, 25.6 Tbps, 512 x 56 Gbps SerDes device with the same characteristics as the Cisco Silicon One G200 but with half the performance.

To achieve the vision of Cisco Silicon One, it was critical for us to invest in key technologies. Seven years ago, Cisco began investing in our own high-speed SerDes development and realized immediately that as speeds increase, the industry must move to analog-to-digital converter (ADC)-based SerDes. SerDes acts as a fundamental building block of networking interconnect for high-performance compute and AI deployments. Today, we are pleased to announce our next-generation, ultra-high-performance, and low-power 112 Gbps ADC SerDes capable of ultra-long-reach channels supporting 4-meter direct-attach cables (DACs), traditional optics, linear drive optics (LDO), and co-packaged optics (CPO), while minimizing silicon die area and power.

 

Figure 1. Cisco Silicon One product family

 

The Cisco Silicon One G200 and G202 are uniquely positioned in the industry with advanced features to optimize the real-world performance of AI/ML workloads, while simultaneously driving down the cost, power, and latency of the network with significant innovations.

The Cisco Silicon One G200 is the ideal solution for Ethernet-based AI/ML networks for several reasons:

~ With the industry's highest-radix switch, with 512 x 100GE Ethernet ports on one device, customers can build a 32K 400G GPU AI/ML cluster with a 2-layer network requiring 50% less optics, 40% fewer switches, and 33% fewer networking layers, drastically reducing the environmental footprint of the AI/ML cluster (see the sizing sketch after this list). This saves up to 9 million kWh per year, which according to the U.S. Environmental Protection Agency is equivalent to more than 6,000 metric tons of carbon dioxide (CO2e) emissions or burning 7.3 million pounds of coal per year.

~ Advanced congestion-aware load-balancing techniques enable networks to avoid traditional congestion events.

~ Advanced packet-spraying techniques minimize the creation of congestion hot spots in the network.

~ Advanced hardware-based link-failure recovery delivers optimal performance across massive web-scale networks, even in the presence of faults.
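
As a rough illustration of where the 32K figure comes from, here is a back-of-the-envelope sizing of a non-blocking, 2-tier leaf/spine fabric built from 512 x 100GE switches, assuming each GPU attaches at 400G as 4 x 100GE. The topology, link speeds, and optics-per-link counts are illustrative assumptions, and the percentage comparisons quoted above depend on which alternative design is used as the baseline.

```python
# Back-of-the-envelope sizing of a non-blocking 2-tier (leaf/spine) Clos fabric
# built from 51.2 Tbps, 512 x 100GE switches, with each GPU attached at 400G
# (4 x 100GE). Illustrative assumptions only, not a published Cisco design.

SWITCH_PORTS_100G = 512          # 100GE ports per G200-class switch
GPU_PORTS_100G = 4               # a 400G GPU NIC consumes 4 x 100GE leaf ports

# Non-blocking leaf: half the ports face GPUs, half face the spines.
down_ports = SWITCH_PORTS_100G // 2           # 256 x 100GE toward GPUs
up_ports = SWITCH_PORTS_100G // 2             # 256 x 100GE toward spines
gpus_per_leaf = down_ports // GPU_PORTS_100G  # 64 GPUs per leaf

# A spine can fan out to at most one leaf per port, so 512 leaves maximum.
max_leaves = SWITCH_PORTS_100G
max_gpus = gpus_per_leaf * max_leaves
print(f"max GPUs in 2 tiers: {max_gpus}")     # 32768 -> the 32K figure in the text

# Switch count for the full 32K-GPU build:
leaves = max_gpus // gpus_per_leaf                      # 512 leaves
spines = (leaves * up_ports) // SWITCH_PORTS_100G       # 256 spines
print(f"leaves={leaves}, spines={spines}, total switches={leaves + spines}")

# Leaf<->spine optics, assuming 400G optical links with 2 transceivers per link:
fabric_links_400g = (leaves * up_ports) // GPU_PORTS_100G
print(f"fabric optics: {2 * fabric_links_400g}")
```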

 

Figure 2. Benefits of high-radix 51.2 Tbps switches

 

Here is a closer look at some of our many Cisco Silicon One-related innovations:

Converged architecture

~ Cisco Silicon One provides one architecture that can be deployed across customer networks, from routing roles to web-scale front-end networks to web-scale back-end networks, dramatically reducing deployment timelines while simultaneously minimizing ongoing operational costs by enabling a converged infrastructure.

~ Using a common software development kit (SDK) and standard Switch Abstraction Interface (SAI) layers, customers need only port the Cisco Silicon One environment to their network operating system (NOS) once and leverage that investment across diverse network roles.

~ Like all our devices, the Cisco Silicon One G200 has a large, fully unified packet buffer that optimizes burst absorption and throughput in large web-scale networks. This minimizes head-of-line blocking by absorbing bursts instead of generating priority flow control (a simple illustration follows this list).
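
A minimal sketch of why a fully shared buffer helps, using made-up numbers: an incast burst that would push a statically carved per-port slice past its priority flow control (PFC) threshold can be absorbed when the whole pool is available to one congested port. Buffer sizes and thresholds here are illustrative assumptions, not G200 specifications.

```python
# Contrast a statically partitioned packet buffer with a fully shared buffer
# under an incast burst landing on one egress port. All numbers are made up.

TOTAL_BUFFER_MB = 80          # hypothetical on-chip packet buffer
PORTS = 64
burst_mb = 6                  # burst destined to a single egress port

# (a) Static partitioning: each port owns an equal slice of the buffer.
per_port_slice = TOTAL_BUFFER_MB / PORTS             # 1.25 MB per port
pfc_threshold = 0.8 * per_port_slice
print("static slice triggers PFC:", burst_mb > pfc_threshold)   # True -> pause upstream

# (b) Fully shared buffer: the congested port can draw from the whole pool.
shared_threshold = 0.8 * TOTAL_BUFFER_MB
print("shared pool triggers PFC:", burst_mb > shared_threshold) # False -> burst absorbed
```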

Optimization across the entire value chain

~ The Cisco Silicon One G200 has up to two times higher radix than other solutions, with 512 Ethernet MACs, enabling customers to significantly reduce the cost, power, and latency of network deployments by removing layers of their network.

~ With our own internally developed, next-generation SerDes technology, the Cisco Silicon One G200 device is capable of driving 43 dB bump-to-bump channels that enable co-packaged optics (CPO), linear pluggable optics (LPO), and the use of 4-meter 26 AWG copper cables, which is well beyond IEEE standards, for optimal in-rack connectivity.

~ The Silicon One G200 is over two times more power efficient, with two times lower latency, compared to our already optimized Cisco Silicon One G100 device.

~ The physical design and layout of the device is built with a system-first approach, allowing customers to run system fans slower, dramatically reducing system power draw.

Innovative load balancing and fault detection

~ Support for non-correlated, weighted equal-cost multipath (WECMP) and equal-cost multipath (ECMP) load balancing with near-ideal characteristics helps avoid hash polarization, even across massive networks.

~ Congestion-aware load balancing for stateful ECMP, flow, and flowlet enables optimal network throughput with optimal flow-completion time and job-completion time (JCT); a conceptual sketch follows this list.

~ Congestion-aware stateless packet spraying enables near-ideal JCT by using all available network bandwidth, regardless of flow characteristics.

~ Support for hardware-based redistribution of packets based on link failures allows the Cisco Silicon One G200 to optimize real-world throughput of large-scale networks.
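
The flowlet idea behind congestion-aware load balancing can be summarized in a few lines of Python. This is a generic, software-level illustration of the technique, not the G200's hardware algorithm; the flowlet gap, the flow key, and the load metric are assumptions.

```python
# Toy congestion-aware flowlet load balancer over N equal-cost uplinks.
# Packets of a flow stick to one path while they arrive close together; after
# an idle gap, the new "flowlet" may be re-steered to the least-loaded path.

import time

FLOWLET_GAP_S = 0.0005   # assumed idle gap that opens a new flowlet

class FlowletBalancer:
    def __init__(self, num_paths):
        self.path_load = [0] * num_paths   # stand-in congestion metric per uplink
        self.last_seen = {}                # flow key -> (last packet time, path)

    def pick_path(self, flow_key, pkt_bytes, now=None):
        now = time.monotonic() if now is None else now
        last = self.last_seen.get(flow_key)
        if last is not None and now - last[0] < FLOWLET_GAP_S:
            path = last[1]                 # same flowlet: keep packet order
        else:                              # idle gap: safe to re-steer the flow
            path = min(range(len(self.path_load)), key=self.path_load.__getitem__)
        self.last_seen[flow_key] = (now, path)
        self.path_load[path] += pkt_bytes  # real hardware would also drain this as queues empty
        return path

lb = FlowletBalancer(num_paths=8)
print(lb.pick_path(("10.0.0.1", "10.0.0.2", 4791), pkt_bytes=4096))
```

Per-packet spraying is the same mechanism with the flowlet gap driven to zero: every packet independently takes the least-loaded path, trading packet reordering (handled elsewhere) for near-ideal link utilization.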

Advanced packet processor

~ The Cisco Silicon One G200 uses the industry's first fully custom, P4-programmable parallel packet processor capable of launching more than 435 billion lookups per second. It supports advanced features like SRv6 Micro-SID (uSID) at full rate and is extensible with full run-to-completion processing for even more complex flows. This unique packet processing architecture enables flexibility with deterministic low latency and power. (The uSID per-hop behavior is illustrated below.)
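
For readers unfamiliar with SRv6 uSID, the core per-hop operation is a simple shift of the IPv6 destination address: the active uSID sits right after the uSID block, and the local node consumes it by shifting the remaining uSIDs left and back-filling with the end-of-container value. The model below is a software illustration of that behavior under an assumed 32-bit block and 16-bit uSIDs; it says nothing about how the G200 implements it in hardware.

```python
# Illustrative model of the SRv6 Micro-SID (uSID) shift-and-forward behavior.
# Block length, uSID size, and the example addresses are assumptions.

import ipaddress

BLOCK_BITS = 32      # assumed uSID block length (e.g. fc01:0::/32)
USID_BITS = 16       # each uSID is 16 bits

def usid_shift(dst: str) -> str:
    """Return the destination address after consuming the active uSID."""
    addr = int(ipaddress.IPv6Address(dst))
    carrier_bits = 128 - BLOCK_BITS
    block = addr >> carrier_bits                   # keep the uSID block prefix
    carrier = addr & ((1 << carrier_bits) - 1)     # the list of uSIDs
    carrier = (carrier << USID_BITS) & ((1 << carrier_bits) - 1)  # shift left, zero-fill
    return str(ipaddress.IPv6Address((block << carrier_bits) | carrier))

# fc01:0 is the block; 0100, 0200, 0300 are uSIDs for three downstream hops.
print(usid_shift("fc01:0:100:200:300::"))   # -> fc01:0:200:300::
```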

Deep visibility and analytics

~ Programmable processors enable support for standard and emerging web-scale in-band telemetry standards, providing industry-leading network visibility.

~ Embedded hardware analyzers detect microbursts with pre- and post-event logging of temporal flow information, giving network operators the ability to analyze network events after the fact with hardware-time visibility (the capture pattern is sketched below).
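
The pre/post-event capture pattern can be illustrated with a small software model: queue-depth samples stream through a ring buffer, and when a sample crosses a trigger threshold, the history leading up to the event is frozen along with a fixed window after it. Thresholds, sample counts, and the queue-depth metric are illustrative assumptions, not device parameters.

```python
# Software sketch of pre/post-event microburst capture with a rolling history.

from collections import deque

class MicroburstCapture:
    def __init__(self, trigger_bytes, pre_samples=64, post_samples=64):
        self.trigger = trigger_bytes
        self.pre = deque(maxlen=pre_samples)    # rolling pre-event history
        self.post_left = 0                      # samples still to record post-event
        self.post_samples = post_samples
        self.events = []                        # frozen (pre, post) snapshots

    def sample(self, timestamp_ns, queue_depth_bytes):
        record = (timestamp_ns, queue_depth_bytes)
        if self.post_left > 0:                  # still filling a post-event window
            self.events[-1][1].append(record)
            self.post_left -= 1
        elif queue_depth_bytes >= self.trigger: # burst detected: freeze the history
            self.events.append((list(self.pre), [record]))
            self.post_left = self.post_samples - 1
        self.pre.append(record)                 # always keep the rolling history

# Usage: cap = MicroburstCapture(trigger_bytes=1_000_000); cap.sample(t, depth) per poll.
```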

 

A new generation of network capabilities

Gone are the days when the industry operated in silos. With its one unified architecture, Cisco Silicon One erases the hard dividing lines that have defined our industry for too long. Customers no longer need to worry about architectural differences rooted in past imagination and technology limitations. Today, customers can deploy Cisco Silicon One in a multitude of ways across their networks.

With the Cisco Silicon One G200 and G202 devices, we extend the reach of Cisco Silicon One with optimized high-bandwidth devices purpose-built for spine and AI/ML deployments. Customers can save money by deploying fewer, more efficient devices, enjoy new deployment topologies with ultra-long-reach SerDes, improve their AI/ML job performance with innovative load balancing and fault discovery techniques, and improve network debuggability with advanced telemetry and hardware analyzers.

If you have been watching since we first announced Cisco Silicon One in December 2019, it is easy to see that this is just the beginning. We look forward to continuing to accelerate the value we deliver to our customers.

Stay tuned for more exciting Cisco Silicon One developments.

 

Learn more about Cisco Silicon One architecture, devices, and benefits.

 


Additional Resources

Read my first blog on Silicon One: Building AI/ML Networks with Cisco Silicon One

 

 
