Internet2 partners with Ciena on 8.8Tbps upgrade to Optical Network
Well, the cat is out of the bag. Internet2 is going to use Ciena's ActivFlex 6500 platform to light its new optical network over the coming months. The press release is here.
We're extremely excited about the new possibilities and capabilities this platform will bring to the research and education community. We're going to skyrocket past our current optical capacity and move from roughly 100 Gigabits per second to up to 8.8 Terabits per second. That's 88 ITU Grid DWDM wavelengths, each running at 100Gbps.
Why this matters
For those not keeping score, most carrier backbones today are lit using parallel 10Gbps backbone links. Internet2 typically has two 10GigE links between its backbone routers for a total of 20Gbps. Some paths have 30Gbps and some have 40Gbps, but you get the idea. This is somewhat inefficient from an optical spectrum standpoint. You have a single pair of fiber between adjacent nodes (say, between a router in Chicago and a router in Kansas City). You light that fiber with optical equipment that can provide a finite number of wavelengths within that fiber. Each wavelength can carry different speeds of connectivity (2.5Gbps, 10Gbps, 40Gbps, 100Gbps). So, there's a practical limit to how much overall bandwidth you can push through that fiber (# of waves * maximum speed per wave). That means there are two ways to increase the aggregate bandwidth: add more wavelengths, or speed up the waves you have.
We're doing both. This system has the capability to provide 88 waves, which is more than a two-fold increase over our prior generation optical system. Each wave is also able to carry a 100Gbps signal, a ten-fold increase over our prior generation optical system. When you do the math (88 waves * 100Gbps per wave), that's 8.8Tbps of available bandwidth for the R&E community.
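The aggregate-capacity arithmetic above can be sketched in a few lines. This is just a back-of-the-envelope check; the prior-generation figures (roughly 40 waves at 10Gbps each) are inferred from the "more than two-fold" and "ten-fold" comparisons in the text, not exact specifications:

```python
def aggregate_capacity_gbps(num_waves: int, gbps_per_wave: int) -> int:
    """Total fiber capacity = number of DWDM wavelengths x speed per wavelength."""
    return num_waves * gbps_per_wave

# Prior-generation system (approximate, inferred from the comparisons above):
old_gbps = aggregate_capacity_gbps(40, 10)     # roughly 400 Gbps
# New system: 88 ITU-grid waves, each carrying 100GigE.
new_gbps = aggregate_capacity_gbps(88, 100)    # 8800 Gbps = 8.8 Tbps

print(f"old: ~{old_gbps} Gbps, new: {new_gbps} Gbps ({new_gbps / 1000} Tbps)")
```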
But the trickle-down benefits are even greater. Once you have an optical system that can provide 100GigE wavelengths, you can build an IP network atop that system that provides 100Gbps interconnects between the routers. This is important because it gets us away from striping bandwidth across parallel 10G links. In the R&E community this tends to be a bigger win than on a traditional commodity IP routed network. It comes down to flow distribution between routers. You can think of a flow on the network as a stream of data between any two machines that are talking to each other. A home user might initiate a flow of data to a Netflix server, or a researcher might try to download a 20 Terabyte data set from a server in Switzerland. In both cases, the flow tends to work best when it isn't spread out amongst parallel backbone links. In the home user's case, the flows tend to be fairly small: something on the order of 1-3Mbps. In the researcher's case, they may have an optimized, well-connected set of end-hosts that can transfer data at 8Gbps.
So, the R&E community can be a bit more demanding in terms of per-flow bandwidth capabilities. What happens when you get several of those researchers trying to transfer at 8Gbps between two backbone routers? The routers need to decide which of the parallel 10G links the data flows go over between any two cities. It's possible that two research flows at 8Gbps get placed onto the same 10Gbps link, and they both suffer reduced bandwidth. Enter 100GigE on the IP network. All of those researchers can easily transmit their 8Gbps flows without fear of congestion due to inefficient flow distribution.
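A toy sketch of why per-flow placement can go wrong: routers typically hash a flow identifier (something like the packet's address/port 5-tuple) to pick one parallel link, so packets of a flow stay in order on a single link. The hostnames and port numbers below are hypothetical, and real routers do this hashing in hardware on packet headers, but the collision behavior is the same idea:

```python
import hashlib

def pick_link(flow: tuple, num_links: int) -> int:
    """Deterministically hash a flow identifier onto one of the parallel links,
    mimicking per-flow load balancing: same flow always lands on the same link."""
    digest = hashlib.md5(repr(flow).encode()).digest()
    return digest[0] % num_links

# Three hypothetical 8 Gbps research flows between the same pair of
# backbone routers, identified by (src_host, dst_host, src_port, dst_port).
flows = [
    ("host-a", "host-x", 5001, 443),
    ("host-b", "host-y", 5002, 443),
    ("host-c", "host-z", 5003, 443),
]

# With two parallel 10 Gbps links, the hash may place two 8 Gbps flows on
# the same link, and both flows then suffer. A single 100 Gbps link removes
# the opportunity for that kind of unlucky placement.
placements = [pick_link(f, 2) for f in flows]
```

Because the hash is deterministic, an unlucky pairing of big flows onto one link persists for the life of those flows; it isn't transient congestion that drains away on its own.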
Why we're excited
We're obviously thrilled at the additional capabilities outlined above. As network engineers, we can sometimes get hung up on the network "piping", but the real excitement starts when people start to use it. As many have demonstrated, high bandwidth availability spurs innovation. The real thrill will come over the course of the installation, as we start to see new advances in high-speed networking and collaboration.