
Now that the Bandwidth Challenge is solved, what are we going to do with it?

Apr 21, 2011, by Doug Howell
Tags: 100 gigabit Ethernet, Advanced Networking, Campus IT, Community-Driven Innovation, imported, Innovation Platform, Internet2 Network, Research & Education Networks

[NOTE: This is a guest blog post from Rod Wilson, Senior Director of External Research at Ciena Corporation. We're very thankful for the time he's taken to share his thoughts with the Internet2 Community on the current challenges and strategies in higher-education optical networking.]

The arrival of 100Gbps long-reach transmission and switching has solved a myriad of well-publicized problems for communications service providers, including bringing relief to saturated fibre plant with nearly full links. Transporting the aggregate backhaul traffic of an increasingly hyper-connected world is a huge challenge, and 100G offers these providers a 10X bandwidth upgrade over the ubiquitous 10G channels deployed today. Gaining strategic advantage through new capacity in existing fibre plant, at reasonable cost, is a solution that excites communications service providers.

But what does the availability of 100G mean to educators and to users of advanced research and education networks like Internet2? The use models are both similar and different: similar in bringing higher capacity to aggregated connections and a growing number of users, but different in that e-science and high-capacity research applications actually require huge single-channel capacity. Unlike most commercial networks, R&E networks support institutions and researchers who want, and can use, 100G lightpaths for single research applications – something now at their disposal on the Internet2 network.

Science applications on high-performance computers are pushing the boundaries of our knowledge, and this requires distributing computing capacity around the globe – something that can only be achieved with high-capacity, resilient, scalable, switchable next-generation networks. These characteristics remove capacity bottlenecks and expand capabilities. A number of significant proof-point demonstrations over the past 20 months have highlighted the benefits these huge new capacities can bring to research. High-speed networks are essential for researchers exploring a future where the network enables local, regional, national and international research collaboration, including high-performance data distribution, next-generation video and virtualized data processing.

In June 2009, SURFnet, the Dutch research network, showcased live research application traffic on an international 100G trial between Hamburg and Amsterdam over 1,244 km of fibre. Last October, Canada's research network CANARIE conducted tele-medical training over a 1,300 km link between Ottawa and Chicago's StarLight gigapop. At the annual supercomputing conference, SC, 100G long-haul transmission was a feature of the SCinet network in 2009 and 2010, supporting numerous high-performance computing demonstrations. Also at SC10, a large group of contributors – including Internet2, CANARIE, the Dutch Research Consortium, ESnet, NASA, the National Oceanic and Atmospheric Administration (NOAA), the Louisiana Optical Networking Initiative (LONI), Northwestern University, US LHCNet, SCinet, Level 3 Communications, Ciena, Cisco, Infinera and Juniper Networks – delivered a groundbreaking 100 Gigabit Ethernet circuit between Chicago and New Orleans. 100Gbps of science data was transmitted between the two cities to showcase the kind of networking technology needed to support peta-scale computing.

There are some breakthrough technologies at play here. The seemingly incongruous goal is to communicate more bits, farther, for less money. The natural impairments of optical fibre – noise, distortion and dispersion – required significant R&D to overcome. Just to benchmark our progress over time: Nobel laureate Charles Kao achieved 1Mb/s over a 10 meter span in 1966; roughly twenty years later, Northern Electric (Nortel) launched a 565 Mb/s product with 40 km reach; twenty years after that, 1,000 km long-reach 40Gb/s technology was showcased; and today, 1,000+ km 100Gb/s technology is being deployed by Ciena in Internet2's network.

Research on high-speed digital signal processors (DSPs) and extremely fast algorithms for analogue-to-digital conversion led to the development of coherent receivers. The coherent receiver processor in Ciena's ActivFlex 6500 system, used in the new Internet2 network, performs 12 trillion operations per second and runs four simultaneous 6-bit analogue-to-digital conversions at a rate of 20 billion samples per second each! Incorporated into high-performance transmission systems, these state-of-the-art products require no special dispersion or PMD compensators and can support eighty-eight 50GHz wavelengths of 100 Gb/s each – or 100G wavelengths can be mixed and matched in the same fibre alongside 10G and 40G wavelengths.
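To put those numbers in perspective, here is a minimal back-of-the-envelope sketch in Python, built only from the figures quoted above (four 6-bit converters at 20 billion samples per second, and eighty-eight 100G wavelengths). The variable names are illustrative assumptions, not anything drawn from Ciena's products.

    # Back-of-the-envelope arithmetic using only the figures quoted above.
    # Names and structure are illustrative, not any vendor's API.

    ADC_LANES = 4                  # simultaneous analogue-to-digital converters
    SAMPLES_PER_SEC = 20e9         # 20 billion samples per second per converter
    BITS_PER_SAMPLE = 6            # resolution of each conversion

    WAVELENGTHS = 88               # 50 GHz channels on one fibre
    GBPS_PER_WAVELENGTH = 100      # line rate of each wavelength

    # Raw digitized data feeding the coherent DSP, in gigabits per second.
    adc_gbps = ADC_LANES * SAMPLES_PER_SEC * BITS_PER_SAMPLE / 1e9
    print(f"ADC front end: {adc_gbps:.0f} Gb/s of samples into the DSP")

    # Aggregate capacity of a fully loaded fibre, in terabits per second.
    fibre_tbps = WAVELENGTHS * GBPS_PER_WAVELENGTH / 1000
    print(f"Fully loaded fibre: {fibre_tbps:.1f} Tb/s")

The sketch prints roughly 480 Gb/s of raw samples into the DSP and 8.8 Tb/s of aggregate capacity on a fully loaded fibre – a sense of why coherent detection demands so much processing, and how much headroom 100G channels create.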

All of this solves the problem of how to communicate more bits of information farther at significantly lower cost.  That lower cost benefits commercial networks and R&E networks alike, even though their use models tend to diverge from there.

Like so many computing quests these days, in the end it's all about the app. If bandwidth is no longer a barrier, what would you do with it?