Internet2 SDN Overlay Network Ready to Support Community Research and Development

Dec 08, 2017, by John Hicks
Tags: Advanced Network Services, Internet2 Member, perfSONAR, Recent Posts

Joe Breen and Aaron Pabst from the Center for High Performance Computing at the University of Utah were interested in figuring out how to create a distributed hop-by-hop instrumentation suite that looked at dynamic “flows of interest,” irrespective of layer 3 and above protocols.

During the recent SC17 conference in Denver, Colorado, they used the Internet2 SDN overlay network to investigate end-to-end networking troubleshooting techniques using SDNTrace[1] and some perfSONAR features.

The requirements of this demonstration included:

  • Monitoring/injection into the data path based on particular “flows of interest”
  • Ability to dynamically put the daemons on the virtual path
  • Ability to give feedback to a user or program regarding each hop
  • Ability to explore what would be necessary to give feedback, regardless of the layer 3 and above protocols
  • Ability to ask “What really do we care about hop-by-hop?” to validate the virtual path
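The requirements above can be sketched in miniature. The following is a hypothetical Python sketch (all names and the counter-based loss check are illustrative assumptions, not the team's actual implementation): per-hop daemons match a layer-2 “flow of interest” — so the scheme works regardless of layer 3 and above protocols — and a collector compares hop-by-hop counts to give feedback on the virtual path.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class FlowOfInterest:
    # Layer-2 match only, so monitoring is independent of L3+ protocols.
    src_mac: str
    dst_mac: str
    vlan: int


class HopDaemon:
    """Stands in for a monitoring daemon placed dynamically on one hop
    of the virtual path (hypothetical; names are illustrative)."""

    def __init__(self, hop_name):
        self.hop_name = hop_name
        self.counters = {}

    def observe(self, flow, packets):
        self.counters[flow] = self.counters.get(flow, 0) + packets

    def report(self, flow):
        return (self.hop_name, self.counters.get(flow, 0))


def validate_path(flow, daemons):
    """Collect per-hop feedback; a drop in counts between adjacent
    hops localizes loss to that segment of the virtual path."""
    reports = [d.report(flow) for d in daemons]
    feedback = []
    for (hop, count), (prev_hop, prev_count) in zip(reports[1:], reports):
        if count < prev_count:
            feedback.append(
                f"loss between {prev_hop} and {hop}: {prev_count - count} packets"
            )
    return reports, feedback


# Usage: three hops on a virtual path; the last segment loses traffic.
flow = FlowOfInterest("aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb", vlan=100)
hops = [HopDaemon(h) for h in ("salt-lake", "kansas-city", "cleveland")]
hops[0].observe(flow, 1000)
hops[1].observe(flow, 1000)
hops[2].observe(flow, 900)
reports, feedback = validate_path(flow, hops)
print(feedback)  # loss localized between kansas-city and cleveland
```

The point of the sketch is the shape of the feedback loop, not the matching mechanics: each hop answers “what do we really care about hop-by-hop?” with a small report, and the collector turns adjacent-hop differences into user-visible feedback.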

The Internet2 SDN overlay network consisted of eight SDN switches from Corsa Technologies deployed in Seattle, Los Angeles, Salt Lake City, Kansas City, Houston, Cleveland, Atlanta, and New York. Next to each switch was a physical machine hosting virtual servers used for management, injection into the data plane, and other requirements.

The Corsa switches provided a hardware virtualization technology, called Virtual Forwarding Context (VFC), that enabled true separation between different “slices” of the SDN overlay network. This means that multiple virtual networks could run simultaneously on the same hardware using topologies with any combination of the deployment sites. Data plane connections were made through AL2S, while a separate private management network connected all switches and management servers on the control plane for each slice.
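The slicing idea can be illustrated with a minimal sketch (hypothetical code, not Corsa's API): each VFC slice is an independent virtual topology drawn over the same eight shared physical sites, so several slices with different topologies coexist on the same hardware.

```python
# The eight physical deployment sites shared by all slices.
SITES = {"seattle", "los-angeles", "salt-lake-city", "kansas-city",
         "houston", "cleveland", "atlanta", "new-york"}


class Slice:
    """A virtual network slice: its own topology over the shared sites
    (illustrative model only, not the actual VFC interface)."""

    def __init__(self, name, links):
        for a, b in links:
            # A slice may use any combination of the deployed sites.
            assert a in SITES and b in SITES, "slice must use deployed sites"
        self.name = name
        self.links = links


# Two slices running simultaneously on the same hardware, each with a
# different topology and fully separated from the other.
ring = Slice("ring-demo", {("seattle", "salt-lake-city"),
                           ("salt-lake-city", "kansas-city"),
                           ("kansas-city", "seattle")})
line = Slice("sc17-demo", {("salt-lake-city", "cleveland"),
                           ("cleveland", "new-york")})
print(ring.name, len(ring.links))  # ring-demo 3
```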

Figure 1 shows the basic topology of the demonstration. The Corsa switch locations from left to right are Salt Lake City, Cleveland, Kansas City, New York, and Atlanta. The demonstration tied the Internet2 SDN overlay network to the SC17 SCinet SDN-enabled infrastructure on the show floor using layer 2 connections from the Salt Lake City and Kansas City Internet2 PoPs.

Due to unanticipated issues, the demo did not produce all of the expected results. However, from Joe’s perspective, the demo was a great learning and experimentation opportunity. The Corsa equipment did what it was supposed to do and acted as a strong OVS implementation in the wide area network. The distributed switch and VM topology was very helpful in fleshing out issues in both code and thought processes.

Fortunately, the teardown of SCinet will not mean the teardown of the majority of this experiment’s infrastructure, and this work will continue into 2018. The Internet2 SDN overlay network will continue supporting this work and other experiments by the research and education community.


[1] Based on work by Deniz Gurkan, Nick Bastin, and students at the University of Houston.