
Future services on Janet


Design implications and what service innovations they might offer in light of the new architecture.


Future services on Janet

  1. Future Services on Janet (James Blessing, Deputy Director, Network Architecture)
  2. Management and Automation
  3. Automation • Ciena MCP: "Manage, Control, Plan"; formerly "Blue Planet MCP" by Cyan • Point-and-click provisioning within a single region • Netpath provisioning through the core needs to be "stitched" manually • APIs into MCP and the backbone routers could enable automation of this (see the sketch after this slide) • "Zero-touch provisioning" for NTEs • Native API to allow integrations
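To make the automation idea concrete, here is a minimal Python sketch of provisioning a Netpath-style service over a REST API. The base URL, endpoint path, payload fields, and token handling are all illustrative assumptions; they are not the documented Ciena MCP interface.

```python
"""Hypothetical sketch: automating circuit provisioning via a REST API.

The base URL, endpoint path, and payload schema are illustrative
assumptions, not the documented Ciena MCP API.
"""
import requests

MCP_BASE = "https://mcp.example.ja.net/api/v1"  # hypothetical endpoint
TOKEN = "..."                                   # obtained out of band

def provision_netpath(a_end: str, b_end: str, rate_gbps: int) -> str:
    """Request a point-to-point service between two NTEs; return its ID."""
    payload = {
        "serviceType": "netpath",
        "aEnd": a_end,
        "bEnd": b_end,
        "rateGbps": rate_gbps,
    }
    resp = requests.post(
        f"{MCP_BASE}/services",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["serviceId"]

if __name__ == "__main__":
    # Hypothetical NTE names, for illustration only.
    print(provision_netpath("nte-manchester-1", "nte-london-3", 10))
```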
  4. Automation: Community • Several talks in the Networkshop47 archives (Ansible, Python, other methods): https://www.jisc.ac.uk/events/networkshop47-09-apr-2019/programme • JiscMail NETWORK-AUTOMATION list: https://www.jiscmail.ac.uk/cgi-bin/webadmin?A0=NETWORK-AUTOMATION • #uk_education on networktocode.slack.com (join at http://slack.networktocode.com)
  5. Network Services
  6. Layer 2 VPNs • Netpaths • Netpath+ services are limited: 10GE/100GE only • 10GE relies on a 10x10GE mux • OTN on the backbone 6500s • Across the backbone, nothing other than 100G is "a wavelength"… and sometimes not even then
  7. Cloud providers • Dedicated connectivity when VPNs aren't enough • Microsoft Azure ExpressRoute: in service, several customers • Amazon Web Services (AWS) Direct Connect: not much demand so far • Google: 300G+ capacity for peering, but no use cases yet • Others? Let us know
  8. Where next with L2VPNs? • NOC turnaround is pretty quick; is there any requirement for portal-style provisioning? • Complexities at either end usually require human-to-human contact • Virtualised networks? • Network research • Multi-site campuses
  9. Layer 3 VPNs • Private IP networks: like a layer 2 VPN, but with BGP peerings • Can exchange private address space • Janet routers are currently limited to 32 L3VPNs (by licence) • Adding more is a "simple" additional licence; is there demand? • Examples: LHCONE; the Small Cell project; GÉANT MD-VPN (a sketch of a customer-facing peering stanza follows this slide)
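To illustrate the "layer 2 VPN, but with BGP peerings" idea, here is a minimal Python sketch that renders the customer-facing side of such a peering from a template. The VRF name, the member site's private AS number, the addresses, and the IOS-style syntax are illustrative assumptions, not a documented Janet configuration.

```python
"""Sketch: rendering a BGP peering stanza for an L3VPN hand-off.

All names, addresses, and the peer AS number are illustrative
assumptions; the template mimics generic IOS-style syntax.
"""
from jinja2 import Template

TEMPLATE = Template("""\
vrf {{ vrf }}
 router bgp {{ local_as }}
  neighbor {{ peer_ip }} remote-as {{ peer_as }}
  neighbor {{ peer_ip }} description {{ description }}
  ! private (RFC 1918) routes are exchanged inside the VRF only
""")

print(TEMPLATE.render(
    vrf="CUSTOMER-LHCONE",     # hypothetical VRF name
    local_as=786,              # Janet's AS number
    peer_as=64512,             # private AS for the member site (assumption)
    peer_ip="10.0.0.2",
    description="Member site L3VPN peering",
))
```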
  10. Virtualised Services
  11. Network Function Virtualisation • …spoken about NTE options earlier • Where to virtualise functions? • Core PoPs • Openreach exchanges (need to be wary of space requirements)
  12. Enhancing Current Services
  13. IPv6 • …are we done yet? • If not, what can we do?
  14. IPv6: https://stats.labs.apnic.net/ipv6
  15. Supporting larger-scale data transfers
  16. Our end-to-end performance initiative (e2epi) is helping our members make the most of their Janet connection • Focused mainly on larger-scale data transfers • Typically scientific data, such as synchrotron and cryo-EM (DLS), particle physics (LHC), astronomy (SKA), climate (CEDA), genomics, etc. • But the approaches can be applied more broadly to university and FE scenarios • Web site for more info (pointers to workshops and materials, case studies, best practices): https://www.jisc.ac.uk/rd/projects/janet-end-to-end-performance-initiative • E2EPI mail list, for community discussion of any issues around getting good end-to-end performance for networked applications: https://www.jiscmail.ac.uk/cgi-bin/webadmin?A0=E2EPI
  17. Providing advice to members • Use Janet for data transfers, not physical media! • 1TB per hour is ~2Gbit/s; 100TB per day needs ~10Gbit/s (see the arithmetic sketch after this slide) • Firewalls designed for thousands of small flows may not cope well with a small number of very large flows • Consider your campus architecture: add a "Science DMZ" or "Research Data Transfer Zone" (RDTZ) to separate your science traffic from general-purpose "business" traffic • Optimise data transfer nodes (DTNs) at your campus edge • Measure your network characteristics over time; identify capability and issues • Through the initiative we have interacted with 40-50 projects or organisations: advising on local network architectures, file transfer tools, data transfer node configurations, and network performance monitoring; helping to troubleshoot issues; discussing requirements and expectations with researchers and their communities; and assisting sites to establish network performance monitoring (esp. with perfSONAR)
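The rules of thumb above are simple arithmetic, checked here with a few lines of Python; this is a back-of-envelope helper, not Jisc tooling.

```python
"""Back-of-envelope check of the transfer-rate rules of thumb above."""

def required_gbits(terabytes: float, hours: float) -> float:
    """Sustained rate (Gbit/s) needed to move `terabytes` in `hours`."""
    bits = terabytes * 8e12            # 1 TB = 8e12 bits (decimal units)
    return bits / (hours * 3600) / 1e9

print(f"1 TB/hour  ~ {required_gbits(1, 1):.1f} Gbit/s")     # ~2.2 Gbit/s
print(f"100 TB/day ~ {required_gbits(100, 24):.1f} Gbit/s")  # ~9.3, so ~10G needed
```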
  18. Science DMZ (aka RDTZ) principles
  19. Four design principles (https://fasterdata.es.net/science-dmz/): 1. "A network architecture explicitly designed for high-performance applications, where the science network is distinct from the general-purpose network" 2. "The use of dedicated systems for data transfer" 3. "Performance measurement and network testing systems that are regularly used to characterize the network and are available for troubleshooting" (e.g., perfSONAR) 4. "Security policies and enforcement mechanisms that are tailored for high-performance science environments" (i.e., lightweight ACLs, not stateful firewalls running deep packet inspection)
  20. [Diagram: Science DMZ architecture example, from fasterdata.es.net. A 10G border router at the WAN edge feeds the Science DMZ switch/routers, which connect per-project DTNs (buildings A, B, and C) and perfSONAR nodes under per-project security policy; the enterprise border router/firewall separately protects the general site/campus LAN.]
  21. Measuring network characteristics
  22. • When investigating network throughput issues, having persistent network monitoring is really useful • The Science DMZ model recommends perfSONAR: https://www.perfsonar.net/ • Measures loss, latency, path, jitter, and (periodic) throughput • Open source; install as a Linux image or via packages • Thousands of nodes deployed worldwide • Jisc is involved in perfSONAR development through the GÉANT project • Web or CLI management; hooks for automation (Ansible); a sketch of querying its measurement archive follows this slide • Host it alongside systems of interest, e.g., data transfer systems • Can set up "meshes" between multiple sites, giving an at-a-glance view of network performance
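One of perfSONAR's automation hooks is its REST measurement archive (esmond). Below is a hedged Python sketch that lists recent throughput results from one. The host name is a placeholder, and the archive paths and field names can vary between perfSONAR releases, so treat this as illustrative rather than definitive.

```python
"""Sketch: querying a perfSONAR measurement archive (esmond) over REST.

The host is a placeholder; paths and field names may differ across
perfSONAR releases, so consult the docs for your version.
"""
import requests

ARCHIVE = "https://ps-archive.example.ac.uk/esmond/perfsonar/archive/"

def recent_throughput(source: str, dest: str, days: int = 7):
    """Yield throughput test metadata between two test points."""
    params = {
        "source": source,
        "destination": dest,
        "event-type": "throughput",
        "time-range": days * 86400,   # seconds of history to search
    }
    resp = requests.get(ARCHIVE, params=params, timeout=30)
    resp.raise_for_status()
    for meta in resp.json():
        yield meta["source"], meta["destination"], meta["metadata-key"]

# Placeholder test-point names, for illustration only.
for src, dst, key in recent_throughput("ps-a.example.ac.uk",
                                       "ps-b.example.ac.uk"):
    print(src, "->", dst, "metadata:", key)
```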
  23. Performance over time (Durham <> Birmingham)
  24. Using perfSONAR to evaluate a Science DMZ • We did some E2EPI work with Southampton University around their retrieval of experimental data from Diamond Light Source • Moving 10-40TB of data a few times a year • Researchers were using physical media; attempts to move the data over the network had been very poor, typically 200-300Mbit/s • We advised on optimising connectivity for an internal filestore • This led to researchers being able to copy 10TB of data overnight, typically obtaining 2-4Gbit/s using Globus transfer tools • We also ran a pilot DTN at their campus edge (Science DMZ) • perfSONAR enables a comparison between the two approaches
  25. Jisc London pS -> Soton internal 10G filestore: throughput and packet loss over a one-month period • Here the perfSONAR system is behind the Southampton campus firewall, alongside the internal filestore • Throughput peaks at night and falls during the day, due to load on the campus firewall and the resulting packet loss • The Christmas break stands out (though Christmas is a good time to move data!)
  26. Jisc London pS -> Soton external 10G DTN: throughput and packet loss over the same one-month period • Here the perfSONAR system is at the campus edge, alongside the DTN, outside the campus firewall; the DTN can be protected by ACLs • Throughput is now more stable, with no observable packet loss • This benefits both the wide-area transfers and the campus firewall's performance for general applications
  27. Jisc perfSONAR & DTN
  28. We offer two 10G-connected perfSONAR nodes for you to test against: • London: https://ps-londhx1.ja.net/toolkit/ • Slough: https://ps-slough-10g.ja.net/toolkit/ • You can freely configure perfSONAR tests against either of these (see the pscheduler sketch after this slide) • Smart pscheduler scheduling avoids throughput test conflicts. We provide VM-based hosting of meshes for communities, e.g.: • UK GridPP: https://ps-dash.dev.ja.net/maddash-webui/index.cgi?dashboard=UK%20Mesh%20Config • Now working with GridPP to assist their refresh of ~20 perfSONAR systems (we will have a recommended specification to share). We offer guidance on running perfSONAR on "small nodes": • Small form factor Gigabyte Brix platforms, 1GbE, ~£200 per system; we can advise on builds or send units to you to test • Useful for FE cases, or for gaining perfSONAR experience
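As a hedged sketch of what "configure tests against these nodes" can look like from a local perfSONAR host, here is a small Python wrapper around the pscheduler CLI. It assumes pscheduler is installed locally; the `pscheduler task throughput --dest` form is the common usage, but check `pscheduler task --help` on your release before relying on it.

```python
"""Sketch: running a one-off pscheduler throughput test from a local
perfSONAR host toward a Jisc perfSONAR node.

Assumes the pscheduler CLI is installed; verify flags on your release.
"""
import subprocess

def run_throughput_test(dest: str = "ps-slough-10g.ja.net") -> str:
    """Run a single throughput test and return pscheduler's text output."""
    result = subprocess.run(
        ["pscheduler", "task", "throughput", "--dest", dest],
        capture_output=True,
        text=True,
        check=True,   # raise if pscheduler exits non-zero
    )
    return result.stdout

if __name__ == "__main__":
    print(run_throughput_test())
```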
  29. A managed perfSONAR service? • This has been requested of Jisc by some members: they want perfSONAR, but want someone else to run it for them • We are exploring this; if interested, please get in touch • May be useful for transnational education (TNE) scenarios. Potential for container-based perfSONAR: • The development team supports a containerized version • Not generally recommended (bare metal is preferred), but it may be required for many cloud monitoring scenarios, or for TNE cases where shipping a box may not be practical. Automation! • The GÉANT project perfSONAR team uses Ansible to maintain its Performance Management Platform (PMP) perfSONAR systems • We can advise you on this if it's of interest to you
  30. We have deployed a reference DTN in our Slough DC: • Specified with NVMe SSD; can read/write at 10Gbit/s • Available to member sites for disk-to-disk tests • Co-located with our Slough perfSONAR system • Offers a Globus Connect endpoint (as used in the Southampton case). We also have a second, experimental DTN in Slough: • Allows tests of alternative protocols and tools, e.g., QUIC, TCP-BBR, WDT; happy to help members with tests here • Can also run one-off iperf tests from this system if required (a hedged iperf3 sketch follows this slide). We are looking at options to offer a 100G DTN and perfSONAR: • Important now that we have members connected to Janet at 100G • Also supports testing at speeds >10G, not just at 100G • Some good material in our recent 100GbE workshop: https://www.jisc.ac.uk/events/100-gigabit-ethernet-networking-workshop-04-jul-2018
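For the one-off iperf tests mentioned above, a minimal client-side sketch is shown below. The host name is a placeholder (agree a test window with the operator first), and it assumes iperf3 is installed locally with an iperf3 server listening at the far end.

```python
"""Sketch: a one-off iperf3 memory-to-memory test toward a DTN.

The host is a placeholder; requires iperf3 locally and an iperf3
server at the far end. Coordinate with the operator before testing.
"""
import json
import subprocess

def iperf3_gbits(host: str, streams: int = 4, seconds: int = 30) -> float:
    """Run iperf3 with parallel streams; return mean send rate in Gbit/s."""
    result = subprocess.run(
        ["iperf3", "-c", host, "-P", str(streams), "-t", str(seconds), "-J"],
        capture_output=True,
        text=True,
        check=True,
    )
    report = json.loads(result.stdout)  # -J gives a JSON report
    return report["end"]["sum_sent"]["bits_per_second"] / 1e9

# Placeholder host name, for illustration only.
print(f"{iperf3_gbits('dtn.example.ja.net'):.2f} Gbit/s")
```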
  31. Working with the GÉANT project
  32. New GÉANT project: January 2019 to December 2022 • Approx €120m, of which €50m is for the fibre IRU sub-project • All European national research and education networks (NRENs) take part • Provides networking between the NRENs, and network services to the NRENs and their members/customers, such as pan-European eduroam • Jisc is leading the new technologies and service development work package within the project (WP6) • We will be exploring how we can draw on the project outputs to benefit Janet and our members
  33. WP6 tasks and example takeaways • Task 1: Enabling technologies: white box (inc. P4 programming); QKD; OTFN; (petascale) DTNs; ultra-low latency (LoLa) • Task 2: Orchestration / virtualisation / automation: consensus building on approaches to automation; self-service portal for provisioning connectivity (e.g. Jisc might explore this for Netpath provisioning); Network Management as a Service (NMaaS); Campus Networking as a Service (CNaaS) • Task 3: Network management and monitoring: perfSONAR, WiFiMon (monitoring eduroam performance) • Example takeaways: improved automation capabilities; potential to offer some form of managed campus service; availability of useful tools, e.g., perfSONAR, WiFiMon
