ICPs' and DCOs' needs are unique, so why should they settle for anything less than DCI-optimized optical transport gear? They need something small and efficient today, yet able to scale indefinitely for tomorrow. It has to fit into smaller shared co-location facilities, yet they would like the same equipment to form the backbone of their network. They want to minimize the number of inventoried items, yet still maintain flexibility to support new client interfaces as they arise. They want their network managers to have full visibility of the network, including the transport gear, yet they don't want to deal with archaic physical network managers. In other words, they need the ADVA FSP 3000 CloudConnect.

DCI's Sweet Spot

With all the hype around 1RU monolithic pizza-box-style transport products, one would think that smaller must be better. But ADVA Optical Networking did not assume anything. We worked in close association with Internet Content Providers (ICPs) and Data Center Operators (DCOs) and asked them what was best. Their answer? Something around 4RU.

Turns out there are problems with 1RU and monolithic products. 1RU products require too many chassis to manage, as each chassis brings its own redundant power supplies, power connections, fan modules, and management interfaces to deal with. They are a great place to start a transport service, but once bandwidth demand outgrows the chassis capacity, scaling becomes awkward. Making an undersized chassis monolithic (fixed ports) only compounds the problem, as now a different chassis variant must be stocked for every port type. And as new client types are introduced, existing chassis must be ripped out and replaced with new ones.

The ADVA FSP 3000 CloudConnect was designed by and for DCI customers. The 4RU chassis is in the Goldilocks zone: large enough to both scale and reduce inventory sprawl, yet still small enough to squeeze into available rack space while minimizing co-location leasing costs. Where needed, it functions as a monolithic chassis, with the majority of common equipment integrated; but where needed, it also acts as a blade chassis, with redundant, removable shelf controllers, data ports on line cards, and rear-accessible redundant power supplies and connections. In this case, multiple personalities equal optimal configuration. And for DCI customers who have committed to blade server architectures and want a similar transport solution, the FSP 3000 is also available in an ETSI 300mm deep 12RU chassis.

FSP 3000 QuadFlex Card

The DCI application requires unprecedented levels of flexibility, and the FSP 3000 QuadFlex line card delivers. The single-slot line card offers an astounding 800Gbit/s of capacity, shared among 4 client ports and 2 network ports, while typically drawing only 140 Watts. The 4 client ports are each 100Gbit/s QSFP28 interfaces supporting 100Gbit/s Ethernet and OTU4 protocols, as well as SR4, LR4, CWDM4, and AOC reaches.

The dual network ports can be configured in a variety of ways to optimize the distance-vs-bandwidth tradeoff, as well as grow incrementally from 100Gbit/s to 400Gbit/s transport capacity. At the lower end, a single QSFP28 may be populated and a single network port used for 100Gbit/s transport over 3500 kilometers! This capacity may be doubled with no reach penalty. At the high end, all 4 QSFP28 clients are used with both network ports to create a pair of 200Gbit/s channels that can reach up to 800 kilometers. Unique to the ADVA FSP 3000 CloudConnect solution, the line card may also be configured for an in-between 300Gbit/s transport capacity. When thus configured, 3 QSFP28 clients are used in conjunction with a pair of network ports running 150Gbit/s per channel, for distances up to 1500 kilometers.

A Complete Solution, OLS Included

What good is having a small 1RU transponder chassis if it has to connect to a huge Optical Line Shelf (OLS)? The ADVA FSP 3000 CloudConnect includes channel multiplexers and optical amplifiers, all the equipment necessary for end-to-end transport in most DCI applications. In fact, a key part of the solution is a 1RU channel multiplexer that will grow from a single 100Gbit/s channel all the way to 128 200Gbit/s channels, without the need for unsightly rats' nests of cabling.

Another important part of the ADVA FSP 3000 CloudConnect solution is a single-slot-width EDFA optical amplifier that greatly simplifies the total DCI solution. All that is needed is simple cabling between the mux and amplifier. The output of the amplifier is ready for your fiber pair.
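As a sketch, the QuadFlex capacity/reach options described above can be summarized in a small lookup table. The mode names and the data structure below are illustrative, not ADVA terminology; the capacity and reach figures are the ones stated in the text:

```python
# Illustrative summary of the FSP 3000 QuadFlex configurations described
# above. Mode names and this dataclass are hypothetical; the figures are
# taken from the brochure text.
from dataclasses import dataclass

@dataclass(frozen=True)
class QuadFlexMode:
    clients_100g: int   # populated 100G QSFP28 client ports (max 4)
    channel_gbps: int   # line rate per network port
    network_ports: int  # network ports in use (max 2)
    max_reach_km: int   # stated maximum reach

    @property
    def capacity_gbps(self) -> int:
        # total transport capacity across the network ports
        return self.channel_gbps * self.network_ports

MODES = {
    "100G": QuadFlexMode(1, 100, 1, 3500),
    "200G": QuadFlexMode(2, 100, 2, 3500),  # doubled, no reach penalty
    "300G": QuadFlexMode(3, 150, 2, 1500),  # the "in-between" option
    "400G": QuadFlexMode(4, 200, 2, 800),
}

for name, m in MODES.items():
    print(f"{name}: {m.capacity_gbps} Gbit/s up to {m.max_reach_km} km")
```

The tradeoff is visible at a glance: every step up in per-channel rate buys capacity at the cost of reach.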
The Price, Power, Size They Need Today with the Scale They Will Need Tomorrow
While the word Terabits gets thrown around a lot in DCI these days, the truth is that Terabits is more of an end goal. For many ICPs and DCOs, today's requirements can be as little as 100Gbit/s. And just because ICPs and DCOs are growing like crazy does not mean they are willing to waste money, power, or space. The price, power, and space margins are razor thin in DCI, and an optical transport solution must deliver from the first Gigabit to the last Terabit. The ADVA FSP 3000 CloudConnect was designed by ICPs and DCOs to meet their needs of today, while being able to scale seamlessly for their needs of tomorrow.

From that First 100Gbit/s Port

The minimum system configuration of the ADVA FSP 3000 CloudConnect is a single chassis with a single FSP 3000 QuadFlex line card. Since the line card will support a single 100Gbit/s QSFP28 client feeding into a single 100Gbit/s network port, that is the minimum configuration. For customers who wish to start out with a 400Gbit/s network channel plan, the line card may be fully configured for maximum capacity and throughput, and pay-as-you-grow pricing plans are available, making the transition to 400Gbit/s networks as painless as possible. For customers who have the opposite problem and are not ready for 100Gbit/s granularity, 10Gbit/s multiplexing line cards are available.

This minimal configuration is able to grow from that first 100Gbit/s port to a total of 28x100Gbit/s client ports. This is sufficient bandwidth to fill up 14x200Gbit/s WDM channels. An interesting midway point is 3 line cards and an amplifier, providing 1.2Tbit/s of optical transport, enough to join the Terabit club.

Filling an Existing C-Band System

The majority of WDM transport networks today operate over a 40-channel plan in the C-band on 100 GHz grid spacing. Many customers want to move to the latest DCI optical transport technology, but they want to operate over their existing 40-channel grid. For this customer set, the ADVA FSP 3000 CloudConnect can be configured as 3 stacked 4RU chassis with a 40CSM channel multiplexer and a single amplifier line card. This configuration provides 8Tbit/s of optical transport over an existing 40-channel WDM grid.

Scaling to 25.6Tbit/s over a Single Fiber Pair

But some ICPs and DCOs are building global-scale computing clouds. These customers can never have enough transport bandwidth between data centers. The performance of their compute cloud is directly proportional to the interconnection bandwidth, and they could use 100 Terabits today if available. For these customers, the same ADVA FSP 3000 CloudConnect solution that begins at a single 100Gbit/s port can scale all the way to 256x100G clients, for a total of 25.6Tbit/s transported over a single fiber pair. What's more, the 25.6Tbit/s fits into a single 19" rack, including channel multiplexers and amplifiers.
FSP 3000 CloudConnect: Unprecedented Scalability

100G Clients   QuadFlex Line Cards   WDM Channels   Capacity (Tbit/s)   Rack Units
      1                 1                  1               0.1               4
      4                 1                  2               0.4               4
     12                 3                  6               1.2               4
     28                 7                 14               2.8               4
     80                20                 40               8                14
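The arithmetic behind this scalability is simple: each QuadFlex card terminates up to 4x100G clients onto 2x200G WDM channels. A back-of-the-envelope check (plain Python, not ADVA tooling):

```python
# Back-of-the-envelope check of the scaling figures: each QuadFlex line
# card maps 4 x 100G clients onto 2 x 200G WDM channels (400 Gbit/s).
def scaling(cards: int) -> tuple[int, int, int]:
    clients = 4 * cards          # 100G QSFP28 client ports
    channels = 2 * cards         # 200G WDM channels
    capacity_gbps = 100 * clients
    return clients, channels, capacity_gbps

print(scaling(7))   # fully loaded 4RU chassis
print(scaling(20))  # 3 stacked chassis over a 40-channel grid
print(scaling(64))  # full build over a single fiber pair
```

Seven cards fill one 4RU chassis (28 clients, 2.8 Tbit/s); twenty cards fill a 40-channel grid (8 Tbit/s); sixty-four cards reach the full 256 clients and 25.6 Tbit/s.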
Data Centers have state-of-the-art management software. Server loads are monitored and optimized, with loads continually being balanced across all available resources within their walls. However, what happens when the same is tried between Data Centers, outside their walls? A roadblock is hit. Traditional network management software is not compatible with Data Center software, thus preventing the latter from truly optimizing end-to-end multilayer packet flows. DCI-optimized optical transport such as the ADVA FSP 3000 CloudConnect removes that roadblock, allowing the optical transport piece to fit effortlessly and seamlessly into existing management systems, enabling global optimization of resource allocation.

Direct APIs to NE for Simple Connectivity

The level of sophistication required by DCI customers varies widely. For simple connectivity of point-to-point networks, direct APIs are often the right choice. In this case, network elements are provisioned using Command Line Interfaces (CLI), sometimes with scripting routines. One advantage of CLI is that it is human-readable text, as opposed to encoded SNMP. The ADVA FSP 3000 CloudConnect CLI is similar to popular router interfaces. If someone can provision a router, then they will have no trouble provisioning Terabit transport networks. Basic Representational State Transfer (REST) interfaces can be used here, as well.

Network-level APIs for Meshed Networks

When networks move beyond simple point-to-point links, and into more complex meshed network elements, then network-level APIs are required. The aforementioned RESTful interface is one example. Another is the Network Configuration Protocol (NETCONF), which uses Extensible Markup Language (XML) to communicate with network elements. NETCONF is already popular with routers and switches. RESTCONF attempts to combine the best attributes of both REST and NETCONF, by using a RESTful interface.

[Figure: Centralized network intelligence. Direct APIs to the NE (CLI, REST, SNMP, WebSockets) provide simple connectivity; the Network Hypervisor provides network-level APIs (REST, NETCONF/RESTCONF with YANG) for topology, path, service and alarms, translating between the ADVA data model and an open, generic data model.]

Network Hypervisor for Abstraction, Virtualization and Multi-Tenancy

When full optimization of end-to-end multilayer data flows is desired, then a greater level of intelligent control is required. The DCI transport network must become part of the overall resources being optimized, but this is prevented by the complexity inherent in optical transport. The solution is to add an additional hypervisor, a transport network hypervisor that abstracts the entire transport network and presents it to the orchestration layer in a format that it can understand. The ADVA FSP 3000 CloudConnect solution includes a transport network hypervisor with fully open Northbound interfaces. It can control anything, or be controlled by anything. It is not a network controller itself, but rather a translator that lets anyone or anything query and control the transport network. It sounds straightforward, but the end result is very powerful. For example, an orchestration application attempting to load balance across virtual machines at geographically dispersed locations can now include the transport network in its optimization. No longer are higher-layer applications held hostage to the transport network.

Another example of the power of abstracting the transport network with a network hypervisor is multi-tenancy. A typical Service Provider will use a common transport network to serve multiple clients. Currently, each of these clients may have bandwidth connectivity, but they must go through the Service Provider for all network management. A multi-tenancy-capable network hypervisor allows each client to have their own virtual transport network, based upon the real physical assets available. Each client sees their own and only their own portion of the network and can configure it within pre-determined configuration rules. The Service Provider maintains visibility of the entire network. This greatly facilitates a Service Provider selling as many services over a single infrastructure as possible. Customers who would have demanded their own manageable network can now be sold a shared network.
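As a sketch of what NETCONF-style provisioning of a transport service looks like in practice, the snippet below builds an <edit-config> payload for a WDM channel. The standard NETCONF base namespace is real; the transport model namespace, element names, and values are illustrative placeholders, not ADVA's actual YANG data model:

```python
# Build a NETCONF-style <edit-config> payload for provisioning a WDM
# channel. The model namespace and element names are hypothetical
# illustrations, not the ADVA data model.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"  # standard NETCONF base ns
MODEL = "urn:example:transport"                 # placeholder model ns

def build_edit_config(channel_id: str, rate_gbps: int) -> str:
    edit = ET.Element(f"{{{NC}}}edit-config")
    target = ET.SubElement(edit, f"{{{NC}}}target")
    ET.SubElement(target, f"{{{NC}}}running")       # edit the running config
    config = ET.SubElement(edit, f"{{{NC}}}config")
    ch = ET.SubElement(config, f"{{{MODEL}}}wdm-channel")
    ET.SubElement(ch, f"{{{MODEL}}}id").text = channel_id
    ET.SubElement(ch, f"{{{MODEL}}}rate-gbps").text = str(rate_gbps)
    return ET.tostring(edit, encoding="unicode")

print(build_edit_config("ch-1", 200))
```

In a real deployment this payload would be carried over a NETCONF session (or its JSON equivalent over RESTCONF); the point is that the configuration is structured, machine-readable data rather than screen-scraped CLI text.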
At ADVA Optical Networking we're creating new opportunities for tomorrow's networks, a new vision for a connected world. Our intelligent telecommunications hardware, software and services have been deployed by several hundred service providers and thousands of enterprises. Over the past twenty years, our innovative connectivity solutions have helped to drive our customers' networks forward, helped to drive their businesses to new levels of success. We forge close working relationships with all our customers. As your trusted partner we ensure that we're always ready to exceed your networking expectations. For more information on our products and our team, please visit us at: www.advaoptical.com.

ADVA Optical Networking SE
Campus Martinsried
Fraunhoferstrasse 9 a
82152 Martinsried / Munich
Germany

ADVA Optical Networking North America, Inc.
5755 Peachtree Industrial Blvd.
Norcross, Georgia 30092
USA

ADVA Optical Networking Singapore Pte. Ltd.
25 International Business Park
#05106 German Centre
Singapore 609916

info@advaoptical.com
www.advaoptical.com
Version 06 / 2015