DCI
Definitions:
Data Center Interconnect: the optical networking that links two or more physically separate data centers so they can exchange data and workloads.
Data Center Intraconnect: the short-reach connectivity within a single data center, building, or campus, such as rack-to-rack, row-to-row, or floor-to-floor links.
Cloud providers, data centers, and Internet exchanges have extreme requirements for network capacity and redundancy. Demand measured in terabits rather than gigabits per second translates into a need for many short-range 100G+ connections, along with a compatibility requirement for constantly evolving best-of-breed switches and routers. Cloud computing, Big Data, IoT, social media, and Web 2.0 continue to accelerate and drive a seemingly insatiable need for bandwidth. The result is a clear need to increase the speed of short-reach optical connectivity within and between data centers, and to deliver low-cost, high-speed 50/100/200/400/800G interconnects in ever smaller form factor modules.
To keep costs down, DCI optical networking equipment should be open, scalable, and optimized for simplicity, small footprint, low power consumption, and the lowest possible latency. Compact modular DWDM optical platforms were initially designed for point-to-point data center interconnect (DCI), but have since evolved to “sled”-based architectures, where each sled type can support multi-service clients (Fibre Channel, 10G, Ethernet), Ethernet-only clients (10 GbE/40 GbE/100 GbE), or pure photonic functions such as amplifiers and ROADMs; a toy model of this pay-as-you-grow approach follows Figure 1 below. Further, the category has expanded to include optical transceivers designed in accordance with the OIF 400ZR Implementation Agreement (IA), which are expected to have a significant impact on data center interconnect (DCI), data center intraconnect, and metro access applications going forward.
Compact modular and pluggable optical platforms accounted for 30% of total North American optical hardware shipments in the first quarter of 2019, with an anticipated 28% CAGR worldwide through 2023, because they offer:
- Modular pay-as-you-grow architecture
- Mix-and-match modular flexibility
- Lower operating costs
- Openness and programmability
- Simplified turn-up and lifecycle management
Figure 1: Compact modular pay-as-you-grow architecture
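To make the modular model concrete, here is a minimal Python sketch of a sled-based chassis filling up slot by slot; the sled kinds, capacities, and slot count are invented for illustration and do not reflect any vendor's product line.

```python
# A toy model of a "pay-as-you-grow" sled-based chassis: capacity is
# added sled by sled instead of being bought up front. Sled kinds and
# capacities are illustrative, not a specific vendor's product line.
from dataclasses import dataclass, field

@dataclass
class Sled:
    kind: str          # "multi-service", "ethernet", or "photonic"
    capacity_gbps: int

@dataclass
class Chassis:
    slots: int
    sleds: list[Sled] = field(default_factory=list)

    def add(self, sled: Sled) -> None:
        if len(self.sleds) >= self.slots:
            raise ValueError("chassis full: deploy another chassis")
        self.sleds.append(sled)

    @property
    def capacity_gbps(self) -> int:
        return sum(s.capacity_gbps for s in self.sleds)

box = Chassis(slots=4)
box.add(Sled("ethernet", 400))        # day 1: one 400G sled
box.add(Sled("multi-service", 200))   # later: Fibre Channel + 10G clients
box.add(Sled("photonic", 0))          # amplifier sled carries no client traffic
print(f"{box.capacity_gbps} Gb/s across {len(box.sleds)} of {box.slots} slots")
```

The point of the design is that capacity and cost scale together: a second chassis is deployed only when the first runs out of slots.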
Pluggable Optics Use Cases: 400ZR & PAM4
Figure 2: Transceiver line card with 400ZR amplified point-to-point interface
Figure 3: Router/switch line card with 400ZR DWDM interfaces
Figure 4: Router/switch line card with unamplified point-to-point interface
400ZR and PAM4 are intended for the use cases summarized below; the different 400ZR use cases can be addressed with different package or module implementations.
1. Amplified, point-to-point, DWDM noise-limited link, 120 km or less. Figures 2 and 3 show the two amplified point-to-point use cases (no OADM) identified for 400ZR. For amplified links, reach depends on the OSNR delivered to the receiver (noise limited). The targeted 400ZR reach for these applications is 80 km or more, with PAM4 best suited to shorter distances.
2. Unamplified, single-wavelength, loss-limited link. For an unamplified link, as shown in Figure 4, reach depends on the transmit output power, the receiver input sensitivity, and the channel's loss characteristics; this use case is best suited to high-capacity rack-to-rack, row-to-row, or floor-to-floor connectivity. Both reach regimes are illustrated in the sketch below.
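To make the two reach regimes concrete, the following Python sketch checks both link types at a few distances. The 58 dB constant is -10·log10(h·ν·Bref) for a 12.5 GHz (0.1 nm) reference bandwidth near 1550 nm; every other number (0 dBm launch power, 5 dB noise figure, 26 dB required OSNR, -10 dBm unamplified transmit power, -20 dBm receiver sensitivity, 0.2 dB/km fiber loss) is an assumed stand-in for the sketch, not a value taken from the OIF 400ZR IA.

```python
# Illustrative reach checks for the two 400ZR use cases above.
# All device numbers and thresholds below are assumptions.

def link_loss(distance_km: float) -> float:
    """Channel loss: ~0.2 dB/km of fiber plus 2 dB of connectors (assumed)."""
    return 0.2 * distance_km + 2.0

def osnr_db(p_launch_dbm: float, loss_db: float, nf_db: float) -> float:
    """Approximate received OSNR (0.1 nm reference bandwidth) for a single
    amplified span: OSNR = 58 + Plaunch - Loss - NF."""
    return 58.0 + p_launch_dbm - loss_db - nf_db

def noise_limited_ok(distance_km: float) -> bool:
    """Use case 1: amplified DWDM link; reach is OSNR (noise) limited."""
    return osnr_db(p_launch_dbm=0.0, loss_db=link_loss(distance_km), nf_db=5.0) >= 26.0

def loss_limited_ok(distance_km: float) -> bool:
    """Use case 2: unamplified single-wavelength link; reach is loss limited."""
    return -10.0 - link_loss(distance_km) >= -20.0  # Tx power - loss >= Rx sensitivity

for km in (2, 40, 80, 120):
    print(f"{km:>3} km  amplified: {noise_limited_ok(km)}  unamplified: {loss_limited_ok(km)}")
```

Run as written, the amplified check passes out to 120 km while the unamplified check fails beyond roughly 40 km, which is exactly the noise-limited versus loss-limited split described above.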
DCI Network Considerations
Distance
The farther apart data centers are located, the longer it takes data to travel between them. Latency is the measure of this delay, and minimizing it is critical to network performance. Choosing the shortest practical physical route for a connection helps reduce latency, as does selecting the fiber connectivity solution best matched to the business need. As the sketch below shows, propagation in the fiber itself sets a hard floor of roughly 5 µs per kilometer each way.
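A quick way to see what distance costs in delay is to compute the fiber propagation time directly; the sketch below assumes a group index of about 1.47 for standard single-mode fiber and ignores equipment latency, which only adds to the floor.

```python
# Propagation delay over fiber: light travels at c / n, where n ~ 1.47
# is a typical group index for single-mode fiber, giving ~4.9 us/km.
C_KM_PER_S = 299_792.458   # speed of light in vacuum, km/s
FIBER_INDEX = 1.47         # typical group index of single-mode fiber (assumed)

def one_way_latency_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds (fiber only, no equipment)."""
    return distance_km / (C_KM_PER_S / FIBER_INDEX) * 1000.0

for km in (10, 80, 120):
    print(f"{km:>3} km: {one_way_latency_ms(km):.3f} ms one way, "
          f"{2 * one_way_latency_ms(km):.3f} ms round trip")
```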
Capacity
While DCI connections allow different data centers to transfer data and workloads between one another, or within a campus or building, it's important to remember that not every facility has, or needs, the same capabilities; the best-fit solution should be evaluated for each site. A rough wavelength count, as sketched below, is a useful first sizing step.
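As a first-pass sizing exercise, the sketch below counts how many wavelengths a given demand requires; the channel plan (400G per wavelength on a 75 GHz grid, roughly 4.8 THz of usable C-band) is a typical assumption for 400ZR-class systems, not a universal constant.

```python
# Rough wavelength-count sizing for a DWDM DCI link. The channel plan
# here is an assumption: 400G per wavelength on a 75 GHz grid, with
# ~4.8 THz of usable C-band spectrum (about 64 channels).
import math

C_BAND_GHZ = 4800        # usable C-band spectrum (assumed)
GRID_GHZ = 75            # 400ZR-class channel spacing (typical)
GBPS_PER_WAVE = 400      # capacity per wavelength

def waves_needed(target_tbps: float) -> int:
    return math.ceil(target_tbps * 1000 / GBPS_PER_WAVE)

target = 12.8  # example site-pair demand in Tb/s
n = waves_needed(target)
print(f"{target} Tb/s needs {n} x {GBPS_PER_WAVE}G wavelengths "
      f"({n * GRID_GHZ} GHz of {C_BAND_GHZ} GHz C-band)")
# -> 12.8 Tb/s needs 32 x 400G wavelengths (2400 GHz of 4800 GHz C-band)
```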
Security
Any time data leaves the secure confines of a data center or colocation space, special care must be taken to protect it in transit. Data transmitted over a DCI connection needs to be encrypted, whether at the optical layer, via MACsec, or higher in the stack, and subject to detailed rules governing how it can be accessed and used. The sketch below shows the basic encrypt-and-verify pattern.
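As a minimal illustration of protecting data in transit, the sketch below applies AES-256-GCM, the authenticated cipher commonly used by MACsec and Layer 1 optical encryption, via Python's cryptography package. Real deployments run this in hardware at line rate, and key distribution and rotation (omitted here) are handled by a key-management system; the link label passed as associated data is a placeholder.

```python
# A minimal sketch of authenticated encryption for data in transit,
# using AES-256-GCM. Requires the 'cryptography' package; key
# distribution and rotation are out of scope here.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in practice, from a key-management system
aead = AESGCM(key)

payload = b"replicated block written to the remote data center"
nonce = os.urandom(12)                      # must never repeat for a given key
ciphertext = aead.encrypt(nonce, payload, b"dci-link-7")  # placeholder link label

# The receiving site decrypts and, crucially, verifies integrity:
plaintext = aead.decrypt(nonce, ciphertext, b"dci-link-7")
assert plaintext == payload
```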
Automation
Most DCI networks are too complex to be managed manually. Automation and open APIs are essential to moving data rapidly between workloads and across applications. Automating key systems minimizes human error, increases speed significantly, and frees personnel to focus on higher-value tasks. The sketch below gives the flavor of driving an optical device through an open API.
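To give a flavor of open-API automation, here is a hedged sketch of provisioning a coherent line interface over a generic RESTCONF-style call; the host, path, credentials, and leaf names are hypothetical and do not correspond to any specific vendor's data model.

```python
# A sketch of programmatic turn-up over an open API. The endpoint and
# payload follow a generic RESTCONF style; the host, path, and leaf
# names are hypothetical placeholders.
import requests

BASE = "https://transponder.example.net/restconf/data"
AUTH = ("admin", "secret")   # placeholder credentials

def set_channel(interface: str, frequency_thz: float, power_dbm: float) -> None:
    """Provision a coherent line interface: set carrier frequency and launch power."""
    payload = {"optical-channel": {"frequency-thz": frequency_thz,
                                   "target-output-power-dbm": power_dbm}}
    r = requests.patch(f"{BASE}/interfaces/interface={interface}",
                       json=payload, auth=AUTH, timeout=10)
    r.raise_for_status()

# Turn up one 400ZR wavelength without touching a CLI:
set_channel("1/1/c1", frequency_thz=193.7, power_dbm=-10.0)
```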
Cost
Building a new data center facility is an expensive undertaking, which is why more organizations are turning away from on-premises private data centers and moving toward third-party data center colocation solutions. With DCI networks, companies can effectively pool their existing resources to maximize the capabilities of the data centers at their disposal.