Abstract

We propose a uniform solution for a future client-side 400 G Ethernet standard based on the MultiCAP advanced modulation format, intensity modulation, and direct detection. It employs four local area network wavelength division multiplexing (LAN-WDM) lanes in the 1300 nm wavelength band and parallel optics links based on vertical cavity surface emitting lasers (VCSELs) in the 850 nm wavelength band. A total bit rate of 432 Gbps is transmitted over an unamplified 20 km standard single mode fiber link and over a 40 km link with a semiconductor optical amplifier. Transmission of 70.4 Gb/s over 100 m of OM3 multimode fiber using an off-the-shelf 850 nm VCSEL with a 10.1 GHz 3 dB bandwidth is demonstrated, indicating the feasibility of achieving 100 Gb/s per lane with a single 25 GHz VCSEL. In this review paper we introduce and present in one place the benefits of MultiCAP as a versatile scheme for use in a number of client-side scenarios: short range, long range, and extended range.

1. Introduction

Ever-growing, video-rich Ethernet traffic on client-side optical networks calls for high-speed, cost-effective optical transport data links. Standardization of client-side optical data links is critical to ensure compatibility and interoperability of telecom and datacom equipment from different vendors. As depicted in Figure 1, the IEEE standardization body classifies client-side links into three categories: short range (SR), long range (LR), and extended range (ER). SR links, which cover up to 100 m, are usually employed in data centers and central offices. LR links cover up to 20 km and are typically used to privately connect buildings of the same company or institution. ER links cover up to 40 km and are typically used to provide connectivity to customer-premises equipment (CPE) and for metro applications. The current work of the 400 Gbps Ethernet Study Group [1] and the wider research community focuses on these three scenarios [2]. An optical intensity modulation/direct detection (IM/DD) link offering 400 Gbps capacity with the use of advanced modulation formats is an attractive and easily adaptable solution for client-side links, such as inter- and intradata center interconnects.

The client-side links presented in this work are based on the multiband and multilevel approach to carrierless amplitude phase (CAP) modulation, MultiCAP [3]. In this review paper we demonstrate the flexibility of IM/DD MultiCAP based solutions for an SR 100 m link [4], an LR 20 km link, and an ER 40 km link [5]. An SR client-side link that achieves error-free 65.7 Gbps over 100 m of OM3 multimode fiber (MMF) using an 850 nm vertical cavity surface-emitting laser (VCSEL) is presented. Furthermore, two IM/DD LAN-WDM 432 Gbps links are described: an unamplified 20 km link for the LR scenario and a semiconductor optical amplifier (SOA) based 40 km link for the ER scenario. Four-lane LAN-WDM with 108 Gbps per lane is obtained using four externally modulated lasers (EMLs) in the O-band.

Figure 2 summarizes the capacity per lane reported at the considered transmission distances for several modulation formats. The short range (SR) area of Figure 2 shows the highest error-free bit rates achieved for 850 nm VCSEL based links. A bit rate of 70 Gbps over 2 m of OM4 MMF was achieved using 4-level pulse amplitude modulation (4-PAM) [6], 64 Gbps over 57 m of OM2 using nonreturn-to-zero (NRZ) coding [7], and 56 Gbps over 50 m of OM4 using 8-PAM [6]. All of these require very fast electrical interfaces and suffer from low tolerance to modal dispersion compared to pass-band modulation formats [8]. Using discrete multitone (DMT) in the 850 nm window enabled transmission over as much as 500 m of MMF at a bit rate of 30 Gbps [9].

The MultiCAP solution presented in this paper achieves error-free 65.7 Gbps over 100 m and 74.7 Gbps over 1 m using an 850 nm VCSEL with a bandwidth of 10.1 GHz. This solution has the prospect of achieving 100 Gbps over 100 m of MMF with emerging 25 GHz 850 nm VCSELs. It overcomes both electrical and optical bandwidth limitations towards a single-lane 100 Gbps active optical cable (AOC) and employs cost-efficient 850 nm MMF technologies. The 400 GE standard requirement can thus be met by employing parallel optical lanes.

The client-side links of long range (LR) and extended range (ER) are expected to meet the 400 Gbps capacity by using advanced modulation formats in combination with wavelength division multiplexing (WDM) [1]. The higher the capacity per lane, the lower the number of WDM lanes and therefore the number of transceivers. The LR and ER areas in Figure 2 show the highest capacities per lane reported in the O-band for different modulation formats. In the LR of 10 km, NRZ coding enables 25 Gbps [10], 4-PAM 50 Gbps [11], CAP-64QAM 60 Gbps [12], and DMT 106 Gbps [13]. MultiCAP achieves 108 Gbps per lane with 20 km reach [5]. In the ER of 40 km, NRZ coding allows 40 Gbps per lane [14]. Per-lane rates beyond 100 Gbps for ER are reached with DMT modulation [15] and MultiCAP [5]. IM/DD WDM-based 400 Gbps systems were demonstrated as feasible in several of the cited works (indicated in Figure 2). Eight lanes × 40 Gbps [14] or 16 lanes × 25 Gbps [10] were used to reach 400 Gbps with NRZ coding. A four-lane LAN-WDM 400 Gbps solution was demonstrated using DMT over 30 km [16] and MultiCAP over 40 km of standard single mode fiber (SSMF) [5]. Both assume a 7% FEC overhead.
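As a sanity check on the four-lane figures quoted above, the net payload rate of a 4 × 108 Gbps system after removing a 7% FEC overhead can be worked out as follows (illustrative arithmetic only, not from the paper):

```python
# Net data rate of a 4-lane LAN-WDM MultiCAP link with 7% FEC overhead
# (illustrative arithmetic based on the per-lane rates quoted in the text).
lanes = 4
line_rate_gbps = 108          # gross rate per lane
fec_overhead = 0.07           # 7% hard-decision FEC

gross_total = lanes * line_rate_gbps          # total rate on the fiber
net_total = gross_total / (1 + fec_overhead)  # payload rate after FEC

print(f"gross: {gross_total} Gbps, net: {net_total:.1f} Gbps")
```

The net total of roughly 403.7 Gbps is what clears the 400 Gbps objective, matching the quoted 100.9 Gbps per lane after FEC.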

The main contribution of this paper is the overview of a uniform MultiCAP based solution for short, long, and extended range client-side links. In all of these scenarios the same implementation scheme can be used. We review the previously presented experimental results, focusing on the implementation similarities between the different client-side scenarios, and include a detailed description of the performed experiments. Moreover, we present the first full description of the equalizer used in the previously reported experiments. Having a uniform modulation format across link types, lengths, and wavelength bands will not only allow for interoperability between equipment from different vendors but also reduce cost and complexity for the clients. In this way a newly developed client link can leverage the existing implementation of a different link type. We show that using the same transceiver structure and equalization technique satisfies the 400 GE capacity requirement in SR, LR, and ER. The MultiCAP advanced modulation format is combined with parallel optics in SR and with WDM in LR and ER. This easily applicable solution enables a simple upgrade from 100 Gbps to 400 Gbps in both 850 nm MM links and 1310 nm SM links. In the context of 400 Gbps Ethernet standardization we demonstrate that a MultiCAP based solution is feasible and worth considering for SR, LR, and ER.

2. Methods

Figure 3 depicts the experimental setup for all of the considered transmission scenarios. At the transmitter side, a 64 GSa/s digital-to-analog converter (DAC) with 5 effective number of bits (ENOB) is used to generate the MultiCAP signal. The transmitter consists of a linear amplifier and a laser: an 850 nm VCSEL in the SR scenario and EMLs in the LR and ER scenarios. The channel consists of 100 m of MMF for SR and 20 km of SSMF for LR; the ER channel consists of 40 km of SSMF with an SOA at the receiver. The receiver consists of a photodiode, transimpedance amplifiers (TIAs), and a digital storage oscilloscope (DSO). The 400 Gbps standard requirement in the SR multimode scenario is expected to be fulfilled by parallel optics; therefore, for the SR scenario we verify only one lane. In the LR and ER scenarios the expected solution to reach 400 Gbps is WDM. Hence, in the experimental verification of the LR and ER setup, four independent DAC channels drive four parallel lanes of the WDM transmitter. Additionally, the WDM transmitter includes a WDM multiplexer and the receiver a WDM demultiplexer.

2.1. Signal Generation

The signals are generated by a 4-output 64 GSa/s DAC with 5 ENOB. For signal generation, we choose a 6-band configuration of MultiCAP [3] with different modulation orders per band, resulting in different bit rates. Table 1 presents three configurations and Figures 4(c)–4(e) depict the corresponding electrical spectra. Each MultiCAP band is constructed from a pseudorandom bit sequence (PRBS) and delivers the baud rate described in Table 1. The total number of transmitted symbols is 49146. MultiCAP symbols are generated by upsampling to 16 samples per symbol and subsequent CAP filtering. The upsampling factor is an integer multiple of the baud rate of each subband; the upsampling procedure is explained in detail in [3]. The CAP filters are realized as finite impulse response (FIR) filters with a length of 20 symbols for the SR scenario and 30 symbols for the LR and ER scenarios. A roll-off coefficient of 0.05 is used at the transmitter. At the receiver, time-inverted versions of the CAP filters (roll-off = 0.09) are used to recover the symbol constellations. We use the MultiCAP features of power and bit loading: the constellation and power level differ for each band and are chosen empirically to best fit the signal-to-noise ratio (SNR) of the specific frequency band. The band configuration and power choice depend on the frequency response of the overall system.
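The per-band generation described above (upsampling followed by CAP filtering with an orthogonal in-phase/quadrature filter pair) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the square-root raised-cosine prototype, and the relative band frequency `fc_rel` are our assumptions, while the 16 samples per symbol, 20-symbol filter span, and 0.05 roll-off follow the text.

```python
import numpy as np

def srrc(span, sps, beta):
    """Square-root raised-cosine prototype: `span` symbols, `sps` samples/symbol."""
    t = np.arange(-span * sps / 2, span * sps / 2 + 1) / sps
    h = np.zeros_like(t)
    for i, ti in enumerate(t):
        if abs(ti) < 1e-12:
            h[i] = 1 - beta + 4 * beta / np.pi
        elif abs(abs(4 * beta * ti) - 1) < 1e-9:  # singularity of the formula
            h[i] = (beta / np.sqrt(2)) * ((1 + 2 / np.pi) * np.sin(np.pi / (4 * beta))
                                          + (1 - 2 / np.pi) * np.cos(np.pi / (4 * beta)))
        else:
            h[i] = (np.sin(np.pi * ti * (1 - beta))
                    + 4 * beta * ti * np.cos(np.pi * ti * (1 + beta))) \
                   / (np.pi * ti * (1 - (4 * beta * ti) ** 2))
    return h / np.sqrt(np.sum(h ** 2))        # unit-energy normalisation

def cap_band(symbols, sps=16, fc_rel=0.25, span=20, beta=0.05):
    """Modulate complex QAM symbols onto one CAP band.
    fc_rel is the band centre frequency as a fraction of the sampling rate."""
    h = srrc(span, sps, beta)
    n = np.arange(len(h))
    f_i = h * np.cos(2 * np.pi * fc_rel * n)  # in-phase CAP filter
    f_q = h * np.sin(2 * np.pi * fc_rel * n)  # quadrature CAP filter
    up = np.zeros(len(symbols) * sps, dtype=complex)
    up[::sps] = symbols                       # upsample by zero insertion
    return np.convolve(up.real, f_i) - np.convolve(up.imag, f_q)
```

A full MultiCAP transmitter would sum several such bands at different `fc_rel` values, with per-band power and constellation order applied beforehand.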

The frequency response of the SR system is presented in Figure 4(a). A 3 dB bandwidth of 10.1 GHz, a 10 dB bandwidth of 17 GHz, and a 20 dB bandwidth of 20.1 GHz are measured. This frequency response allows for the first and second MultiCAP band configurations presented in Table 1. The first configuration, shown in Table 1 and in Figure 4(c), enables a total throughput of 70.4 Gbps (65.7 Gbps after 7% overhead forward error correction (FEC) decoding), whereas the second configuration, shown in Figure 4(d), enables 80 Gbps (74.7 Gbps after 7% FEC). In these two cases, the 6 MultiCAP bands occupy a bandwidth of 21 GHz. The frequency response of the optical back-to-back for both the LR and ER systems is presented in Figure 4(b). A 3 dB bandwidth of 8.90 GHz, a 10 dB bandwidth of 17.35 GHz, and a 20 dB bandwidth of 24 GHz are observed. The bandwidth in this case is effectively limited by the bandwidth of the DAC. This response allows for the last band configuration from Table 1, presented in Figure 4(e), which enables a throughput of 108 Gbps (100.9 Gbps after 7% FEC). A bandwidth of 26 GHz is used for the MultiCAP bands.
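The empirical bit loading described above assigns higher-order constellations to bands with better SNR. A common rule-of-thumb alternative, shown here purely for illustration (the paper chooses orders empirically; the per-band SNR values and the 8.8 dB SNR gap below are hypothetical), is the gap approximation:

```python
import math

def bits_per_symbol(snr_db, gap_db=8.8):
    """Rule-of-thumb bit loading: b = floor(log2(1 + SNR/gap)), rounded down
    to an even number so square QAM constellations can be used.
    A gap of ~8.8 dB is a typical uncoded-QAM figure (illustrative only)."""
    snr = 10 ** (snr_db / 10)
    gap = 10 ** (gap_db / 10)
    b = int(math.log2(1 + snr / gap))
    return max(2, b - (b % 2))  # at least QPSK, even number of bits

# Hypothetical per-band SNRs falling off with frequency:
snrs_db = [28, 26, 24, 21, 18, 14]
loading = [bits_per_symbol(s) for s in snrs_db]
print(loading)
```

Note how the heavily suppressed highest band ends up carrying QPSK (2 bits per symbol), consistent with the behaviour described in the Discussion.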

2.2. Short Range

A commercially available 850 nm VCSEL is used in the SR scenario. Figure 5 shows the LIV curves and the optical spectrum measured for the VCSEL. The center wavelength of the VCSEL's spectrum at 8 mA bias is 857.2 nm. The DAC output is amplified to a 1.2 Vp-p signal that drives the VCSEL biased at 8 mA. An optical power of 6 dBm is launched into 100 m of OM3-compliant MMF, with a total link loss of 0.5 dB. The signal is detected with an 850 nm photodiode reverse biased at 4 V, amplified to 1 Vp-p, and digitized by an 80 GSa/s DSO with a resolution of 8 bits.

2.3. Long Range and Extended Range

The signals generated by the 4-output DAC are decorrelated with delay lines. The laser source used in these scenarios is an EML. The bias voltage and temperature characteristics of the EML employed in Lane 2 are presented in Figure 6. The EML bias voltage is −1.5 V and the MultiCAP signal has a CMOS-compatible peak-to-peak voltage of 2.5 Vp-p. The center wavelengths of the EMLs in Lanes 0 to 3 are 1294 nm, 1299 nm, 1303 nm, and 1308 nm. To keep the wavelengths stable, temperature control is applied. The output power of the EML in the tested lane is 6 dBm; the average output power of the EMLs ranges from 4 dBm to 6 dBm.

The optical signals are combined in a LAN-WDM multiplexer (MUX) with a channel spacing of 800 GHz (G.694.1 compliant) and transmitted over 20 km or 40 km (G.652 compliant) SSMF links. The MUX introduces 0.6 dB of insertion loss. The span losses are 7 dB and 14 dB, respectively. For the 40 km transmission case, an SOA with a noise figure (NF) of 6.5 dB is employed at the receiver, before demultiplexing. At the receiver side the signal is demultiplexed by a LAN-WDM demultiplexer (DEMUX), received by a photodiode (PD), and amplified by a transimpedance amplifier (TIA). The DEMUX introduces 0.9 dB of insertion loss. All of the components are 100GBASE-LR4 and 100GBASE-ER4 compatible.

2.4. Demodulation and Equalization

The receiver consists of several digital signal processing (DSP) blocks implemented in the MATLAB environment. CAP filtering, signal downsampling, phase offset removal, and signal normalization are performed as explained in [3]. Additionally, we implement an adaptive frequency domain equalization to mitigate linear impairments. The described adaptive decision directed (DD) equalization algorithm minimizes the received constellation cluster size and quantization noise.

We define the reference constellation by the centroids found using the k-means algorithm, which groups the received data into clusters [17]. This reference constellation initializes the described DD equalization algorithm; the cluster means serve as the points of reference (starting decisions).
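The centroid estimation step can be sketched as below, assuming Lloyd's k-means seeded with the ideal constellation points (the function name and iteration count are our choices, not from the paper):

```python
import numpy as np

def kmeans_centroids(rx, ref, n_iter=10):
    """Estimate constellation centroids from received complex samples `rx`
    with Lloyd's k-means, seeded with the ideal constellation `ref`."""
    cent = np.asarray(ref, dtype=complex).copy()
    for _ in range(n_iter):
        # assign every sample to its nearest centroid
        idx = np.argmin(np.abs(rx[:, None] - cent[None, :]), axis=1)
        for k in range(len(cent)):
            members = rx[idx == k]
            if len(members):
                cent[k] = members.mean()  # move centroid to cluster mean
    return cent
```

Seeding with the ideal constellation keeps the centroid ordering stable, so the resulting centroids can be used directly as decision points for the DD equalizer.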

We use an iterative equalizer in which every iteration performs the following steps. First, the error is calculated from the Euclidean distance to the closest centroid, as in a least mean squares (LMS) equalizer:

e(n) = ĉ(n) − y(n),  with  ĉ(n) = argmin_{c ∈ C} |y(n) − c|,

where C denotes the set of centroids of the reference constellation and y(n) is the received signal sample. For equalization we use a fractionally spaced FIR filter with 12 taps, a number determined empirically. The tap coefficients of the DD equalizer are updated according to

w(n + 1) = w(n) + μ e(n) x*(n),

where w(n) is the vector of equalizer coefficients, μ is the step size, and x*(n) is the complex conjugate of the input sample vector. Secondly, the received signal is passed through the equalizer. Finally, the iterative process reestimates the centroids of the equalized constellation and the described steps are repeated. It was experimentally determined that 2 iterations give satisfactory equalization; further iterations show no performance improvement. To assure faster convergence we implement a variable step size in the DD equalizer, updated according to the sign-based rule of [18], where sgn denotes the sign function.
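The iterative decision-directed equalization described above can be sketched as follows. This is an illustrative DD-LMS with a fixed step size (the variable-step refinement is omitted for brevity); the function signature and defaults are our assumptions, while the 12 taps, fractional spacing, and 2 passes over the data follow the text.

```python
import numpy as np

def dd_lms_equalize(x, centroids, n_taps=12, mu=2e-3, sps=2, n_iter=2):
    """Decision-directed LMS equalizer (illustrative sketch).
    `x` is the received, fractionally sampled signal (sps samples per
    symbol); decisions are the nearest points in `centroids`.
    Returns one equalized output per symbol."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                      # centre-spike initialisation
    y = []
    for _ in range(n_iter):                   # repeated passes over the data
        y = []
        for n in range(0, len(x) - n_taps, sps):
            xn = x[n:n + n_taps][::-1]        # regressor, newest sample first
            yn = np.dot(w, xn)                # equalizer output
            d = centroids[np.argmin(np.abs(centroids - yn))]  # decision
            e = d - yn                        # error towards nearest centroid
            w = w + mu * e * np.conj(xn)      # LMS tap update
            y.append(yn)
    return np.array(y)
```

In practice the centroids would be reestimated between passes, as the text describes; here they are held fixed for brevity.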

In order to quantify the improvement due to equalization, we calculate the BER for the equalized and nonequalized system at three different SNR values. Moreover, we present the BER calculated for the system with a standard frequency domain equalizer, namely, the multimodulus algorithm (MMA). Table 2 summarizes the BERs for all equalization and SNR scenarios. The decision directed (DD) k-means equalizer improves the BER in all three SNR scenarios; at an SNR of 20.6 dB, the equalizer improves the BER by 0.0181. In the following sections all of the presented results are equalized using the DD k-means algorithm.

After the signal is equalized, the error vector magnitude (EVM) is calculated and the bit error rate (BER) is computed: the received demodulated signal is cross-correlated with the transmitted signal and the errors are counted.
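The cross-correlation based error counting can be sketched as below; the helper name and the {0,1} → {±1} mapping are our choices for illustration:

```python
import numpy as np

def ber_with_alignment(tx_bits, rx_bits):
    """Estimate BER by first aligning the received bits to the transmitted
    sequence via cross-correlation, then counting errors."""
    tx_bits = np.asarray(tx_bits)
    rx_bits = np.asarray(rx_bits)
    tx = 2 * tx_bits - 1                 # map {0,1} -> {-1,+1}
    rx = 2 * rx_bits - 1
    # valid-mode cross-correlation: the peak index is the delay of rx in tx
    corr = np.correlate(tx, rx, mode="valid")
    lag = int(np.argmax(corr))
    aligned = tx_bits[lag:lag + len(rx_bits)]
    return np.count_nonzero(aligned != rx_bits) / len(rx_bits)
```

The ±1 mapping makes the correlation peak sharp: matching bits contribute +1 and mismatching bits −1, so the correct lag stands out even at moderate BER.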

3. Experimental Results

Figure 7 shows the measured BER curves for the SR scenario. We define the sensitivity at the BER threshold of the hard decision FEC code with 7% overhead; for the reported system this is a BER of 3.8 × 10⁻³ [19]. We observe sensitivities of 2.1, 4.7, and 5.4 dBm for the experimentally obtained 70.4 Gbps over 1 m, 70.4 Gbps over 100 m, and 80 Gbps over 1 m, respectively. The measured transmission penalty after 100 m of MMF is 2.5 dB.

For the LR scenario, the per-lane bit error ratio (BER) back-to-back (B2B) and after 20 km SSMF transmission (no SOA) is plotted in Figure 8(a). The received optical power is measured before the MUX. For all LAN-WDM lanes, BERs are below the 7% hard decision FEC limit, and no error floor is observed within the tested power range. No transmission power penalty is observed, and the worst-case receiver sensitivity at the FEC limit is −7.2 dBm both B2B and after transmission. The results for the ER scenario are presented in Figure 8(b). The received BER of a center lane and a side lane is plotted B2B and after 40 km SSMF transmission with all 4 LAN-WDM lanes simultaneously amplified by a single SOA before demultiplexing. The received optical power per channel is measured before the SOA. For comparison, the BER of a single lane (remaining three lanes switched off) is included in the graph. All LAN-WDM lanes were received with a BER below the FEC limit after 40 km SSMF transmission with a worst-case receiver sensitivity of −10.5 dBm. The presence of neighboring channels does not introduce a penalty in the 20 km scenario. In the 40 km scenario, we observe a 0.5 dB power penalty for the center lanes in the 4-lane case due to interlane modulation in the SOA. In both scenarios no penalty is observed in the side lanes.

In the results presented, the BER is an average over the BERs of all MultiCAP bands.

Finally, the power budget is evaluated in Table 3. For the SR scenario, the optical output power measured at the output of the VCSEL is 6 dBm. The sensitivity at the 7% FEC limit for 70 Gbps 1 m transmission is 2.4 dBm; the power budget for this scenario is therefore 3.6 dB. In the LR and ER scenarios, the optical output power per lane is 5.4 dBm, measured at the transmitter output, that is, after the MUX. The worst receiver sensitivity at the FEC limit is −7.2 dBm in the case of the 20 km transmission link with no amplification; the power budget of this link is therefore 12.6 dB. In the case of 40 km transmission with SOA based amplification, the worst receiver sensitivity at the FEC limit is −10.5 dBm, giving a power budget of 15.9 dB for the amplified 40 km link. The given receiver sensitivities are based on the received optical power (ROP) measured before the receiver: before the PD in the SR case, before the DEMUX in the LR case, and before the SOA in the ER case.
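The power-budget arithmetic can be sketched as follows. The launch powers and the SR sensitivity come from the text; the LR and ER sensitivities are back-computed from the quoted budgets, and the scenario labels are ours:

```python
# Illustrative power-budget arithmetic for the three scenarios (cf. Table 3).
# budget (dB) = launch power (dBm) - receiver sensitivity at the FEC limit (dBm)
scenarios = {
    "SR (100 m MMF, VCSEL)": {"p_out_dbm": 6.0, "sens_dbm": 2.4},
    "LR (20 km SSMF)":       {"p_out_dbm": 5.4, "sens_dbm": -7.2},
    "ER (40 km SSMF + SOA)": {"p_out_dbm": 5.4, "sens_dbm": -10.5},
}
for name, s in scenarios.items():
    budget = s["p_out_dbm"] - s["sens_dbm"]
    print(f"{name}: {budget:.1f} dB")
```

Subtracting the 7 dB and 14 dB span losses from the LR and ER budgets reproduces the 5.6 dB and 1.9 dB margins discussed below the table.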

The SR scenario represents a solution for an active optical cable for data centers; in terms of power budget, a margin is needed only to accommodate component heating and aging. In the case of LR and ER, the calculated margins of 5.6 dB and 1.9 dB are sufficient for client-side links.

4. Discussion

In the results presented for SR, the steep roll-off of the VCSEL's frequency response reduces the achievable capacity. We use the bit loading and power loading features of MultiCAP to overcome those limitations, at the cost of reduced sensitivity. As a consequence, increasing the capacity from 70 Gbps to 80 Gbps introduces a 3.1 dB sensitivity penalty, as shown in Figure 7. The bandwidth of existing VCSELs is not sufficient to support 100 Gbps per lane; with the proposed MultiCAP scheme, the emerging 25 GHz VCSELs are expected to satisfy the bandwidth requirement.

The performance of the EMLs used in LR and ER is sufficient to obtain 100 Gbps per lane after FEC. Moreover, LAN-WDM is shown to introduce negligible penalty both for the 20 km and for the 40 km link. The power budget calculation indicates the maturity of the solution, which allows for link losses of 12.6 dB and 15.9 dB in LR and ER, respectively.

The clear difference in performance and achievable capacity between the SR and the LR/ER scenarios is attributed to the system bandwidth. Even though the 3 dB and 10 dB bandwidths are similar for both systems, the 20 dB bandwidths differ noticeably (20.1 GHz versus 24 GHz). For this reason, the MultiCAP signal in SR is recoverable when it occupies up to 21 GHz, while the LR and ER signal can be recovered when it occupies 26 GHz (Figure 4). The last band is highly suppressed in all three scenarios, but thanks to the power loading and bit loading features of MultiCAP, the information in the last band can still be recovered when it carries QPSK.

The proposed approach for 400 Gbit/s client-side transmission links using the MultiCAP modulation format represents an easily applicable solution that is robust, simple, and flexible in upgrading from 100 Gbit/s to 400 Gbit/s while operating at the O-band LAN-WDM wavelengths. Moreover, we demonstrate the applicability of the MultiCAP solution in SR multimode (MM) links. We expect that with the higher bandwidth of upcoming 850 nm VCSELs this solution will enable 100 Gbps per lane and 400 Gbps using parallel optics. This technology potentially provides a bridge for the gray-optics approach to client-side, inter- and intra-data-center, access, and metro segments.

5. Conclusions

We present a uniform MultiCAP based solution for short range (SR) MM links, long range (LR) 20 km single mode (SM) links, and extended range (ER) 40 km SM links. MultiCAP's ability to assign parallel electrical interfaces of smaller bandwidth to different frequency bands overcomes both electrical and optical bandwidth limitations and eases DSP pipelining. Its pass-band nature and multiband structure allow optimal usage of the available bandwidth, maximizing the obtainable capacity. In the SR scenario, we have achieved record below-FEC bit rates of 65.7 Gbps over 100 m and 74.7 Gbps over 1 m for 850 nm MMF data links. For the long range and extended range criteria of the upcoming 400 GE standard, we present a MultiCAP LAN-WDM 400 Gbps solution which uses only commercial optical components from 100GBASE-LR4 and 100GBASE-ER4. 432 Gbit/s MultiCAP signals are transmitted over 20 km of SSMF without amplification and over 40 km of SSMF with an SOA. Interchannel mixing in the 40 km link and in the SOA is shown to be negligible for a MultiCAP IM/DD LAN-WDM system. The proposed MultiCAP approach is a robust and flexible scheme which can cover most of the client-side scenarios, including inter- and intradata center links and up to 40 km client-side links.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.