A centralized PCE implementation can consolidate multiple domains and network layers into a global network view for improved scalability and network-wide efficient resource usage. Find out more about its benefits from @TeresaMonteiro_ on the Infinera blog: https://t.co/YYgNkGdKOB
But service fulfillment in an optical transport network – that is, the activation of new digital services that may also involve the planning and provisioning of a new wavelength – is not as simple and painless as we would like. Traditionally, this is a slow, costly, and error-prone process, involving a few iterations between:
The operator’s network planning and design team, which will define the service requirements
The network infrastructure vendor’s network planning services team, which will use an expert-only offline network planning tool to compute alternative routes
The operator’s network operations team, which will deploy the final service in the field
This approach doesn’t quite meet the needs of today’s network operations and dynamic traffic patterns, where adding or changing services effectively and in real time is increasingly relevant.
When we request service path computation in a large transport network, we want a quick response, and we want to be able to define the path-finding criteria and ensure that the resulting path meets all service-level agreement parameters.
Additionally, we want to be able to find and provision optimal end-to-end routes across different equipment types and multiple technologies in a simple and seamless manner, a known limitation of most distributed control planes, where routing information across different network layers is not shared.
Hero to the rescue: the path computation element
A path computation element (PCE, as defined in IETF RFC 4655) is the way to address our needs. A PCE is an application that utilizes abstracted network topology and connectivity to compute a constrained path between two endpoints.
This type of context-optimized path determination offers increased flexibility and effectiveness in service routing, as it considers not only user-defined weights and constraints such as latency, modulation format, link utilization, shared risk link groups, etc., but also live network conditions – pretty much like a Google Maps navigation app for the network.
By complementing the PCE application with a provisioning engine that automates configuration of the resources in the network, the outcome is exactly what we are looking for: simple, fast, and reliable service fulfillment.
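To make the idea concrete, here is a minimal sketch of constraint-aware path computation (the topology, link metrics, and function names are hypothetical, not a real PCE implementation): it minimizes a chosen weight while enforcing a latency bound, much as a PCE applies user-defined weights and SLA constraints.

```python
import heapq

def pce_compute(topology, src, dst, weight="cost", max_latency=float("inf")):
    """Constrained shortest path: minimize `weight`, reject paths whose
    accumulated latency exceeds the SLA bound (simplified PCE sketch)."""
    # heap entries: (accumulated weight, accumulated latency, node, path)
    heap = [(0, 0.0, src, [src])]
    best = {}
    while heap:
        w, lat, node, path = heapq.heappop(heap)
        if node == dst:
            return path, w, lat
        if best.get(node, float("inf")) <= w:
            continue
        best[node] = w
        for nbr, metrics in topology.get(node, {}).items():
            nlat = lat + metrics["latency"]
            if nlat <= max_latency and nbr not in path:
                heapq.heappush(heap, (w + metrics[weight], nlat, nbr, path + [nbr]))
    return None

# Hypothetical four-node topology: cost and latency (ms) per link
topo = {
    "A": {"B": {"cost": 1, "latency": 10}, "C": {"cost": 5, "latency": 2}},
    "B": {"D": {"cost": 1, "latency": 10}},
    "C": {"D": {"cost": 5, "latency": 2}},
}
print(pce_compute(topo, "A", "D"))                 # cheapest path: A-B-D
print(pce_compute(topo, "A", "D", max_latency=5))  # latency-bound path: A-C-D
```

The same search, with a latency SLA applied, picks the more expensive but faster route, which is exactly the kind of context-optimized decision described above.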
But what about multi-domain path computation? In addition to the benefits above, a centralized PCE implementation offers the potential to consolidate multiple domains and network layers into a global network view, resulting in improved scalability and network-wide efficient resource usage.
Multi-domain path computation can be achieved with a hierarchical path computation element architecture. In this architecture, there is one parent PCE and multiple child PCEs, each responsible for a subdomain, as represented in Figure 1. All paths within a subdomain are computed by a child PCE, which has only information pertaining to its specific domain. The parent PCE maintains only high-level information about each subdomain but is fully knowledgeable of the connectivity between them. The parent PCE is able to perform centralized end-to-end path computation by orchestrating the subdomains, and it associates and coordinates the topology information and routing capabilities of the multiple child PCEs.
Figure 1: Hierarchical PCE
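The division of labor in this hierarchy can be sketched as follows (illustrative domain and node names, hop-count routing, and a single border link; a real hierarchical PCE handles far richer topologies and policies):

```python
from collections import deque

class ChildPCE:
    """Knows only its own subdomain's topology; answers intra-domain queries."""
    def __init__(self, links):
        self.adj = {}
        for a, b in links:
            self.adj.setdefault(a, set()).add(b)
            self.adj.setdefault(b, set()).add(a)

    def shortest_path(self, src, dst):
        # Breadth-first search: shortest intra-domain path by hop count
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nbr in self.adj.get(path[-1], ()):
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append(path + [nbr])
        return None

class ParentPCE:
    """Holds only inter-domain border links; delegates intra-domain legs."""
    def __init__(self, children, border_links):
        self.children = children          # domain name -> ChildPCE
        self.border_links = border_links  # (domain, node) -> (domain, node)

    def end_to_end(self, src_dom, src, dst_dom, dst):
        # Single-border-hop orchestration, for illustration only
        for (d1, n1), (d2, n2) in self.border_links.items():
            if d1 == src_dom and d2 == dst_dom:
                leg1 = self.children[d1].shortest_path(src, n1)
                leg2 = self.children[d2].shortest_path(n2, dst)
                if leg1 and leg2:
                    return leg1 + leg2
        return None

east = ChildPCE([("E1", "E2"), ("E2", "E3")])
west = ChildPCE([("W1", "W2"), ("W2", "W3")])
parent = ParentPCE({"east": east, "west": west},
                   {("east", "E3"): ("west", "W1")})
print(parent.end_to_end("east", "E1", "west", "W3"))
# -> ['E1', 'E2', 'E3', 'W1', 'W2', 'W3']
```

Note that the parent never sees the internal links of either subdomain; it only knows which border nodes connect them and stitches the child-computed legs together.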
A service that reaches its destination
Some requests for new digital services over an optical transport network will run against fully utilized wavelengths, requiring that new wavelengths be lit in the network. However, the transmission of a new wavelength along a chosen path is subject to optical impairments that need to be assessed ahead of provisioning.
That should not be an issue for a powerful PCE. A PCE should be capable of interfacing with an optical performance application that models transmission in the fiber layer and validates the optical feasibility of a path. In some cases, that optical validation may not even need the superior accuracy provided by a detailed optical transmission simulation – a summary of optical performance may be enough. For simplified operation, the PCE may simply store a set of feasible optical paths between each node pair, including the information on which wavelengths are valid for a given path and modulation format, previously checked by an offline planning tool.
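A minimal sketch of such a store of pre-validated paths might look like this (hypothetical node names, channel frequencies, and modulation labels; real data would come from the offline planning tool):

```python
# Illustrative store of pre-validated optical paths: for each node pair,
# the feasible paths with the wavelengths verified for a given modulation.
feasible = {
    ("A", "Z"): [
        {"path": ["A", "B", "Z"], "modulation": "PM-16QAM",
         "valid_channels": [193.1, 193.2, 193.3]},  # grid points in THz
        {"path": ["A", "C", "D", "Z"], "modulation": "PM-QPSK",
         "valid_channels": [193.1]},
    ],
}

def pick_path(src, dst, modulation, in_use):
    """Return the first pre-validated path with a free, valid channel."""
    for entry in feasible.get((src, dst), []):
        if entry["modulation"] != modulation:
            continue
        free = [ch for ch in entry["valid_channels"] if ch not in in_use]
        if free:
            return entry["path"], free[0]
    return None

print(pick_path("A", "Z", "PM-16QAM", in_use={193.1}))
# -> (['A', 'B', 'Z'], 193.2)
```

The lookup is trivial at provisioning time precisely because the expensive optical feasibility work was done ahead of time.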
It goes without saying that PCE and service provisioning applications can also be used as a basis for more complex automation tasks and network programming. One example is closed-loop automation processes, where events or patterns observed in the network trigger automated actions in the same network, such as service rerouting upon failure or in anticipation of it, increasing network availability. Choosing applications that provide support for open APIs ensures they can be smoothly integrated into any network operator’s software automation environment.
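As a toy illustration of the closed-loop pattern (the event names and handler are invented for this sketch; a real system would call the PCE and provisioning engine through its open APIs):

```python
# Minimal closed-loop sketch: network events trigger automated actions.
handlers = {}

def on(event):
    """Register a handler for a named network event."""
    def register(fn):
        handlers.setdefault(event, []).append(fn)
        return fn
    return register

def emit(event, **details):
    """Dispatch an observed event to all registered handlers."""
    return [fn(**details) for fn in handlers.get(event, [])]

@on("link_degraded")
def reroute(service, link):
    # In a real deployment this would invoke PCE path recomputation
    # and the provisioning engine to move the service.
    return f"recomputing path for {service} avoiding {link}"

print(emit("link_degraded", service="svc-42", link="A-B"))
```

The value of the pattern is that the observe-decide-act loop runs without human intervention, which is what makes proactive rerouting feasible at network speed.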
As WSTS 2021 draws to a close, Infinera's @JonBaldry70, who spoke at the event, reflects on his key takeaways. Learn about the impact of timing and synchronization across a huge number of applications beyond just 5G, plus much more: https://t.co/06cQ7dTFdp #WSTS2021 #5G
5G synchronization is a complex topic with many moving parts. In Infinera’s new e-book, we explore the challenges in delivering 5G-quality sync and how to create end-to-end sync strategies to meet 5G performance demands now and in the future. Download: https://t.co/h7ceV1Mfu0 #5G pic.twitter.com/GwLYcuGeHL
Infinera’s XR optics enables network operators like American Tower to dramatically simplify their network architectures and significantly reduce CapEx and OpEx costs while enhancing network scalability. Learn about our industry-first demonstration: https://t.co/WEOazfoN4M
TOKYO (Kyodo) -- Cosmic rays are causing an estimated 30,000 to 40,000 malfunctions in domestic network communication devices in Japan every year, a Japanese telecom giant found recently.
Most so-called "soft errors," or temporary malfunctions, in the network hardware of Nippon Telegraph and Telephone Corp. are automatically corrected via safety devices, but experts said in some cases they may have led to disruptions.
It is the first time the actual scale of soft errors in domestic information infrastructures has become evident.
Soft errors occur when the data in an electronic device is corrupted after neutrons, produced when cosmic rays hit oxygen and nitrogen in the earth's atmosphere, collide with the semiconductors within the equipment.
Cases of soft errors have increased as electronic devices with small and high-performance semiconductors have become more common. Temporary malfunctions have sometimes led to computers and phones freezing, and have been regarded as the cause of some plane accidents abroad.
Masanori Hashimoto, professor at Osaka University's Graduate School of Information Science and Technology and an expert in soft errors, said the malfunctions have also affected other network communication devices and electrical machinery at factories in and outside Japan.
There is a chance that "greater issues" will arise as society's infrastructure becomes "more reliant on electronic devices" that use such technologies as artificial intelligence and automated driving, Hashimoto said.
He emphasized the need for the government and businesses to further research and implement countermeasures.
However, identifying the cause of soft errors and implementing countermeasures can be difficult because, unlike mechanical failures, they are not reproducible in trials.
NTT therefore measured the frequency of soft errors through an experiment whereby semiconductors are exposed to neutrons, and concluded there are about 100 errors per day in its domestic servers.
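If the per-day server figure and the annual network-wide estimate describe roughly the same equipment population, the two reported numbers are consistent, as quick arithmetic shows:

```python
# Consistency check of the reported figures:
# ~100 soft errors per day across NTT's domestic servers
errors_per_year = 100 * 365
print(errors_per_year)  # 36500, within the reported 30,000-40,000 annual range
```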
Although NTT did not reveal if network communication disruptions have actually occurred, the company said it was "implementing measures against major issues" and "confirming the quality of the safety devices and equipment design through experiments and presumptions."
5G network deployments are well underway across the globe, with many network operators now preparing for the more advanced “Phase 2” 5G services such as ultra-reliable low-latency communications (uRLLC). Key to enabling these services are advanced radio access network (RAN) capabilities that push demanding features and performance requirements, such as significantly improved synchronization delivery to the cell tower, onto the underlying transport network.
At Infinera we’ve seen a distinct shift over recent months in network operators’ focus on synchronization distribution strategies and underlying network synchronization performance. In the first in a series of blogs covering this important topic, we’ll look at how the migration to 5G is changing network operators’ usage of global navigation satellite system (GNSS) within these networks.
The delivery of synchronization information in mobile networks is achievable through several different mechanisms and strategies. The uptake of these various options has varied across the geographic regions of the globe due to technical and geopolitical reasons. The main synchronization delivery options are:
Synchronization/timing signals from a GNSS, such as the U.S.’s Global Positioning System (GPS), Europe’s Galileo, Russia’s Global’naya Navigatsionnaya Sputnikovaya Sistema (GLONASS), or China’s BeiDou Navigation Satellite System, directly to every location requiring synchronization in the network
Synchronization/timing signals delivered from key centralized GNSS-enabled locations in the network through the backhaul/transport network to all other locations requiring synchronization
Synchronization/timing signals delivered through a totally separate synchronization delivery network
Each approach has its own strengths and weaknesses, and operators across the globe have built synchronization strategies to best suit their own environments. For example, GNSS via GPS at every location has historically been the primary mechanism in North America, whereas Europe predominantly uses synchronization through the backhaul network, with GNSS limited to key timing locations.
However, in recent years there has been an increase in the incidence of both deliberate and inadvertent hacking and jamming of GNSS as the use of cheap illegal GNSS jammers has increased and as some countries have even tested GNSS jamming and/or spoofing as part of military strategies. Due to the importance of network synchronization, these factors are leading some countries to introduce legislation to force protection and reliability into synchronization networks. It is possible to protect GNSS receivers from some of this jamming, but this greatly increases the cost per node.
Another consideration that mobile network operators must take into account as they move to 5G is the proliferation of cell sites, especially those in locations that are tough to reach from a GNSS perspective. 5G in dense urban environments will require millimeter-wave small cells that provide high-bandwidth connectivity over a shorter range, and operators are planning deployments of these in tough-to-reach locations such as deep inside shopping malls, cells per floor in high-rise office buildings, etc.
It should be stressed that while GNSS networks do occasionally suffer from interference and downtime caused by natural effects or deliberate jamming/spoofing, they are still highly reliable and form a key component of most synchronization networks. There are solutions to protect GNSS and deliver GNSS signals into tough locations, but overall, these factors are causing more and more operators that were previously GNSS-focused to plan to utilize network-based synchronization as a backup to GNSS at every node. In some cases, these operators plan to migrate fully to network-based synchronization, with GNSS limited to key centralized locations in the network that use these protection and resiliency methods to harden GNSS against attacks.
Network-based synchronization can take the form of either synchronization delivery through the transport network or through a totally separate dedicated synchronization delivery network. Both approaches provide the operator with the right level of synchronization performance, and backhaul network-based synchronization offers the opportunity for significantly better overall network economics as it avoids a complete overlay network for synchronization. Wherever possible, mobile network operators typically utilize backhaul-based synchronization delivery, but it should be noted that this is not always possible, and therefore, synchronization overlay networks cannot be discounted from the discussion.
Overall, there will always be a mix of strategies deployed across the globe, but the trend is moving more and more toward network-based synchronization delivery, and due to better economics, transporting this over the backhaul network is nearly always the primary option. Those network operators that have always deployed synchronization distribution through the transport network, and those now migrating to this strategy, need to now consider how their optical transport network can best support these challenging requirements economically.
For those readers who want to dive into this topic in more detail, our new Synchronization Distribution in 5G Transport Networks e-book provides a detailed overview of synchronization distribution challenges and standardization, along with an end-to-end synchronization distribution strategy that meets the demanding requirements 5G is driving into optical networks. I’m also presenting at this year’s Workshop on Synchronization and Timing Systems (WSTS) virtual event on March 30, where I’ll be outlining how we can deliver 5G-quality synchronization in real-world networks with an optical timing channel. I hope those interested in 5G synchronization distribution can join me at this event. You can register here.
Infinera and American Tower’s trial underscored XR optics’ ability to be inserted into existing single-fiber networks like PONs used for wireless backhaul by using current building blocks like PON filters and splitters. Fady Masoud shares how it works: https://t.co/zMiqDS4HaJ
Selecting the optimal coherent transceiver for a given application requires careful consideration of a range of factors. On the Infinera blog, Paul Momtahan looks at five CapEx factors to consider, starting with transceiver cost and cost per bit. Read now: https://t.co/QKyp17Colm
In this blog, the first in a multi-part series, I will examine the top five considerations related to CapEx.
Transceiver Cost/Cost Per Bit
A key consideration is the transceiver cost per bit for a given reach requirement, which is primarily a function of the transceiver’s wavelength capacity-reach: the maximum data rate that the transceiver can achieve for a given path through the optical network. If less than the full capacity is required, then the cost per bit needs to consider the required capacity rather than the maximum capacity. If the full capacity will only be required in five years, then a solution that enables CapEx to be more closely correlated with the actual required capacity, for example XR optics with 25 Gb/s increments, has an advantage.
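A quick worked example with purely hypothetical prices illustrates why granular capacity increments can lower the effective cost per bit when only part of a transceiver's maximum capacity is needed on day one:

```python
def cost_per_gbps(unit_cost, required_gbps):
    """CapEx per Gb/s of *required* capacity (not maximum capacity)."""
    return unit_cost / required_gbps

# Hypothetical prices, for illustration only
fixed_400g = cost_per_gbps(unit_cost=8000, required_gbps=100)  # buy 400G, use 100G
granular = cost_per_gbps(unit_cost=2500, required_gbps=100)    # buy 4 x 25G increments
print(f"fixed 400G transceiver: ${fixed_400g:.0f}/Gb/s")
print(f"25G-increment option:   ${granular:.0f}/Gb/s")
```

The point is not the invented prices but the denominator: when CapEx tracks the capacity actually required, the effective cost per bit in the early years drops substantially.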
Cost per bit is also a function of the unit cost of the optical transceiver. Factors that influence this include the cost of the individual components, packaging, and manufacturing. These costs will in turn be impacted by volumes, and the cost to the network operator will also be heavily influenced by competition, both direct (i.e., the same type of transceiver) and indirect (i.e., different types of transceiver), with more suppliers driving more competition, which typically reduces unit prices.
Cost of Xponders, Xponder Shelves, and Grey Interconnect Pluggables
In addition to the cost of the transceiver, the cost of any Xponders (transponders, muxponders, switchponders, etc.), the shelves that house them, and the grey interconnect pluggables on the client side of the Xponder and in the router should also be considered. Plugging the coherent transceiver directly into the router can eliminate the Xponder, Xponder shelf, and grey interconnect pluggable costs. If high-capacity wavelengths require statistical multiplexing or switching for efficient utilization, then this additional cost also needs to be considered.
Optical Line System CapEx
In brownfield scenarios, compatibility with the existing optical line system needs to be considered. As the spectral width of the wavelength is primarily a function of its baud rate, higher-baud-rate wavelengths may be incompatible with the existing DWDM grid. Even 100 GHz grid systems based on older filter and wavelength selective switching (WSS) technology will have a passband (~50 GHz) too narrow to support a 400 Gb/s wavelength with a baud rate of 60+ Gbaud. Other considerations include transmit power compatibility and out-of-band noise if colorless add/drop is a requirement, with smaller QSFP-DD-based pluggables typically facing some challenges in this regard as they lack the space for a micro-EDFA or a tunable optical filter, which help with transmit power and out-of-band noise, respectively. Another brownfield consideration is whether the new wavelengths will interfere with the existing wavelengths, thus requiring guard bands or reducing the performance of existing wavelengths.
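As a rough rule of thumb, the occupied spectrum is approximately the baud rate times one plus the roll-off factor; a simplified compatibility check against a legacy passband might look like this (the 15% roll-off is an illustrative assumption):

```python
def spectral_width_ghz(baud_gbaud, rolloff=0.15):
    """Approximate occupied spectrum: baud rate x (1 + roll-off).
    Simplified model for illustration; real filters add extra margin."""
    return baud_gbaud * (1 + rolloff)

# A 60 Gbaud 400G wavelength against a legacy ~50 GHz WSS passband
width = spectral_width_ghz(60)
print(f"~{width:.0f} GHz needed vs ~50 GHz passband -> fits: {width <= 50}")
```

Even this crude estimate makes the brownfield problem obvious: a 60+ Gbaud signal occupies well over the ~50 GHz passband of older 100 GHz grid systems.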
For greenfield scenarios, which will be the case if you do not already have flexible grid (or wide-passband fixed grid) optical line systems and wish to leverage 400G+ coherent technologies, a particular coherent transceiver may enable a more cost-effective optical line system, for example, a filterless broadcast one based on splitter/combiners or one with a reduced need for amplification. Higher-capacity wavelengths can also reduce the number of ROADM add/drop ports, thus reducing line system CapEx. Conversely, any extra line system costs incurred by a specific optical transceiver also need to be considered – for example, if a more expensive optical line system is required to compensate for any deficiencies in the coherent transceiver such as low transmit power or high out-of-band noise.
Fiber Costs: Spectral Efficiency and Fiber Capacity
The cost of the fiber itself is an important consideration, especially for long-haul, submarine, and fiber-constrained metros where the cost of acquiring and lighting new fibers is high. In these scenarios, spectral efficiency and fiber capacity can become key transceiver considerations. Spectral efficiency is largely a function of how many bits per symbol the modulation can deliver for a given reach requirement. A secondary consideration is how tightly you can pack the wavelengths together, which in turn is related to the shape of the wavelength (i.e., the percentage roll-off).
Figure 2: A wavelength with tight roll-off uses less spectrum
For example, a 400 Gb/s wavelength (~60 Gbaud, PM-16QAM) with no Nyquist shaping (i.e., 400ZR) uses more spectrum than an equivalent wavelength (~60 Gbaud, PM-16QAM) that uses Nyquist shaping and has a tight roll-off, as shown in Figure 2. With no Nyquist shaping and a relatively large roll-off, anyone deploying 400ZR has to choose between a 100 GHz grid with better performance but lower fiber capacity or a 75 GHz grid with higher fiber capacity but reduced reach due to inter-channel interference (ICI). Even Open ROADM CFP2s at 63.1 Gbaud typically require 87.5 GHz or more per channel in a mesh ROADM network. Another factor is how much correlation there is between the movement/drift of each wavelength: shared wavelocker technology that lets multiple wavelengths drift in unison enables better spectral efficiency.
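The fiber-capacity impact of channel spacing can be sketched with rough numbers (the roll-off values, used here to stand in for shaped vs. unshaped spectra, and the ~4.8 THz C-band figure are illustrative assumptions):

```python
def occupied_ghz(baud_gbaud, rolloff):
    """Approximate occupied spectrum: baud rate x (1 + roll-off)."""
    return baud_gbaud * (1 + rolloff)

def channels_in_band(spacing_ghz, band_ghz=4800):
    """Channels that fit in roughly the extended C-band (~4.8 THz)."""
    return band_ghz // spacing_ghz

# ~60 Gbaud, 400 Gb/s PM-16QAM; roll-off values chosen for illustration
for label, rolloff, grid_ghz in [("no shaping, 100 GHz grid", 0.60, 100),
                                 ("tight roll-off, 75 GHz grid", 0.05, 75)]:
    n = channels_in_band(grid_ghz)
    print(f"{label}: ~{occupied_ghz(60, rolloff):.0f} GHz occupied, "
          f"{n} channels, {n * 0.4:.1f} Tb/s fiber capacity")
```

Moving from a 100 GHz to a 75 GHz grid at the same data rate per wavelength buys roughly a third more fiber capacity, which is why tight roll-off matters in fiber-constrained networks.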
Fiber capacity also needs to consider the amount of spectrum that can be used on the fiber – for example whether a particular transceiver type can support an extended C-band or the L-band (i.e., C+L). Embedded optical engines are more likely to support the L-band, though L-band coherent pluggables are also possible. A related consideration is the amount of wasted spectrum due to wavelength blocking. This can be an issue when mixing wavelengths with different baud rates on the same optical line system, which complicates channel plans, especially in mesh ROADM networks.
Router CapEx
Router CapEx also needs to be considered. If the optical transceiver technology forces the purchase of new routers with the required port form factor, power, and thermals to support it, that can increase router CapEx. If the transceiver form factor (i.e., CFP2) or data rate (200 Gb/s for extended reach instead of 400 Gb/s) reduces router or line card efficiency in terms of faceplate density and/or throughput, that may also increase router CapEx.
On the other hand, a pluggable form factor and power/thermal envelope that is compatible with existing routers can avoid router upgrade costs. Router CapEx may also be reduced if the coherent transceiver enables more cost-effective router form factors (i.e., high-density QSFP-DD only) or the elimination of intermediate switch/router aggregation layers. Another factor to consider is load balancing efficiency, due to well-known hashing algorithm limitations in load-balancing mechanisms such as link aggregation (LAG) and equal cost multi-path (ECMP); a smaller number of high-capacity wavelengths will typically be more efficient than a large number of lower-speed wavelengths.
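The load-balancing point can be illustrated with a toy simulation of static per-flow hashing (the traffic figures are invented, and real LAG/ECMP behavior depends on the hashing algorithm and flow mix):

```python
import random

def worst_utilization(num_links, link_gbps, flows, seed=1):
    """Pin each flow to one link (static ECMP/LAG-style hashing) and
    return the busiest link's utilization. Simplified toy model."""
    rng = random.Random(seed)
    load = [0.0] * num_links
    for flow_gbps in flows:
        load[rng.randrange(num_links)] += flow_gbps
    return max(load) / link_gbps

rng = random.Random(42)
flows = [rng.uniform(1, 10) for _ in range(200)]  # ~1,100 Gb/s total demand

# Same aggregate capacity, different wavelength counts:
print(f"16 x 100G: busiest link at {worst_utilization(16, 100, flows):.0%}")
print(f" 4 x 400G: busiest link at {worst_utilization(4, 400, flows):.0%}")
```

With many small links, hash collisions tend to push some links well above the average load while others sit underused; aggregating the same traffic onto fewer, larger wavelengths smooths out the imbalance.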
So, to summarize, if you want to minimize CapEx, you should consider the costs of the transceivers themselves but also any additional costs or savings related to the Xponders, grey interconnect pluggables, optical line system, fiber, and routers. In the next blog in this series, I will move to the key OpEx considerations for next-generation coherent transceiver selection.
| Light Reading News Analysis Ken Wieland, contributing editor 3/19/2021
Rob Shore, SVP of marketing at US-based Infinera, told Light Reading he expected first commercial deployments of the company's prototype XR Optics tech sometime next year.
He was also hopeful of announcing an XR Optics industry consortium of some description, comprising service providers, technology partners and even standards organizations, "within a matter of months."
With an eye on ZR+ optics, another 400G technology, Shore is keen to highlight XR Optics' "pluggable" credentials.
Infinera's XR optics enables a single transceiver to generate numerous lower-speed subcarriers that can be independently steered to different destinations. (Source: Infinera)
"ZR+ has generated a fair amount of press coverage, but there's really nothing special about it except industry standardization," asserted Shore.
"What we want to do with XR Optics is rather than just release a technology, and hope people take it, is to build an industry coalition."
Some progress has already been made on the coalition front, through partnerships with Lumentum and II-VI, although these were announced over a year ago.
Shore was nonetheless confident that industry momentum was swinging the way of XR Optics. "We've got a whole host of other equipment manufacturers and sub component manufacturers on the hook here as well," he said.
Shore was speaking to Light Reading after Infinera announced yet another successful field trial of XR Optics, but this time – somewhat unusually – with a towerco in the shape of American Tower.
Shore asserted that the proof-of-concept, which took place in Colombia, "proved once again that XR Optics' signals can coexist with PON architectures." Infinera has been involved in nearly two dozen XR Optics trials with operators globally, including BT. Only a few days prior to the American Tower PoC, the UK's Virgin Media also put XR Optics through its paces.
"XR optics is the only coherent point-to-multipoint solution being proposed enabling significantly greater capacity [400G and above]," says David Welch, Infinera's founder and chief innovation officer.
"XR optics also enables efficiencies and network simplification beyond access by enabling a single transceiver to aggregate traffic from multiple lower speed transceivers anywhere in the network."
Through a glass darkly
How quickly XR Optics can gain market traction is open to debate. Heavy Reading's Sterling Perrin acknowledges the progress being made by Infinera through its various trials, but still views XR optics as very much a "future-looking technology."
"XR optics is an interesting adaptation of Nyquist subcarriers in coherent transmission that allows the individual subcarriers to be individually routed, but, at increments of nx25G, this is at least a generation ahead of next-gen PON variants," he says.
Julie Kunstler, a principal analyst at research firm Omdia, a Light Reading sister company, pointedly notes that because XR Optics is quite new, the ecosystem is inevitably immature. "It's too early to forecast a cost curve," she said.
Last week I was lucky enough to participate in the season finale of EllaLEAKS, a series of films that have helped to document the whole construction process. It was an amazing event, and I was able to talk about the project with EllaLink’s Chief Marketing and Sales Officer, Vincent Gatineau, afterwards.
Geoff: It’s great to talk to you, Vincent, and thank you for the invitation to get involved in the EllaLEAKS finale event. I have to say, having experienced so many virtual webinars over the past year, I thought it was a really great event, with a fantastic look and feel, and very professionally produced. Congratulations!
Vincent: Thank you, Geoff. It was thanks to a lot of hard work by so many people, and we’re delighted with the number of attendees at the live event. As you mentioned, we also have the whole series of films online for people to check out, and I think it’s really interesting to see the process of laying the cable, building the landing stations, and so on. We look forward to next season, which will kick off with more stories from our partners.
Geoff: Could you summarize EllaLink for us all?
Vincent: Sure. We start with a high-performance submarine cable with four fiber pairs on the trunk that follows a direct route from Portugal to Brazil. But a key aspect is that this is about joining two sets of communities that share common languages and cultures – predominantly Portuguese and Spanish, of course. The cable has been installed with a number of branching units, and will initially connect to Cabo Verde and Madeira. The system has also been designed to accommodate future subsea extensions to the Canary Islands, Morocco, Mauritania, French Guiana, and Southern Brazil. Using our fiber ring, we can connect to Madrid and Lisbon and extend to Marseille, where we hook into the massive Mediterranean cable systems that take us to the Middle East and onward to Asia. People can check out these routes on the interactive Telegeography Submarine Cable Map.
Geoff: Low latency was a big theme of the webinar…why is it so important?
Vincent: It’s about the way people are using networks today. We often think of financial trading needing low latency, but online gaming is growing rapidly across Latin America, and gamers these days are playing with other people from around the world. The response times for social media applications are also critical because of the way these applications are funded through online advertising. As our good friend Ivo Ivanov, CEO of DE-CIX, says, “latency is the next currency.”
Geoff: And how much of a reduction can you achieve? After all, we’re ultimately limited by the speed of light.
Vincent: Yes, and that’s why our direct routing is critical. To make use of high-capacity cables, data today has to go from Europe to the U.S. and then down to Brazil. The EllaLink route is only half the distance, so that means half the latency – we’re looking at less than a 60-ms round trip delay between Portugal and Brazil. We also offer direct, all-optical connections from data center to data center, glassing through at the landing station and avoiding any added latency from OTN switching. And our ability to do that is really enabled by the high performance we’ve seen from Infinera’s ICE6 technology.
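A back-of-the-envelope check supports the quoted figure, assuming a roughly 6,000 km direct route and light traveling at about c/1.47 in fiber (both illustrative assumptions):

```python
# Rough round-trip-time estimate for a direct Portugal-Brazil route
route_km = 6000          # assumed route length, for illustration
speed_km_per_ms = 204    # light in fiber: ~c/1.47 ~ 204 km per millisecond
rtt_ms = 2 * route_km / speed_km_per_ms
print(f"~{rtt_ms:.0f} ms round trip")  # ~59 ms, in line with the <60 ms figure
```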
Geoff: Yes, I think the performance of ICE6 has been a revelation in so many cases. What are you expecting for the cable system?
Vincent: The conservative estimate is for an end-of-life cable capacity of about 100 Tb/s on the trans-Atlantic section, and that’s 25 Tb/s per fiber pair. We can also support a range of services from 1 GbE up to 400 GbE, and we offer an open cable solution, with what will look like a virtual fiber pair to a customer. They can choose whichever transponders they wish, but the spectrum will be managed and kept stable by the Infinera Intelligent Power Management solution.
Geoff: In fact I wrote a blog about the importance of active power management that people can refer to here. You mentioned glassing through at the landing station. Wouldn’t it be easier to just put the data center where the cable lands?
Vincent: This point very much relates to which came first, the cable or the data center? Is the cable system being built to bring connectivity to an underserved area, in which case it comes before the data center, or are you building a cable system to connect into existing data center infrastructure? So glassing through at the landing station is ideal, but not possible for every system. There is a symbiotic relationship between data centers and cable systems, and in the case of EllaLink, Fortaleza is an area already rich in cable connectivity, and in Sines we deliberately selected a new site to offer diversity from the existing cable systems landing in Portugal. Where you have a busy landing location, there can be a danger of cables crossing in shallow waters and exposing the cable to hazards like anchors or fishing nets. We took a lot of care in Sines to make sure that we only cross other cables in deep ocean locations. Where we needed extra shallow water protection, we used a technique called horizontal directional drilling (HDD) to help protect the cable by burying it deep under the seabed.
Geoff: I saw the video you posted on the HDD technique, and also the construction of the cable landing station at Sines – it’s really an amazing engineering project. And I know you also have a lot of research and educational involvement on EllaLink.
Vincent: We do. One of our major anchor customers is the BELLA Consortium, who provide for the long-term interconnectivity needs of the European and Latin American research and education communities. I’m also excited to say that GÉANT and EMACOM have established the EllaLink Geolab, an initiative that aims to provide the scientific community with real-time, accurate, and relevant data on seabed conditions along the cable route. EllaLink is the first commercial telecoms submarine cable in the world to integrate SMART cable concepts into its design.
Geoff: It’s fascinating stuff, and applications like earthquake monitoring would have the potential to save so many lives, especially as sea levels rise due to climate change. Vincent, thank you for the opportunity to participate in the season finale, and congratulations on an incredible submarine network project!
Vincent: Thank you, Geoff, and I’m looking forward to a long partnership with Infinera.
Vincent Gatineau is the Chief Marketing and Sales Officer for EllaLink. Vincent was part of the sales and marketing team of Alcatel Submarine Networks for nine years and has been with the EllaLink project since its earliest days, among other major system developments. Previously, Vincent held various international positions within the Alcatel-Lucent group in India and Chile. Vincent has an engineering degree from Institut Mines Telecom Lille Douai. He is fluent in French, English, and Spanish. Infinera thanks him for his contribution.