
Technology Stocks: Infinera


From: FUBHO | 3/6/2021 12:29:03 AM | of 4343



Virgin Media tests Infinera’s XR optics on PON infrastructure
Stephen Hardy

lightwaveonline.com





Infinera (NASDAQ: INFN) has announced a second major trial of its point-to-multipoint XR optics technology in the UK. Following a demonstration with BT (see “BT models, lab trials Infinera’s XR optics”), the company has revealed a trial with Virgin Media that saw XR optics applied to a PON infrastructure in Reading, UK. The results indicated the ability to support symmetrical transmission rates as high as 400 Gbps over the fiber to the premises (FTTP) network, Infinera says.

The XR optics concept, introduced in the fall of 2019 and still in the prototype phase, leverages the ability to share the capacity of a single coherent port among multiple endpoints (see “Infinera unveils XR optics single-source coherent point-to-multipoint transmission technology”). As a PON infrastructure works along somewhat similar principles, XR optics would appear to be a natural fit for such networks.

Virgin Media has been willing to trial a variety of technologies to boost the capacity of its PONs; for example, the company trialed 10G PON in 2019 with ARRIS (now part of CommScope; see “Virgin Media trials 10G-EPON with ARRIS”). “Our next-generation network already offers gigabit connectivity to more than 7 million homes, but with data use and demand for hyperfast speeds surging, we’re continually investing in our network to prepare for whatever the future brings,” commented Jeanie York, chief technology and information officer at Virgin Media. “Innovations like XR optics ensure our customers continue to benefit from the UK’s fastest widely available speeds, pave the way for future network upgrades, and help support the rollout of multi-gigabit broadband and mobile services.”

“The trial with Virgin Media provides a solid proof point that Infinera’s XR optics technology can be seamlessly applied to existing networks,” added Dave Welch, Infinera’s chief innovation officer and co-founder. “This represents a radical shift in the way networks can be built, promising a more flexible and sustainable way to meet the ever increasing need to transmit more data at higher speeds.”



From: SGJ | 3/11/2021 4:51:38 PM | 2 Recommendations | of 4343
EllaLink Completes Marine Installation and Turns to Infinera for Network Lighting

finance.yahoo.com

Using ICE6 800G to light it. Lit the fuse today, up 7.34%. What investors have been waiting for.



To: FUBHO who wrote (4311) | From: FUBHO | 3/30/2021 7:13:49 AM | of 4343
Trends and Technology: 400G Pluggable Modules for DCI to Long Haul Applications

ir.neophotonics.com



From: FUBHO | 4/3/2021 6:26:13 PM | of 4343


An App for Network Navigation – Welcome to PCE

infinera.com





In times when customer satisfaction and business agility are critical to network operators, the ability to fulfill service requests fast, keep to service-level agreements, and accelerate revenue by quickly delivering new and more services is key.

But service fulfillment in an optical transport network – that is, the activation of new digital services that may also involve the planning and provisioning of a new wavelength – is not as simple and painless as we would like. Traditionally, this is a slow, costly, and error-prone process, involving a few iterations between:

  • The operator’s network planning and design team, which will define the service requirements
  • The network infrastructure vendor’s network planning services team, which will use an expert-only offline network planning tool to compute alternative routes
  • The operator’s network operations team, which will deploy the final service in the field
This approach doesn’t quite meet the needs of today’s network operations and dynamic traffic patterns, where adding or changing services effectively and in real time is increasingly relevant.

When looking for service path computation in a large transport network, we want quick response and we want to be able to define the path-finding criteria and ensure that the resulting path meets all service-level agreement parameters.

Additionally, we want to be able to find and provision optimal end-to-end routes across different equipment types and multiple technologies in a simple and seamless manner, a known limitation of most distributed control planes, where routing information across different network layers is not shared.

Hero to the rescue: the path computation element

A path computation element (PCE, as defined in IETF RFC 4655) is the way to address our needs. A PCE is an application that utilizes abstracted network topology and connectivity to compute a constrained path between two endpoints.

This type of context-optimized path determination offers increased flexibility and effectiveness in service routing, as it considers not only user-defined weights and constraints such as latency, modulation format, link utilization, shared risk link groups, etc., but also live network conditions – pretty much like a Google Maps navigation app for the network.

By complementing the PCE application with a provisioning engine that automates configuration of the resources in the network, the outcome is exactly what we are looking for: simple, fast, and reliable service fulfillment.
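As a rough illustration of what such a path computation does, here is a minimal constrained shortest-path sketch in Python. The topology, costs, and latency budget are invented for the example; a production PCE would weigh many more constraints (modulation format, link utilization, shared risk link groups) and live network state.

```python
import heapq

def compute_path(topology, src, dst, max_latency):
    """Constrained shortest path: minimize cost while rejecting any route
    that exceeds a latency budget (one example of an SLA constraint).
    topology: {node: [(neighbor, cost, latency_ms), ...]}"""
    queue = [(0, 0.0, src, [src])]  # (cost, latency, node, path)
    best = {}                       # node -> lowest cost already expanded
    while queue:
        cost, latency, node, path = heapq.heappop(queue)
        if node == dst:
            return path, cost, latency
        if best.get(node, float("inf")) <= cost:
            continue
        best[node] = cost
        for nbr, link_cost, link_latency in topology.get(node, []):
            if nbr in path:
                continue  # no loops
            if latency + link_latency > max_latency:
                continue  # would violate the latency SLA
            heapq.heappush(
                queue, (cost + link_cost, latency + link_latency, nbr, path + [nbr]))
    return None  # no feasible path under the constraint

# Two routes from A to D: cheap but slow via B, pricier but fast via C.
topo = {
    "A": [("B", 1, 6.0), ("C", 3, 2.0)],
    "B": [("D", 1, 6.0)],
    "C": [("D", 3, 2.0)],
}
print(compute_path(topo, "A", "D", max_latency=10.0))  # (['A', 'C', 'D'], 6, 4.0)
```

With a 10 ms budget the cheaper route via B is rejected and the PCE returns the faster route via C; relax the budget and the cheaper route wins.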

But what about multi-domain path computation?

In addition to the benefits above, a centralized PCE implementation offers the potential to consolidate multiple domains and network layers into a global network view, resulting in improved scalability and network-wide efficient resource usage.

Multi-domain path computation can be achieved with a hierarchical path computation element architecture. In this architecture, there is one parent PCE and multiple child PCEs, each responsible for a subdomain, as represented in Figure 1. All paths within a subdomain are computed by a child PCE, which has only information pertaining to its specific domain. The parent PCE maintains only high-level information about each subdomain but is fully knowledgeable of the connectivity between them. The parent PCE is able to perform centralized end-to-end path computation by orchestrating the subdomains, and it associates and coordinates the topology information and routing capabilities of the multiple child PCEs.



Figure 1: Hierarchical PCE
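The parent/child orchestration described above can be sketched in a few lines. This is a toy model under strong simplifying assumptions (one border link between domains, precomputed intra-domain paths); the class and method names are illustrative, not any real PCE API.

```python
class ChildPCE:
    """Child PCE: computes paths only within its own subdomain."""
    def __init__(self, intra_paths):
        self.intra_paths = intra_paths  # (src, dst) -> intra-domain path

    def compute(self, src, dst):
        if src == dst:
            return [src]
        return self.intra_paths.get((src, dst))


class ParentPCE:
    """Parent PCE: holds only high-level inter-domain connectivity and
    delegates intra-domain segments to the child owning each subdomain."""
    def __init__(self, children, inter_links):
        self.children = children        # domain name -> ChildPCE
        self.inter_links = inter_links  # (domain_a, border_a, domain_b, border_b)

    def compute(self, src_domain, src, dst_domain, dst):
        if src_domain == dst_domain:
            return self.children[src_domain].compute(src, dst)
        for da, ba, db, bb in self.inter_links:
            if da == src_domain and db == dst_domain:
                seg_a = self.children[da].compute(src, ba)  # child computes its segment
                seg_b = self.children[db].compute(bb, dst)  # child computes its segment
                if seg_a and seg_b:
                    return seg_a + seg_b  # stitched end-to-end path
        return None


# Two subdomains joined by one border link A2 <-> B1:
parent = ParentPCE(
    children={
        "west": ChildPCE({("A1", "A2"): ["A1", "A2"]}),
        "east": ChildPCE({("B1", "B3"): ["B1", "B2", "B3"]}),
    },
    inter_links=[("west", "A2", "east", "B1")],
)
print(parent.compute("west", "A1", "east", "B3"))  # ['A1', 'A2', 'B1', 'B2', 'B3']
```

The parent never sees the interior of either subdomain; it only knows which border nodes connect them, which mirrors the division of knowledge in the hierarchical architecture above.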

A service that reaches its destination

Some requests for new digital services over an optical transport network will run against fully utilized wavelengths, requiring that new wavelengths be lit in the network. However, the transmission of a new wavelength along a chosen path is subject to optical impairments that need to be assessed ahead of provisioning.

That should not be an issue for a powerful PCE. A PCE should be capable of interfacing with an optical performance application that models transmission in the fiber layer and validates the optical feasibility of a path. In some cases, that optical validation may not even need the superior accuracy provided by a detailed optical transmission simulation – a summary of optical performance may be enough. For simplified operation, the PCE may simply store a set of feasible optical paths between each node pair, including the information on which wavelengths are valid for a given path and modulation format, previously checked by an offline planning tool.
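The simplified mode of operation described here, storing pre-validated paths rather than simulating transmission on demand, might look like the sketch below. All node names, wavelengths, and modulation labels are invented for illustration.

```python
# Pre-validated feasible optical paths, as an offline planning tool might
# export them: for each node pair, which wavelengths were checked as valid
# for a given path and modulation format.
FEASIBLE_PATHS = {
    ("A", "Z"): [
        {"path": ["A", "B", "Z"], "modulation": "PM-16QAM",
         "valid_wavelengths": ["1550.12 nm", "1550.92 nm"]},
        {"path": ["A", "C", "D", "Z"], "modulation": "PM-QPSK",
         "valid_wavelengths": ["1550.12 nm"]},
    ],
}

def select_feasible(src, dst, modulation):
    """Skip the live transmission simulation: return the first stored path
    already validated at the requested modulation format."""
    for entry in FEASIBLE_PATHS.get((src, dst), []):
        if entry["modulation"] == modulation and entry["valid_wavelengths"]:
            return entry
    return None

print(select_feasible("A", "Z", "PM-QPSK")["path"])  # ['A', 'C', 'D', 'Z']
```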

It goes without saying that PCE and service provisioning applications can also be used as a basis for more complex automation tasks and network programming. One example is closed-loop automation processes, where events or patterns observed in the network trigger automated actions in the same network, such as service rerouting upon failure or in anticipation of it, increasing network availability. Choosing applications that provide support for open APIs ensures they can be smoothly integrated into any network operator’s software automation environment.
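A closed-loop process of the kind described can be sketched as an event handler wired to a path-computation callable and a provisioning callable. Both are stand-ins here, not a real controller API; the event types and field names are assumptions for the example.

```python
def handle_event(event, compute_path, provision):
    """Closed-loop automation sketch: a network event triggers automated
    recomputation and reprovisioning of the affected service.
    compute_path and provision are injected callables standing in for the
    PCE and the provisioning engine."""
    if event["type"] not in ("link_degraded", "link_down"):
        return None  # not an actionable event
    new_path = compute_path(event["service"], avoid=event["link"])
    if new_path is not None:
        provision(event["service"], new_path)  # reroute upon (or ahead of) failure
    return new_path

# Usage with stand-in callables:
actions = []
fake_pce = lambda svc, avoid: ["A", "C", "Z"] if avoid == ("A", "B") else None
fake_provision = lambda svc, path: actions.append((svc, path))
handle_event({"type": "link_down", "link": ("A", "B"), "service": "svc-1"},
             fake_pce, fake_provision)
print(actions)  # [('svc-1', ['A', 'C', 'Z'])]
```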

The value of an intelligent path computation element and service provisioning software application is crystal clear. Best-in-class PCE and service provisioning applications, such as Infinera’s Transcend path computation element and service provisioning application, offer benefits including:

  • Accelerated new service activation
  • Accelerated time to revenue
  • The elimination of operational errors
  • Reduced operational expense
  • Improved end-user experience
  • Maximized network utilization
And all this by simply enabling superior network navigation!




To: FUBHO who wrote (4315) | From: FUBHO | 4/3/2021 6:29:19 PM | of 4343



From: FUBHO | 4/5/2021 3:00:37 PM | of 4343



From: FUBHO | 4/7/2021 11:12:56 AM | of 4343



Cosmic rays causing 30,000 network malfunctions in Japan each year - The Mainichi


mainichi.jp








This file photo taken on Sep. 29, 2020, shows the building that houses Nippon Telegraph and Telephone Corp. in Tokyo. (Kyodo)
TOKYO (Kyodo) -- Cosmic rays are causing an estimated 30,000 to 40,000 malfunctions in domestic network communication devices in Japan every year, a Japanese telecom giant found recently.

Most so-called "soft errors," or temporary malfunctions, in the network hardware of Nippon Telegraph and Telephone Corp. are automatically corrected via safety devices, but experts said in some cases they may have led to disruptions.

This is the first time the actual scale of soft errors in Japan's domestic information infrastructure has been quantified.

Soft errors occur when the data in an electronic device is corrupted after neutrons, produced when cosmic rays hit oxygen and nitrogen in the earth's atmosphere, collide with the semiconductors within the equipment.

Soft errors have become more frequent as electronic devices with small, high-performance semiconductors have become more common. Temporary malfunctions have sometimes caused computers and phones to freeze, and have been regarded as the cause of some plane accidents abroad.

Masanori Hashimoto, a professor at Osaka University's Graduate School of Information Science and Technology and an expert in soft errors, said the malfunctions have also affected other network communication devices and electrical machinery at factories in and outside Japan.

There is a chance that "greater issues" will arise as society's infrastructure becomes "more reliant on electronic devices" that use such technologies as artificial intelligence and automated driving, Hashimoto said.

He emphasized the need for the government and businesses to further research and implement countermeasures.

However, identifying the cause of soft errors and implementing countermeasures can be difficult because, unlike mechanical failures, they are not reproducible in trials.

NTT therefore measured the frequency of soft errors through an experiment whereby semiconductors are exposed to neutrons, and concluded there are about 100 errors per day in its domestic servers.
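A quick arithmetic check shows the daily and annual figures in the article are consistent:

```python
# NTT's measured rate (~100 soft errors per day in its domestic servers)
# scaled to a year falls inside the 30,000-40,000 annual range cited above.
errors_per_day = 100
errors_per_year = errors_per_day * 365
print(errors_per_year)  # 36500
```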

Although NTT did not reveal whether network communication disruptions have actually occurred, the company said it was "implementing measures against major issues" and "confirming the quality of the safety devices and equipment design through experiments and estimates."




From: FUBHO | 4/8/2021 1:17:28 PM | of 4343
Will the Stars Align for 5G Ultra Low-latency Services?

www.infinera.com/blog/will-the-stars-align-for-5g-ultra-low-latency-services/tag/mobile-and-5g/
3/23/2021

5G network deployments are well underway across the globe, with many network operators now preparing for the more advanced "Phase 2" 5G services such as ultra-reliable low-latency communications (uRLLC) services. Key to enabling these advanced services are radio access network (RAN) capabilities that push demanding features and performance requirements, such as significantly improved synchronization delivery to the cell tower, onto the underlying transport network.

At Infinera we’ve seen a distinct shift over recent months in network operators’ focus on synchronization distribution strategies and underlying network synchronization performance. In the first in a series of blogs covering this important topic, we’ll look at how the migration to 5G is changing network operators’ usage of global navigation satellite system (GNSS) within these networks.

The delivery of synchronization information in mobile networks is achievable through several different mechanisms and strategies. The uptake of these various options has varied across the geographic regions of the globe for technical and geopolitical reasons. The main synchronization delivery options are:

  • Synchronization/timing signals from a GNSS, such as the U.S.’s Global Positioning System (GPS), Europe’s Galileo, Russia’s Global’naya Navigatsionnaya Sputnikovaya Sistema (GLONASS), or China’s BeiDou Navigation Satellite System, directly to every location requiring synchronization in the network
  • Synchronization/timing signals delivered from key centralized GNSS-enabled locations in the network through the backhaul/transport network to all other locations requiring synchronization
  • Synchronization/timing signals delivered through a totally separate synchronization delivery network
Each approach has its own strengths and weaknesses, and operators across the globe have built synchronization strategies to best suit their own environments. For example, historically GNSS using GPS to every location has been the primary mechanism in North America, whereas Europe predominantly uses synchronization through the backhaul network with GNSS limited to key timing locations.

However, in recent years there has been an increase in the incidence of both deliberate and inadvertent hacking and jamming of GNSS as the use of cheap illegal GNSS jammers has increased and as some countries have even tested GNSS jamming and/or spoofing as part of military strategies. Due to the importance of network synchronization, these factors are leading some countries to introduce legislation to force protection and reliability into synchronization networks. It is possible to protect GNSS receivers from some of this jamming, but this greatly increases the cost per node.

Another consideration that mobile network operators must take into account as they move to 5G is the proliferation of cell sites, especially those in locations that are tough to reach from a GNSS perspective. 5G in dense urban environments will require millimeter-wave small cells that provide high-bandwidth connectivity over a shorter range, and operators are planning deployments of these in tough-to-reach locations such as deep inside shopping malls, cells per floor in high-rise office buildings, etc.

It should be stressed that while GNSS networks do occasionally suffer from interference and downtime caused by natural effects or deliberate jamming/spoofing, they are still highly reliable and form a key component of most synchronization networks. There are solutions to protect GNSS and deliver GNSS signals into tough locations, but overall, these factors are causing more and more operators that were previously GNSS-focused to plan to utilize network-based synchronization as a backup to GNSS at every node. In some cases, these operators plan to migrate fully to network-based synchronization, with GNSS limited to key centralized locations in the network that use these protection and resiliency methods to harden GNSS against attacks.

Network-based synchronization can take the form of either synchronization delivery through the transport network or through a totally separate dedicated synchronization delivery network. Both approaches provide the operator with the right level of synchronization performance, and backhaul network-based synchronization offers the opportunity for significantly better overall network economics as it avoids a complete overlay network for synchronization. Wherever possible, mobile network operators typically utilize backhaul-based synchronization delivery, but it should be noted that this is not always possible, and therefore, synchronization overlay networks cannot be discounted from the discussion.

Overall, there will always be a mix of strategies deployed across the globe, but the trend is moving more and more toward network-based synchronization delivery, and due to better economics, transporting this over the backhaul network is nearly always the primary option. Those network operators that have always deployed synchronization distribution through the transport network, and those now migrating to this strategy, need to now consider how their optical transport network can best support these challenging requirements economically.

For those readers who want to dive into this topic in more detail, our new Synchronization Distribution in 5G Transport Networks e-book provides a detailed overview of synchronization distribution challenges and standardization, along with an end-to-end synchronization distribution strategy that meets the demanding requirements that 5G is driving into optical networks. I'm also presenting at this year's Workshop on Synchronization and Timing Systems (WSTS) virtual event on March 30, where I'll be outlining how we can provide 5G-quality synchronization with an optical timing channel enabled in real-world networks. I hope those interested in 5G synchronization distribution can join me at this event. You can register here.



From: FUBHO | 4/9/2021 11:12:08 AM | of 4343



To: FUBHO who wrote (4320) | From: FUBHO | 4/9/2021 11:13:03 AM | of 4343
 
The Top 5 CapEx Considerations When Choosing Coherent Transceivers

www.infinera.com/blog/the-top-5-capex-considerations-when-choosing-coherent-transceivers/tag/optical
4/8/2021

Digital ASIC/DSPs based on 7-nm CMOS technology and advanced photonic integration based on indium phosphide or silicon photonics are enabling a wide range of new coherent transceiver types, including 100ZR and 400ZR pluggables, OpenZR+ 400G pluggables, Open ROADM 400G ZR+ pluggables, XR optics, and embedded 800G. Selecting the optimal engine for a given application requires the careful consideration of a wide range of factors, as shown in Figure 1.

Figure 1: Coherent transceiver selection considerations

In this blog, the first in a multi-part series, I will examine the top five considerations related to CapEx.

Transceiver Cost/Cost Per Bit

A key consideration is the transceiver cost per bit for a given reach requirement, which is primarily a function of the transceiver’s wavelength capacity-reach: the maximum data rate that the transceiver can achieve for a given path through the optical network. If less than the full capacity is required, then the cost per bit needs to consider the required capacity rather than the maximum capacity. If the full capacity will only be required in five years, then a solution that enables CapEx to be more closely correlated with the actual required capacity, for example XR optics with 25 Gb/s increments, has an advantage.
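The point about correlating CapEx with required capacity can be made with a back-of-the-envelope calculation. All prices below are invented for illustration; only the structure of the comparison matters.

```python
def cost_per_gbps(unit_cost, required_gbps):
    """Cost per Gb/s of the capacity actually needed, not the maximum."""
    return unit_cost / required_gbps

# Hypothetical prices: a full 400G transceiver at $4,000 vs. buying four
# 25 Gb/s increments at $250 each (the pay-as-you-grow model described
# above), when only 100 Gb/s is needed on day one.
day_one_need_gbps = 100
full_400g = cost_per_gbps(4000, day_one_need_gbps)       # capacity paid up front
incremental = cost_per_gbps(4 * 250, day_one_need_gbps)  # pay only for 4 x 25G
print(full_400g, incremental)  # 40.0 10.0
```

The gap narrows as demand grows toward the full 400 Gb/s, which is why the advantage applies specifically when full capacity is only needed years out.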

Cost per bit is also a function of the unit cost of the optical transceiver. Factors that influence this include the cost of the individual components, packaging, and manufacturing. These costs will in turn be impacted by volumes, and the cost to the network operator will also be heavily influenced by competition, both direct (i.e., the same type of transceiver) and indirect (i.e., different types of transceiver), with more suppliers driving more competition, which typically reduces unit prices.

Cost of Xponders, Xponder Shelves, and Grey Interconnect Pluggables

In addition to the cost of the transceiver, the cost of any Xponders (transponders, muxponders, switchponders, etc.), the shelves that house them, and the grey interconnect pluggables on the client side of the Xponder and in the router should also be considered. Plugging the coherent transceiver directly into the router can eliminate the Xponder, Xponder shelf, and grey interconnect pluggable costs. If high-capacity wavelengths require statistical multiplexing or switching for efficient utilization, then this additional cost also needs to be considered.

Optical Line System CapEx

In brownfield scenarios, compatibility with the existing optical line system needs to be considered. As the spectral width of the wavelength is primarily a function of its baud rate, higher-baud-rate wavelengths may be incompatible with the existing DWDM grid. Even 100 GHz grid systems based on older filter and wavelength selective switching (WSS) technology will have a passband (~50 GHz) too narrow to support a 400 Gb/s wavelength with a baud rate of 60+ Gbaud. Other considerations include transmit power compatibility and out-of-band noise if colorless add/drop is a requirement, with smaller QSFP-DD-based pluggables typically facing some challenges in this regard as they lack the space for a micro-EDFA or a tuneable optical filter, which help with transmit power and out-of-band noise, respectively. Another brownfield consideration is whether the new wavelengths will interfere with the existing wavelengths, thus requiring guard bands or reducing the performance of existing wavelengths.
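A rough way to see why the old passband is too narrow: the spectrum a coherent signal occupies grows with its baud rate and roll-off. The 15% roll-off used below is an illustrative figure, not a specification.

```python
def occupied_ghz(baud_gbd, roll_off):
    """Approximate occupied spectrum of a coherent wavelength:
    baud rate times (1 + roll-off)."""
    return baud_gbd * (1 + roll_off)

LEGACY_PASSBAND_GHZ = 50  # ~50 GHz passband of an older 100 GHz grid system
signal = occupied_ghz(60, 0.15)  # 60 Gbaud, 15% roll-off -> ~69 GHz
print(signal > LEGACY_PASSBAND_GHZ)  # True: the 400G signal does not fit
```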

For greenfield scenarios, which will be the case if you do not already have flexible grid (or wide-passband fixed grid) optical line systems and wish to leverage 400G+ coherent technologies, a particular coherent transceiver may enable a more cost-effective optical line system, for example, a filterless broadcast one based on splitter/combiners or one with a reduced need for amplification. Higher-capacity wavelengths can also reduce the number of ROADM add/drop ports, thus reducing line system CapEx. Conversely, any extra line system costs incurred by a specific optical transceiver also need to be considered – for example, if a more expensive optical line system is required to compensate for any deficiencies in the coherent transceiver such as low transmit power or high out-of-band noise.

Fiber Costs: Spectral Efficiency and Fiber Capacity

The cost of the fiber itself is an important consideration, especially for long-haul, submarine, and fiber-constrained metros where the cost of acquiring and lighting new fibers is high. In these scenarios, spectral efficiency and fiber capacity can become key transceiver considerations. Spectral efficiency is largely a function of how many bits per symbol the modulation can deliver for a given reach requirement. A secondary consideration is how tightly you can pack the wavelengths together, which in turn is related to the shape of the wavelength (i.e., the percentage roll-off).



Figure 2: A wavelength with tight roll-off uses less spectrum

For example, a 400 Gb/s wavelength (~60 Gbaud, PM-16QAM) with no Nyquist shaping (i.e., 400ZR) uses more spectrum than an equivalent wavelength (~60 Gbaud, PM-16QAM) that uses Nyquist shaping and has a tight roll-off, as shown in Figure 2. With no Nyquist shaping and a relatively large roll-off, anyone deploying 400ZR has to choose between a 100 GHz grid with better performance but lower fiber capacity or a 75 GHz grid with higher fiber capacity but reduced reach due to inter-channel interference (ICI). Even Open ROADM CFP2s at 63.1 Gbaud typically require 87.5 GHz or more per channel in a mesh ROADM network. Another factor is the degree of correlation between the drift of adjacent wavelengths: shared wavelocker technology, which lets multiple wavelengths drift in unison, enables better spectral efficiency.
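The grid trade-off can be quantified with simple arithmetic, assuming ~4.8 THz of usable C-band spectrum (an illustrative figure, not a vendor specification):

```python
# Channel count and total fiber capacity for 400 Gb/s wavelengths on the
# two grid choices discussed above, for ~4.8 THz of usable C-band.
USABLE_GHZ = 4800

def fiber_capacity_tbps(grid_ghz, gbps_per_channel=400):
    channels = USABLE_GHZ // grid_ghz
    return channels, channels * gbps_per_channel / 1000  # (count, Tb/s)

print(fiber_capacity_tbps(100))  # (48, 19.2)
print(fiber_capacity_tbps(75))   # (64, 25.6)
```

The 75 GHz grid yields about a third more fiber capacity, which is exactly the capacity-versus-reach trade-off described above.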

Fiber capacity also needs to consider the amount of spectrum that can be used on the fiber – for example whether a particular transceiver type can support an extended C-band or the L-band (i.e., C+L). Embedded optical engines are more likely to support the L-band, though L-band coherent pluggables are also possible. A related consideration is the amount of wasted spectrum due to wavelength blocking. This can be an issue when mixing wavelengths with different baud rates on the same optical line system, which complicates channel plans, especially in mesh ROADM networks.

Router CapEx

Router CapEx also needs to be considered. If the optical transceiver technology forces the purchase of new routers with the required port form factor, power, and thermals to support it, that can increase router CapEx. If the transceiver form factor (i.e., CFP2) or data rate (200 Gb/s for extended reach instead of 400 Gb/s) reduces router or line card efficiency in terms of faceplate density and/or throughput, that may also increase router CapEx.

On the other hand, a pluggable form factor and power/thermal envelope that is compatible with existing routers can avoid router upgrade costs. Router CapEx may also be reduced if the coherent transceiver enables more cost-effective router form factors (i.e., high-density QSFP-DD only) or the elimination of intermediate switch/router aggregation layers. Another factor to consider is load balancing efficiency, due to well-known hashing algorithm limitations in load-balancing mechanisms such as link aggregation (LAG) and equal cost multi-path (ECMP); a smaller number of high-capacity wavelengths will typically be more efficient than a large number of lower-speed wavelengths.

So, to summarize, if you want to minimize CapEx, you should consider the costs of the transceivers themselves but also any additional costs or savings related to the Xponders, grey interconnect pluggables, optical line system, fiber, and routers. In the next blog in this series, I will move to the key OpEx considerations for next-generation coherent transceiver selection.
