TOKYO (Kyodo) -- Cosmic rays are causing an estimated 30,000 to 40,000 malfunctions in domestic network communication devices in Japan every year, a Japanese telecom giant found recently.
Most so-called "soft errors," or temporary malfunctions, in the network hardware of Nippon Telegraph and Telephone Corp. are automatically corrected via safety devices, but experts said in some cases they may have led to disruptions.
It is the first time the actual scale of soft errors in Japan's domestic information infrastructure has become evident.
Soft errors occur when the data in an electronic device is corrupted after neutrons, produced when cosmic rays hit oxygen and nitrogen in the earth's atmosphere, collide with the semiconductors within the equipment.
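As the article notes, most soft errors are corrected automatically by safety devices. One common mechanism is an error-correcting code; the sketch below is a minimal, illustrative Hamming(7,4) example in Python (real memory hardware typically uses wider SECDED codes, and this is not NTT's actual implementation):

```python
# Minimal Hamming(7,4) sketch: encode 4 data bits with 3 parity bits,
# flip one bit to simulate a neutron-induced soft error, then locate
# and correct it from the parity syndrome. Illustrative only.

def encode(d):  # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # codeword positions 1..7: p1 p2 d0 p3 d1 d2 d3
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):  # c: 7-bit codeword, possibly with one flipped bit
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]  # recovered data bits

data = [1, 0, 1, 1]
word = encode(data)
word[4] ^= 1             # single-event upset: flip one bit
assert correct(word) == data
```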
Cases of soft errors have increased as electronic devices with small and high-performance semiconductors have become more common. Temporary malfunctions have sometimes led to computers and phones freezing, and have been regarded as the cause of some plane accidents abroad.
Masanori Hashimoto, a professor at Osaka University's Graduate School of Information Science and Technology and an expert in soft errors, said the malfunctions have also affected other network communication devices and electrical machinery at factories in and outside Japan.
There is a chance that "greater issues" will arise as society's infrastructure becomes "more reliant on electronic devices" that use such technologies as artificial intelligence and automated driving, Hashimoto said.
He emphasized the need for the government and businesses to further research and implement countermeasures.
However, identifying the cause of soft errors and implementing countermeasures can be difficult because, unlike mechanical failures, they cannot be reproduced in trials.
NTT therefore measured the frequency of soft errors through an experiment in which semiconductors were exposed to neutrons, and concluded there are about 100 errors per day in its domestic servers.
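Accelerated testing of this kind typically scales the errors observed under an intense neutron beam down by the ratio of beam flux to natural ground-level flux. The sketch below illustrates the arithmetic; all numbers are illustrative assumptions, not NTT's figures:

```python
# Hypothetical accelerated-test scaling: soft errors counted under an
# intense neutron beam are divided by the flux acceleration factor to
# estimate the real-world error rate. All inputs are assumed values,
# except the ambient flux, which is a commonly cited sea-level figure.

beam_flux = 1.0e6        # neutrons/cm^2/s delivered by the test beam (assumed)
ambient_flux = 0.0036    # ~13 n/cm^2/h at sea level for E > 10 MeV (approx.)
errors_observed = 50     # soft errors counted during the beam test (assumed)
beam_seconds = 3600      # one hour of beam exposure (assumed)

acceleration = beam_flux / ambient_flux
rate_per_second = errors_observed / (beam_seconds * acceleration)
errors_per_year = rate_per_second * 3600 * 24 * 365

print(f"acceleration factor: {acceleration:.2e}")
print(f"estimated soft errors/year for this one device: {errors_per_year:.4f}")
```

Summed across the tens of thousands of devices in a nationwide network, even a tiny per-device rate yields the kind of aggregate error counts the article describes.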
Although NTT did not reveal if network communication disruptions have actually occurred, the company said it was "implementing measures against major issues" and "confirming the quality of the safety devices and equipment design through experiments and presumptions."
5G network deployments are well underway across the globe, with many network operators now preparing for more advanced “Phase 2” 5G services such as ultra-reliable low-latency communications (uRLLC). Key to enabling these services are advanced radio access network (RAN) capabilities that push demanding features and performance requirements, such as significantly improved synchronization delivery to the cell tower, onto the underlying transport network.
At Infinera we’ve seen a distinct shift over recent months in network operators’ focus on synchronization distribution strategies and underlying network synchronization performance. In the first in a series of blogs covering this important topic, we’ll look at how the migration to 5G is changing network operators’ usage of global navigation satellite system (GNSS) within these networks.
The delivery of synchronization information in mobile networks is achievable through several different mechanisms and strategies. The uptake of these various options has varied across the geographic regions of the globe due to technical and geopolitical reasons. The main synchronization delivery options are:
Synchronization/timing signals from a GNSS, such as the U.S.’s Global Positioning System (GPS), Europe’s Galileo, Russia’s Global’naya Navigatsionnaya Sputnikovaya Sistema (GLONASS), or China’s BeiDou Navigation Satellite System, directly to every location requiring synchronization in the network
Synchronization/timing signals delivered from key centralized GNSS-enabled locations in the network through the backhaul/transport network to all other locations requiring synchronization
Synchronization/timing signals delivered through a totally separate synchronization delivery network
Each approach has its own strengths and weaknesses, and operators across the globe have built synchronization strategies to best suit their own environments. For example, historically GNSS using GPS to every location has been the primary mechanism in North America, whereas Europe predominantly uses synchronization through the backhaul network with GNSS limited to key timing locations.
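Network-based synchronization delivery through the backhaul typically builds on the IEEE 1588 Precision Time Protocol (PTP), often combined with SyncE. At the core of PTP is a two-way timestamp exchange; the sketch below shows the standard offset/delay computation with made-up example timestamps:

```python
# Two-way time transfer as used by IEEE 1588 PTP (sketch). t1/t4 are
# master-clock timestamps, t2/t3 are slave-clock timestamps. The math
# assumes a symmetric forward/reverse path delay, which is why transport
# networks must tightly control asymmetry for 5G-grade synchronization.

def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way mean path delay
    return offset, delay

# Example: slave clock runs 1.5 us ahead of the master, true one-way
# path delay is 10 us.
t1 = 0.0
t2 = t1 + 10e-6 + 1.5e-6   # Sync message arrives: delay + offset
t3 = t2 + 5e-6             # slave sends Delay_Req 5 us later
t4 = t3 - 1.5e-6 + 10e-6   # Delay_Req arrives at master: -offset + delay

offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
print(offset, delay)  # recovers ~1.5e-06 and ~1e-05
```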
However, in recent years there has been an increase in the incidence of both deliberate and inadvertent hacking and jamming of GNSS as the use of cheap illegal GNSS jammers has increased and as some countries have even tested GNSS jamming and/or spoofing as part of military strategies. Due to the importance of network synchronization, these factors are leading some countries to introduce legislation to force protection and reliability into synchronization networks. It is possible to protect GNSS receivers from some of this jamming, but this greatly increases the cost per node.
Another consideration that mobile network operators must take into account as they move to 5G is the proliferation of cell sites, especially those in locations that are tough to reach from a GNSS perspective. 5G in dense urban environments will require millimeter-wave small cells that provide high-bandwidth connectivity over a shorter range, and operators are planning deployments of these in tough-to-reach locations such as deep inside shopping malls, cells per floor in high-rise office buildings, etc.
It should be stressed that while GNSS networks do occasionally suffer from interference and downtime caused by natural effects or deliberate jamming/spoofing, they are still highly reliable and form a key component of most synchronization networks. There are solutions to protect GNSS and deliver GNSS signals into tough locations, but overall, these factors are causing more and more operators that were previously GNSS-focused to plan to utilize network-based synchronization as a backup to GNSS at every node. In some cases, these operators plan to migrate fully to network-based synchronization, with GNSS limited to key centralized locations in the network that use these protection and resiliency methods to harden GNSS against attacks.
Network-based synchronization can take the form of either synchronization delivery through the transport network or through a totally separate dedicated synchronization delivery network. Both approaches provide the operator with the right level of synchronization performance, and backhaul network-based synchronization offers the opportunity for significantly better overall network economics as it avoids a complete overlay network for synchronization. Wherever possible, mobile network operators typically utilize backhaul-based synchronization delivery, but it should be noted that this is not always possible, and therefore, synchronization overlay networks cannot be discounted from the discussion.
Overall, there will always be a mix of strategies deployed across the globe, but the trend is moving more and more toward network-based synchronization delivery, and due to better economics, transporting this over the backhaul network is nearly always the primary option. Those network operators that have always deployed synchronization distribution through the transport network, and those now migrating to this strategy, need to now consider how their optical transport network can best support these challenging requirements economically.
For those readers who want to dive into this topic in more detail, our new Synchronization Distribution in 5G Transport Networks e-book provides a detailed overview of synchronization distribution challenges and standardization, along with an end-to-end synchronization distribution strategy that meets the demanding requirements that 5G is driving into optical networks. I’m also presenting at this year’s Workshop on Synchronization and Timing Systems (WSTS) virtual event on March 30, where I’ll be outlining how we can provide 5G-quality synchronization with optical timing channel-enabled solutions in real-world networks. I hope those interested in 5G synchronization distribution can join me at this event. You can register here.
Infinera and American Tower’s trial underscored XR optics’ ability to be inserted into existing single-fiber networks like PONs used for wireless backhaul by using current building blocks like PON filters and splitters. Fady Masoud shares how it works: https://t.co/zMiqDS4HaJ
Selecting the optimal coherent transceiver for a given application requires careful consideration of a range of factors. On the Infinera blog, Paul Momtahan looks at five CapEx factors to consider, starting with transceiver cost and cost per bit. Read now: https://t.co/QKyp17Colm
In this blog, the first in a multi-part series, I will examine the top five considerations related to CapEx.
Transceiver Cost/Cost Per Bit
A key consideration is the transceiver cost per bit for a given reach requirement, which is primarily a function of the transceiver’s wavelength capacity-reach: the maximum data rate that the transceiver can achieve for a given path through the optical network. If less than the full capacity is required, then the cost per bit needs to consider the required capacity rather than the maximum capacity. If the full capacity will only be required in five years, then a solution that enables CapEx to be more closely correlated with the actual required capacity, for example XR optics with 25 Gb/s increments, has an advantage.
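As a rough illustration of this trade-off (all prices hypothetical), the cost-per-bit penalty of under-filling a fixed 400G transceiver versus growing capacity in 25 Gb/s increments can be sketched as:

```python
# Illustrative cost-per-bit comparison; all prices are hypothetical.
# Cost per bit should reflect the capacity actually used on the path,
# not just the transceiver's maximum capacity-reach.

def cost_per_gbps(unit_cost, used_gbps):
    return unit_cost / used_gbps

full_rate = 400        # Gb/s this transceiver can deliver on the path
unit_cost = 4000.0     # hypothetical transceiver price

print(cost_per_gbps(unit_cost, full_rate))  # 10.0 per Gb/s when fully filled
print(cost_per_gbps(unit_cost, 100))        # 40.0 per Gb/s at day-one demand

# Pay-as-you-grow alternative: capacity bought in 25 Gb/s increments
# (hypothetical $300 per increment) tracks day-one demand more closely.
increments_needed = -(-100 // 25)           # ceil(100 / 25) = 4
print(increments_needed * 300.0)            # 1200.0 for the same 100 Gb/s
```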
Cost per bit is also a function of the unit cost of the optical transceiver. Factors that influence this include the cost of the individual components, packaging, and manufacturing. These costs will in turn be impacted by volumes, and the cost to the network operator will also be heavily influenced by competition, both direct (i.e., the same type of transceiver) and indirect (i.e., different types of transceiver), with more suppliers driving more competition, which typically reduces unit prices.
Cost of Xponders, Xponder Shelves, and Grey Interconnect Pluggables
In addition to the cost of the transceiver, the cost of any Xponders (transponders, muxponders, switchponders, etc.), the shelves that house them, and the grey interconnect pluggables on the client side of the Xponder and in the router should also be considered. Plugging the coherent transceiver directly into the router can eliminate the Xponder, Xponder shelf, and grey interconnect pluggable costs. If high-capacity wavelengths require statistical multiplexing or switching for efficient utilization, then this additional cost also needs to be considered.
Optical Line System CapEx
In brownfield scenarios, compatibility with the existing optical line system needs to be considered. As the spectral width of the wavelength is primarily a function of its baud rate, higher-baud-rate wavelengths may be incompatible with the existing DWDM grid. Even 100 GHz grid systems based on older filter and wavelength selective switching (WSS) technology will have a passband (~50 GHz) too narrow to support a 400 Gb/s wavelength with a baud rate of 60+ Gbaud. Other considerations include transmit power compatibility and out-of-band noise if colorless add/drop is a requirement, with smaller QSFP-DD-based pluggables typically facing some challenges in this regard as they lack the space for a micro-EDFA or a tuneable optical filter, which help with transmit power and out-of-band noise, respectively. Another brownfield consideration is whether the new wavelengths will interfere with the existing wavelengths, thus requiring guard bands or reducing the performance of existing wavelengths.
For greenfield scenarios, which will be the case if you do not already have flexible grid (or wide-passband fixed grid) optical line systems and wish to leverage 400G+ coherent technologies, a particular coherent transceiver may enable a more cost-effective optical line system, for example, a filterless broadcast one based on splitter/combiners or one with a reduced need for amplification. Higher-capacity wavelengths can also reduce the number of ROADM add/drop ports, thus reducing line system CapEx. Conversely, any extra line system costs incurred by a specific optical transceiver also need to be considered – for example, if a more expensive optical line system is required to compensate for any deficiencies in the coherent transceiver such as low transmit power or high out-of-band noise.
Fiber Costs: Spectral Efficiency and Fiber Capacity
The cost of the fiber itself is an important consideration, especially for long-haul, submarine, and fiber-constrained metros where the cost of acquiring and lighting new fibers is high. In these scenarios, spectral efficiency and fiber capacity can become key transceiver considerations. Spectral efficiency is largely a function of how many bits per symbol the modulation can deliver for a given reach requirement. A secondary consideration is how tightly you can pack the wavelengths together, which in turn is related to the shape of the wavelength (i.e., the percentage roll-off).
Figure 2: A wavelength with tight roll-off uses less spectrum
For example, a 400 Gb/s wavelength (~60 Gbaud, PM-16QAM) with no Nyquist shaping (i.e., 400ZR) uses more spectrum than an equivalent wavelength (~60 Gbaud, PM-16QAM) that uses Nyquist shaping and has a tight roll-off, as shown in Figure 2. With no Nyquist shaping and a relatively large roll-off, anyone deploying 400ZR has to choose between a 100 GHz grid with better performance but lower fiber capacity or a 75 GHz grid with higher fiber capacity but reduced reach due to inter-channel interference (ICI). Even Open ROADM CFP2s at 63.1 Gbaud typically require 87.5 GHz or more per channel in a mesh ROADM network. Another factor is how much correlation there is between the movement/drift of each wavelength; shared wavelocker technology that lets multiple wavelengths drift in unison enables better spectral efficiency.
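As a rough rule of thumb, a wavelength's occupied spectrum is approximately the baud rate times (1 + roll-off). The sketch below uses illustrative roll-off values (actual 400ZR and Nyquist-shaped implementations differ in detail) to show why roll-off determines whether a ~60 Gbaud wavelength fits a 75 GHz grid:

```python
# Occupied spectrum is roughly symbol_rate * (1 + roll_off). The roll-off
# values here are illustrative placeholders, not vendor specifications.

def occupied_ghz(gbaud, roll_off):
    return gbaud * (1.0 + roll_off)

baud = 60.1  # ~60 Gbaud PM-16QAM 400 Gb/s wavelength

for name, ro in [("tight Nyquist shaping", 0.05), ("large roll-off", 0.30)]:
    width = occupied_ghz(baud, ro)
    fits_75 = width <= 75.0
    print(f"{name}: {width:.1f} GHz occupied, fits 75 GHz grid: {fits_75}")
```

With tight shaping the wavelength occupies ~63 GHz and fits a 75 GHz grid; with a large roll-off it spills past 75 GHz, forcing the 100 GHz grid and the lower fiber capacity described above.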
Fiber capacity also needs to consider the amount of spectrum that can be used on the fiber – for example whether a particular transceiver type can support an extended C-band or the L-band (i.e., C+L). Embedded optical engines are more likely to support the L-band, though L-band coherent pluggables are also possible. A related consideration is the amount of wasted spectrum due to wavelength blocking. This can be an issue when mixing wavelengths with different baud rates on the same optical line system, which complicates channel plans, especially in mesh ROADM networks.
Router CapEx also needs to be considered. If the optical transceiver technology forces the purchase of new routers with the required port form factor, power, and thermals to support it, that can increase router CapEx. If the transceiver form factor (i.e., CFP2) or data rate (200 Gb/s for extended reach instead of 400 Gb/s) reduces router or line card efficiency in terms of faceplate density and/or throughput, that may also increase router CapEx.
On the other hand, a pluggable form factor and power/thermal envelope that is compatible with existing routers can avoid router upgrade costs. Router CapEx may also be reduced if the coherent transceiver enables more cost-effective router form factors (i.e., high-density QSFP-DD only) or the elimination of intermediate switch/router aggregation layers. Another factor to consider is load balancing efficiency, due to well-known hashing algorithm limitations in load-balancing mechanisms such as link aggregation (LAG) and equal cost multi-path (ECMP); a smaller number of high-capacity wavelengths will typically be more efficient than a large number of lower-speed wavelengths.
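The load-balancing point can be made concrete with a toy example (flow sizes hypothetical): the same eight flows hashed across two 400G links versus eight 100G links, with identical total capacity:

```python
# LAG/ECMP hashing sketch: eight flows (Gb/s) carried either on 2 x 400G
# links or on 8 x 100G links (same 800G total capacity). Per-flow hashing
# cannot split a large "elephant" flow, so many small links hit a high
# peak utilization sooner. Flow sizes are hypothetical.

flows = [90, 10, 40, 60, 30, 70, 20, 80]  # Gb/s, total = 400

def peak_utilization(flows, n_links, link_gbps):
    load = [0.0] * n_links
    for i, f in enumerate(flows):
        load[i % n_links] += f  # stand-in for a real 5-tuple hash
    return max(load) / link_gbps

print(peak_utilization(flows, 2, 400.0))  # 0.55: 220 Gb/s on the busier link
print(peak_utilization(flows, 8, 100.0))  # 0.9: the 90 Gb/s flow nearly fills one link
```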
So, to summarize, if you want to minimize CapEx, you should consider the costs of the transceivers themselves but also any additional costs or savings related to the Xponders, grey interconnect pluggables, optical line system, fiber, and routers. In the next blog in this series, I will move to the key OpEx considerations for next-generation coherent transceiver selection.
Light Reading News Analysis | Ken Wieland, contributing editor | 3/19/2021
Rob Shore, SVP of marketing at US-based Infinera, told Light Reading he expected first commercial deployments of the company's prototype XR Optics tech sometime next year.
He was also hopeful of announcing an XR Optics industry consortium of some description, comprising service providers, technology partners and even standards organizations, "within a matter of months."
With an eye on ZR+ optics, another 400G technology, Shore is keen to highlight XR Optics' "pluggable" credentials.
Infinera's XR optics enables a single transceiver to generate numerous lower-speed subcarriers that can be independently steered to different destinations. (Source: Infinera)
"ZR+ has generated a fair amount of press coverage, but there's really nothing special about it except industry standardization," asserted Shore.
"What we want to do with XR Optics is rather than just release a technology, and hope people take it, is to build an industry coalition."
Some progress has already been made on the coalition front, through partnerships with Lumentum and II-VI, although these were announced over a year ago.
Shore was nonetheless confident that industry momentum was swinging the way of XR Optics. "We've got a whole host of other equipment manufacturers and subcomponent manufacturers on the hook here as well," he said.
Shore was speaking to Light Reading after Infinera announced yet another successful field trial of XR Optics, but this time – somewhat unusually – with a towerco in the shape of American Tower.
Shore asserted that the proof-of-concept, which took place in Colombia, "proved once again that XR Optics' signals can coexist with PON architectures." Infinera has been involved in nearly two dozen XR Optics trials with operators globally, including BT. Only a few days prior to the American Tower PoC, the UK's Virgin Media also put XR Optics through its paces.
"XR optics is the only coherent point-to-multipoint solution being proposed enabling significantly greater capacity [400G and above]," says David Welch, Infinera's founder and chief innovation officer.
"XR optics also enables efficiencies and network simplification beyond access by enabling a single transceiver to aggregate traffic from multiple lower speed transceivers anywhere in the network."
Through a glass darkly
How quickly XR Optics can gain market traction is open to debate. Heavy Reading's Sterling Perrin acknowledges the progress being made by Infinera through its various trials, but still views XR optics as very much a "future-looking technology."
"XR optics is an interesting adaptation of Nyquist subcarriers in coherent transmission that allows the individual subcarriers to be individually routed, but, at increments of nx25G, this is at least a generation ahead of next-gen PON variants," he says.
Julie Kunstler, a principal analyst at research firm Omdia, a Light Reading sister company, pointedly notes that because XR Optics is quite new, the ecosystem is inevitably immature. "It's too early to forecast a cost curve," she said.
Last week I was lucky enough to participate in the season finale of EllaLEAKS, a series of films that have helped to document the whole construction process. It was an amazing event, and I was able to talk about the project with EllaLink’s Chief Marketing and Sales Officer, Vincent Gatineau, afterwards.
Geoff: It’s great to talk to you, Vincent, and thank you for the invitation to get involved in the EllaLEAKS finale event. I have to say, having experienced so many virtual webinars over the past year, I thought it was a really great event, with a fantastic look and feel, and very professionally produced. Congratulations!
Vincent: Thank you, Geoff. It was thanks to a lot of hard work by so many people, and we’re delighted with the number of attendees at the live event. As you mentioned, we also have the whole series of films online for people to check out, and I think it’s really interesting to see the process of laying the cable, building the landing stations, and so on. We look forward to next season, which will kick off with more stories from our partners.
Geoff: Could you summarize EllaLink for us all?
Vincent: Sure. We start with a high-performance submarine cable with four fiber pairs on the trunk that follows a direct route from Portugal to Brazil. But a key aspect is that this is about joining two sets of communities that share common languages and cultures – predominantly Portuguese and Spanish, of course. The cable has been installed with a number of branching units, and will initially connect to Cabo Verde and Madeira. The system has also been designed to accommodate future subsea extensions to the Canary Islands, Morocco, Mauritania, French Guiana, and Southern Brazil. Using our fiber ring, we can connect to Madrid and Lisbon and extend to Marseille, where we hook into the massive Mediterranean cable systems that take us to the Middle East and onward to Asia. People can check out these routes on the interactive Telegeography Submarine Cable Map.
Geoff: Low latency was a big theme of the webinar…why is it so important?
Vincent: It’s about the way people are using networks today. We often think of financial trading needing low latency, but online gaming is growing rapidly across the Latin America region, and gamers these days are playing with other people from around the world. The response times for social media applications are also critical because of the way these applications are funded through online advertising. As our good friend Ivo Ivanov, CEO of DE-CIX, puts it, “latency is the next currency.”
Geoff: And how much of a reduction can you achieve? After all, we’re ultimately limited by the speed of light.
Vincent: Yes, and that’s why our direct routing is critical. To make use of high-capacity cables, data today has to go from Europe to the U.S. and then down to Brazil. The EllaLink route is only half the distance, so that means half the latency – we’re looking at less than a 60-ms round trip delay between Portugal and Brazil. We also offer direct, all-optical connections from data center to data center, glassing through at the landing station and avoiding any added latency from OTN switching. And our ability to do that is really enabled by the high performance we’ve seen from Infinera’s ICE6 technology.
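The sub-60-ms figure is consistent with a back-of-the-envelope calculation. Assuming a route length of roughly 6,000 km (an assumption for illustration, not EllaLink's official specification) and light traveling in fiber at about c/1.468:

```python
# Back-of-the-envelope check of the sub-60-ms round-trip claim.
# The route length is an assumed figure, used only for illustration.

C = 299_792.458        # km/s, speed of light in vacuum
N_FIBER = 1.468        # typical group index of standard single-mode fiber

route_km = 6000        # assumed Portugal-Brazil cable route length
one_way_ms = route_km / (C / N_FIBER) * 1000
round_trip_ms = 2 * one_way_ms

print(f"{round_trip_ms:.1f} ms round trip")  # ~58.8 ms, consistent with <60 ms
```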
Geoff: Yes, I think the performance of ICE6 has been a revelation in so many cases. What are you expecting for the cable system?
Vincent: The conservative estimate is for an end-of-life cable capacity of about 100 Tb/s on the trans-Atlantic section, and that’s 25 Tb/s per fiber pair. We can also support a range of services from 1 GbE up to 400 GbE, and we offer an open cable solution, with what will look like a virtual fiber pair to a customer. They can choose whichever transponders they wish, but the spectrum will be managed and kept stable by the Infinera Intelligent Power Management solution.
Geoff: In fact I wrote a blog about the importance of active power management that people can refer to here. You mentioned glassing through at the landing station. Wouldn’t it be easier to just put the data center where the cable lands?
Vincent: This point very much relates to which came first, the cable or the data center? Is the cable system being built to bring connectivity to an underserved area, in which case it comes before the data center, or are you building a cable system to connect into existing data center infrastructure? So glassing through at the landing station is ideal, but not possible for every system. There is a symbiotic relationship between data centers and cable systems, and in the case of EllaLink, Fortaleza is an area already rich in cable connectivity, and in Sines we deliberately selected a new site to offer diversity from the existing cable systems landing in Portugal. Where you have a busy landing location, there can be a danger of cables crossing in shallow waters and exposing the cable to hazards like anchors or fishing nets. We took a lot of care in Sines to make sure that we only cross other cables in deep ocean locations. Where we needed extra shallow water protection, we used a technique called horizontal directional drilling (HDD) to help protect the cable by burying it deep under the seabed.
Geoff: I saw the video you posted on the HDD technique, and also the construction of the cable landing station at Sines – it’s really an amazing engineering project. And I know you also have a lot of research and educational involvement on EllaLink.
Vincent: We do. One of our major anchor customers is the BELLA Consortium, who provide for the long-term interconnectivity needs of the European and Latin American research and education communities. I’m also excited to say that GÉANT and EMACOM have established the EllaLink Geolab, an initiative that aims to provide the scientific community with real-time, accurate, and relevant data on seabed conditions along the cable route. EllaLink is the first commercial telecoms submarine cable in the world to integrate SMART cable concepts into its design.
Geoff: It’s fascinating stuff, and applications like earthquake monitoring would have the potential to save so many lives, especially as sea levels rise due to climate change. Vincent, thank you for the opportunity to participate in the season finale, and congratulations on an incredible submarine network project!
Vincent: Thank you, Geoff, and I’m looking forward to a long partnership with Infinera.
Vincent Gatineau is the Chief Marketing and Sales Officer for EllaLink. Vincent spent nine years on the Sales & Marketing team of Alcatel Submarine Networks and has been part of the EllaLink project since its first days, among other major system developments. Previously, Vincent held various international positions within the Alcatel-Lucent group in India and Chile. He has an engineering degree from Institut Mines Telecom Lille Douai and is fluent in French, English, and Spanish. Infinera thanks him for his contribution.
Infinera Expands Open Optical Portfolio to Include Latest Generation of 400G Pluggable Optics
SAN JOSE, Calif., May 04, 2021 (GLOBE NEWSWIRE) -- Infinera (NASDAQ: INFN) announced today the availability of metro-optimized 400G pluggable optics-based solutions for the XTM Series and GX Series Compact Modular Platforms, bringing enhanced flexibility and economics to metro networks. The new capabilities complement the company’s 600G/800G embedded optics technology, enabling network operators to cost-effectively scale their networks to meet the relentless growth of bandwidth with optimized optical networking solutions from the edge to the core.
The new XTM Series Enhanced 400G Flexponder module and GX Series CHM1R Open ROADM-compliant dual-400G sled will support a broad range of 400G pluggable optics, including 400G XR/ZR+ optics. In addition to point-to-point applications, the XTM and GX will leverage the point-to-multipoint capabilities of XR optics to substantially simplify networks and drive down costs. The combination of 400G support across both platforms enables network operators to support optimized solutions across both 300 mm- and 600 mm-deep network infrastructure with industry-leading low power and high density.
“The move to 400G in metro/regional optimized DWDM platforms is a major step that we welcome,” said Dave Eddy, Chief Operating Officer at Neos Networks. “Our extensive U.K. network is built on the XTM Series, and 400G capabilities provide Neos Networks with another option for those segments in our network that see the highest demand. We look forward to capitalising on this technology in our network to enable the company to maintain its position of running one of the most advanced optical networks across the U.K.”
“Pluggable optics have always been at the heart of our metro strategy, and over the years we have achieved many industry firsts with the use of pluggable optics in transport platforms,” said Glenn Laxdal, Senior Vice President, Global Product Line Management at Infinera. “Expanding our capabilities to include the latest generation of 400G optics, combined with our industry-leading 600G/800G optics, provides customers with best-in-class solutions to address applications across their networks.”
The ongoing global chip shortage is affecting industries ranging from automobiles to video games. And the telecom industry has not been left out.
"I'm a little skittish," AT&T CEO John Stankey said of the shortages. "I mean, we're seeing dynamics that are occurring in the global supply chain where unexpected things are popping up. And it is possible that we could see certain element shortages that start to crop up as everybody is racing to put stuff up on [cell] towers in May. And that's why I want to be a little bit cautious."
Other executives, though, said they have not yet seen any effects at all.
"We're seeing no supply issues and we're forecasting no supply issues on either network gear or smartphones," said T-Mobile CEO Mike Sievert.
Meanwhile, vendors are fighting to make sure they have the supplies they need to continue to meet demand.
"There were shortages of supply and we obviously chased it, and you just basically put more money on the table and you get the necessary products you need," said Viavi CEO Oleg Khaykin, according to a Seeking Alpha transcript of his comments. "We were able to meet all our customer demands and not miss any of our deliveries."
Apple executives too said they struggled to meet record demand. "We did not have a material supply shortage," said Apple CEO Tim Cook of the company's most recent, blockbuster quarter. Seeking Alpha provided a transcript of his comments. "And so how are we able to do that? You wind up collapsing all of your buffers and offsets. And that happens all the way through the supply chain. And so that enables you to go a bit higher than what we were expecting to sell when we went into the quarter 90 days ago."
However, Apple doesn't believe it can pull the same trick during its current quarter. The company warned it expects to lose between $3 billion and $4 billion due to the shortages in its quarter that ends in June.
Apple isn't alone.
Here are the companies in the telecom industry that reported financial effects from the situation, and exactly what they said about those effects.
Table 1: Telecom and the global chipset shortage
What they said
Expects to lose between $3 billion and $4 billion in revenues in its current quarter.
Expects a "negative impact on the timeliness" of its second quarter deliveries, and a "push of revenues" until it's resolved.
Lost $15 million to $20 million in revenues in the first quarter, and expects to lose $20 million to $25 million in the second quarter.
Reported that bookings grew 19% year-over-year, but that revenue grew just 9% over the same period, and blamed the difference on a buildup in its backlog due to the shortages.
Lost millions of dollars in revenues, but didn't provide a specific figure beyond the range of "low- to mid-single-digits."
Lost a "very significant number" in revenues, but did not provide specifics.
Expects revenue to decrease in the current quarter, but did not provide details.
Source: Company reports, Seeking Alpha
Of course, shortages in components might not ultimately affect a vendor's bottom line. The analysts at Counterpoint Research noted that some companies might decide to pass the hit on down the line.
"Semiconductor shortages have affected the overall supply landscape and increased the lead times of chipset solutions for major vendors. However, we see these vendors looking to diversify their foundry strategy to alleviate chipset shortages in the second half of this year," explained Tarun Pathak, a research director at the firm, in a statement. "These shortages might push specific component prices up by 5-10% and OEMs [original equipment manufacturers] will look to absorb these cost increases by being creative with the bill of materials (BoM) and in some cases might even pass the added costs to the consumers."
“There’s just a big lag between when a technology is developed, when [a fabrication plant] goes into construction, and when chips come out,” Whitehurst told the news outlet. “So frankly, we are looking at a couple of years … before we get enough incremental capacity online to alleviate all aspects of the chip shortage.”
He added that the solution to the shortage involves exploring alternative ways to meet consumer demand, such as extending the life of certain types of computing technologies and accelerating investment in semiconductor fabrication plants.
IBM is not alone in focusing on fabrication plant investments. Intel, for instance, announced plans to invest $20 billion to build two new chip factories in Arizona and, under the leadership of new CEO Pat Gelsinger, is overhauling its manufacturing strategy altogether, while Samsung plans to build a $17 billion chip fab either in its home country or at one of three U.S. locations: Austin, Phoenix, or Western New York.
Further, Taiwan-based TSMC has raised capex guidance to $100 billion over the next three years in an effort to increase its production capacity. On a recent earnings call, TSMC’s CEO C. C. Wei commented that the new fabrication facility won’t be available until 2023.
“And so this year and next year, I still expect the capacity tightening will continue…2023, I hope that we can offer more capacity to support our customers. And at that time, we start to see the supply chains tightening will release a little bit,” Wei said.