
Technology Stocks: ASML Holding NV


To: BeenRetired who wrote (17017), 4/8/2021 8:38:28 AM
From: BeenRetired
   of 37688
 
EUV pod guy Gudeng 2Xing output. 'Nuff said...

Gudeng, which commands the majority of the global supply of EUV pods thanks to orders from TSMC, is reportedly expanding its factory site in northern Taiwan, with construction of the new facilities slated for completion at the end of 2022. The expansion will boost overall output at the site by over 100%, the sources suggested.



To: BeenRetired who wrote (17017), 4/8/2021 8:40:34 AM
From: BeenRetired
   of 37688
 
"all poised to generate impressive sales over the next three years"...

Foxsemicon Integrated Technology, Gongin Precision Industrial (GPI), Grand Process Technology, Gudeng Precision Industrial, Marketech International and United Integrated Services (UIS), as well as backend equipment specialists such as All Ring Tech and Scientech, are all poised to generate impressive sales over the next three years, particularly those in TSMC's supply chain, the sources said.



From: BeenRetired, 4/8/2021 11:04:47 AM
   of 37688
 
EUV @SPIE '21: "a lot of progress in one year!"...

SPIE Advanced Lithography Symposium 2021 – day 5
March 2, 2021, by Chris

One advantage of the all-online format of this year’s symposium is that the conference can be stretched from the normal four days to five without significant cost impact. This means that several “live” events were spread out through Friday, including several very good keynote talks and a second tutorial talk. Jara Garcia Santaclara of ASML spoke on resist development for high-NA EUV lithography. (Jara has what I think is the world’s best job title: EUV Resist & Processing Architect. I love it!) One of the biggest concerns for high-NA EUV imaging is the need for a much thinner resist (20 nm, maybe less), with numerous consequences stemming from that fact. Metal-containing resists are the leading candidates here, since their higher absorption enables thinner resist films. This nice overview talk led well into the second Patterning Materials keynote by Rich Wise of Lam Research. A year ago, Rich introduced a new resist offering by Lam based on a dry-deposited, dry-developed metal-based material that they developed. The early results a year ago looked promising, and the updated results this year look really good. They have made a lot of progress in one year! Could it be that Lam will beat the industry track record of requiring at least one decade to introduce a new resist platform? It looks like Inpria has some competition.

Regina Freed of AMAT gave a nice keynote on etching. I especially liked learning about some of the unique challenges of DRAM manufacturing. The day ended with a very well-done tutorial talk about lithography’s endgame by Ralph Dammel. After a resist-focused history of wavelength transitions (Ralph is a consummate resist chemist, after all), he suggests (perfectly correctly, in my opinion) that 13.5 nm will be our last wavelength. This means that the end of lithography-based scaling is near, and non-scaling-based innovations in chip making (in particular, vertical scaling) will enable a continuation of Moore’s Law in a new way. I couldn’t agree more, though I would add that alternate chip architectures, new materials enabling new types of chip components, and innovations in chip design will probably keep Moore’s Law going for quite a while as well.

All-in-all, this digital forum for Advanced Lithography went better than I expected. Still, I’m looking forward to next year’s in-person version, perhaps with some of the best practices of this year’s version blended in. We shall see.



From: BeenRetired, 4/8/2021 11:14:40 AM
1 Recommendation   of 37688
 
DUV: The overlooked wunderkind


ASML continues to make investments in its DUV systems to further improve productivity and matching. The new NXT:2050 now delivers 295 wph and the NXT:1470 delivers 300 wph. The matched overlay between an NXT:2050 and NXE:3400 is now at 1.2 nm – nearly as good as each individual system.


Not enough EUV for Intel 7nm? | SemiWiki



From: BeenRetired, 4/8/2021 11:32:58 AM
   of 37688
 
Webinar: Annapurna Labs and Altair Team up for Rapid Chip Design in the Cloud
by Mike Gianfagna on 04-07-2021 at 6:00 am
Categories: Altair, EDA, Events

This is a story of strategic recursion. Yes, a fancy term I just made up. If you’re not into algorithm development you can Google recursion, but the simple explanation is we’re talking about using the cloud to design the cloud. The story begins with Annapurna Labs, a fabless chip company focused on bringing innovation to cloud infrastructure, now part of Amazon. To more effectively utilize the vast resources of Amazon Web Services (AWS) to build their advanced designs, Annapurna Labs turned to Altair. Altair’s solutions made a substantial impact on these projects, and the details of this successful collaboration are the subject of an upcoming webinar. Read on to learn how Annapurna Labs and Altair team up for rapid chip design in the cloud.

First, a little about the presenters. David Pellerin, head of worldwide business development for semiconductor at AWS, presents the chip design side of the story. Dave has a long history in EDA, embedded software, chip design and cloud enablement. He is also an author, with several books on FPGA usage and design. Dave has the perfect background to tell the chip design side of this story.

Presenting for Altair is Andrea Casotto, chief scientist, enterprise computing core development there. I’ve known Andrea for a long time. He’s well known to a lot of folks in Silicon Valley. Andrea led Runtime Design Automation for 22 years before the company was acquired by Altair almost four years ago. Before that he was a researcher at Siemens. Andrea holds a Ph.D. in electrical engineering from UC Berkeley. He has forgotten more about chip design methodology than most people know. He is the perfect person to tell the cloud enablement story. I wrote about a cloud enablement presentation from Andrea here.

Now to the story told during the webinar. There are two key items covered in this event:

- An explanation of Altair Accelerator™ Rapid Scaling technology and how it delivers on the promise of efficient chip design on AWS
- A demonstration of how Rapid Scaling works in the Annapurna Labs chip design workflow, and a discussion of the business merits of this approach

The Annapurna Labs design team was managing workloads on a number of dedicated Amazon Elastic Compute Cloud (EC2) instances and could occasionally scale up by manually adding new On-Demand instances. However, the process was not automated and led to high touch, forgotten unused compute resources, and either under-scaling or excessive scaling. When you’re dealing with essentially infinite compute resources, inefficiency can get out of hand quickly. The team at Annapurna Labs is designing some very sophisticated technology, including AWS Nitro, Inferentia custom machine learning chips, and AWS Graviton2 processors based on the 64-bit Arm Neoverse architecture and purpose-built for cloud servers. With this kind of complexity, inefficiency can get very expensive.

By deploying a technology from Altair called Rapid Scaling, the efficiency of the design workflow at Annapurna Labs increased by a spectacular margin. You’ll need to attend the webinar to get the exact statistics and how the solution was implemented. A key part of the strategy is something called a license-first approach. The webinar shares details about how Altair’s technology was deployed and what the impact was on the Annapurna Labs design workflow. You’ll be impressed with the results.
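
For readers curious what a "license-first" scaling decision might look like in practice, here is a minimal, hypothetical Python sketch. It is not Altair Accelerator code; the function names, thresholds, and job-packing numbers are invented for illustration, based only on the idea described above of checking available EDA licenses before requesting more cloud capacity.

    # Hypothetical sketch of a "license-first" autoscaling decision.
    # Not Altair Accelerator code; names and numbers are invented for illustration.

    def licenses_free(total_licenses: int, licenses_in_use: int) -> int:
        """EDA licenses available to new jobs right now."""
        return max(total_licenses - licenses_in_use, 0)

    def instances_to_request(pending_jobs: int,
                             free_licenses: int,
                             jobs_per_instance: int = 4) -> int:
        """Ask the cloud only for as many instances as licensed jobs can fill.

        This is the core of a license-first policy: compute capacity beyond the
        license pool would sit idle, so it is never requested.
        """
        runnable_jobs = min(pending_jobs, free_licenses)
        # Round up: a partially filled instance is still one instance.
        return -(-runnable_jobs // jobs_per_instance)

    if __name__ == "__main__":
        # Example: 120 queued jobs, 64 licenses with 24 in use, 4 jobs per instance.
        print(instances_to_request(120, licenses_free(64, 24)))
        # -> 10 instances, not the 30 a job-count-only policy would launch.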

The webinar will take place in two time zones, 11:00am CET and 2:00pm EST on April 28. You can choose your preferred time zone and register for the event here. If you’re considering a move to the cloud and are concerned about how to manage costs, I strongly recommend you attend this webinar to see how Annapurna Labs and Altair team up for rapid chip design in the cloud.

P.S.
How much EUV chip demand comes from Cloud & Silicon Brain guys.
Speaking technically?
A whole bunch.
Where are the shill/"expert" WAGs?

ASML



From: BeenRetired, 4/8/2021 12:05:56 PM
1 Recommendation   of 37688
 
Atmosic Technologies’ New Reference Designs Bring the Power of Photovoltaic-Harvested Energy to IoT Manufacturers
SHANNON DAVIS

Atmosic™ Technologies today announced it has released its new ATM3 series of IoT reference designs, expressly developed to optimize power savings with photovoltaic energy harvesting to provide manufacturers with flexible, compact and cost-efficient design possibilities for Bluetooth connected devices. The reference designs integrate Atmosic’s award-winning M3 Bluetooth 5 system-on-chip (SoC) with energy harvesting technology.

Atmosic’s Lowest Power Radio and On-demand Wakeup technologies deliver up to ten times the power efficiency of competitive solutions. Atmosic has further enhanced power efficiency in the reference designs with integrated photovoltaic energy harvesting that significantly extends the battery life of IoT and consumer devices, enabling batteries to last the entire lifetime of the device (“forever battery” life) or allowing devices to operate without batteries at all (“battery free” devices). Atmosic is providing photovoltaic energy harvesting designs for consumer applications as well as industrial designs.

The three reference designs – one for remote controls, another for keyboards and the third for beacons/sensors – are being released in Q2 of 2021, during which time Atmosic is offering demonstration units, evaluation kits and “how to” design collateral to qualified manufacturers. To get more information about the promoted designs and supporting information and tools, please contact info@atmosic.com.

“These reference designs will make it easy for IoT and consumer product designers and manufacturers to create remote controls, keyboards and beacons/sensors that have ‘forever battery’ life or are completely battery free, thanks to the power and design efficiencies from Atmosic’s lowest-power BLE and photovoltaic energy harvesting technology,” said Srinivas Pattamatta, VP of Marketing & Business Development, Atmosic. “These are yet another set of proof points that showcase how focusing on low power and energy harvesting in every aspect of design will dramatically reduce, and in many cases eliminate altogether, the IoT’s dependence on batteries.”

With Atmosic’s photovoltaic technology maximizing the solutions’ power-capture qualities, each of the reference designs requires only a very compact photovoltaic cell (the miniature equivalent of a solar panel) that fits within the end-product design to capture ambient sunlight or indoor light, which is then stored to be used as needed. The designs offer a variety of energy storage options, each developed with the goal of the end product requiring as little power as possible to operate, thus driving operational efficiencies in end-use deployments.

The benefits of “forever battery” life are lower operational hassles and costs with the operator or end user rarely or never having to expend resources to replace batteries. In industrial implementations of beacons, for example, hundreds and even thousands of beacons may be deployed in a manufacturing plant, shopping center or entertainment venue so the costs of battery replacement can be expensive – taking into consideration both the cost of new batteries and labor – in addition to being time consuming. For personal applications such as remotes and keyboards, the reliable operation of the device is a huge benefit – the user does not have to bother with the inconvenience of replacing batteries. And the benefits extend beyond immediate user advantages to the more pressing environmental need to dramatically reduce battery dependence among the increasing number of deployed IoT devices worldwide.

Each of the new reference designs Atmosic is releasing this quarter offers a unique set of design features optimized for the product’s specific use case and design requirements. The ATM3 reference design series features Atmosic’s ultra-low power BLE with a power management unit (PMU) integrated directly onto the BLE chip to achieve space and cost efficiencies. The intelligent PMU features a direct connection to the photovoltaic (PV) cell to maximize harvesting efficiency, and delivers the energy required for BLE operation in real time while also storing excess energy not needed for immediate use.
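
To make the "forever battery" idea concrete, here is a rough back-of-the-envelope energy budget in Python. Every figure in it (cell area, indoor light level, conversion efficiency, per-advertisement energy, beacon rate) is a generic illustrative assumption, not an Atmosic specification.

    # Back-of-the-envelope energy budget for a light-harvesting BLE beacon.
    # Every figure below is a generic assumption for illustration, not an Atmosic spec.

    PV_AREA_CM2       = 4.0      # small photovoltaic cell, roughly 2 cm x 2 cm
    INDOOR_IRRADIANCE = 100e-6   # watts per cm^2 under typical office lighting (assumed)
    PV_EFFICIENCY     = 0.10     # 10% conversion efficiency (assumed)
    HOURS_OF_LIGHT    = 10       # lit hours per day (assumed)

    ADV_ENERGY_J   = 30e-6       # energy per BLE advertisement, assumed 30 microjoules
    ADV_PER_SECOND = 1.0         # one advertisement per second

    harvested_per_day = (PV_AREA_CM2 * INDOOR_IRRADIANCE * PV_EFFICIENCY
                         * HOURS_OF_LIGHT * 3600)
    consumed_per_day = ADV_ENERGY_J * ADV_PER_SECOND * 24 * 3600

    print(f"harvested {harvested_per_day:.2f} J/day vs consumed {consumed_per_day:.2f} J/day")
    # With these assumptions: ~1.44 J/day harvested against ~2.59 J/day consumed, so the
    # design would need a larger cell, brighter light, a slower beacon rate, or a small
    # storage buffer -- exactly the trade-offs a reference design exists to settle.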

P.S.
Bit intense NBT Energy Harvesting about to start.
3nm and below ideal.
EUV oh soooo enabling.

ASML



From: BeenRetired, 4/8/2021 12:13:26 PM
1 Recommendation   of 37688
 
Insight LiDAR Develops Industry’s First “Gesture Detection” Sensing Technology for Autonomous Vehicle LiDAR Systems
SHANNON DAVIS

Autonomous Vehicle (AV) technology company Insight LiDAR ( www.insightlidar.com) today announced that its product, Insight 1600, is the first LiDAR with the combined high resolution and velocity detection to enable pedestrian gesture recognition. The new capability — demonstrated here — can be used by AV perception teams to quickly and accurately predict the actions of pedestrians.

Vice President of Business Development Greg Smolka explains, “When humans drive, we’re constantly scanning the environment around us. We’re watching for cars moving into our lane and looking at nearby pedestrians to see what they might do. For example, if a pedestrian looks both ways at an intersection, drivers understand that that person intends to cross the street.”

According to Insight LiDAR Vice President Chris Wood, detecting these subtle pedestrian movements that convey intent is an important safety capability that has eluded AV developers until now.

“When we initially designed Insight 1600, we expected its ultra-high resolution and instantaneous low velocity detection with every pixel to be critical in making vehicle decisions, especially regarding other vehicle movement,” Wood said. “However, we’ve been surprised by all the ways perception teams are using this critical information. From separating close objects and more accurately identifying distant ones to now predicting pedestrian movement, we’re seeing how important this data is to safe AV operation.” Wood reports that Insight LiDAR’s FMCW sensors are believed to have the lowest minimum detectable velocity on the market.
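
The "velocity with every pixel" claim rests on standard FMCW physics: the Doppler shift of the returned light encodes radial velocity directly. The short Python sketch below shows only that generic relationship; the wavelength is an assumed typical value, and none of this represents Insight LiDAR's actual implementation.

    # Generic FMCW Doppler relation, not Insight LiDAR's implementation.
    # Radial velocity relates to Doppler shift by v = f_d * wavelength / 2.

    WAVELENGTH_M = 1550e-9  # assumed typical telecom-band FMCW LiDAR wavelength

    def radial_velocity(doppler_shift_hz: float,
                        wavelength_m: float = WAVELENGTH_M) -> float:
        """Radial velocity in m/s implied by the measured Doppler shift."""
        return doppler_shift_hz * wavelength_m / 2.0

    if __name__ == "__main__":
        # A hand moving toward the sensor at ~0.5 m/s shifts the return by ~645 kHz
        # at 1550 nm -- a large, easily measured shift, which is why per-pixel
        # velocity is practical for gesture-scale motion.
        print(f"{radial_velocity(645e3):.2f} m/s")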

P.S.
Still very early Auto days.
L4 & 5 where all the bits are.
Bit intensity ONLY soars from here.

ASML



From: BeenRetired, 4/8/2021 12:20:16 PM
   of 37688
 
[Intel] Habana AI Accelerators Chosen by the San Diego Supercomputer Center to Power Voyager Research Program
SHANNON DAVIS

Habana Labs, a developer of purpose-built deep learning accelerators for a wide range of data centers, today announced that its artificial intelligence (AI) training and inference accelerators have been selected by the San Diego Supercomputer Center (SDSC) at UC San Diego to provide high-performance, high-efficiency AI compute in its Voyager supercomputer. Scheduled to be in service in the fall of 2021, Voyager will be dedicated to advancing AI research across a range of science and engineering domains. The system build-out, as well as ongoing community support and operations, are funded by the National Science Foundation.

The Voyager supercomputer will employ Habana’s unique interconnectivity technology to efficiently scale AI training capacity with 336 Gaudi processors, which are well architected for scaling large supercomputer training systems. Gaudi is the industry’s only AI processor to natively integrate ten 100-Gigabit Ethernet ports of RoCE RDMA v2 on chip, enabling flexibility of scaling and avoidance of throughput bottlenecks that can limit scaling capacity. The Voyager system will also employ 16 Habana Goya processors to power AI inference models.
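
As a quick sanity check on the scaling claim, the raw interconnect capacity implied by the figures above is easy to total up. The arithmetic below uses only numbers quoted in the article (336 Gaudi processors, ten 100-Gigabit Ethernet ports each) and ignores topology and oversubscription, so treat it as an upper bound rather than a system specification.

    # Aggregate RoCE port capacity across Voyager's Gaudi fabric, using only the
    # figures quoted above; ignores topology and oversubscription.
    GAUDI_COUNT     = 336
    PORTS_PER_GAUDI = 10
    PORT_RATE_GBPS  = 100

    total_gbps = GAUDI_COUNT * PORTS_PER_GAUDI * PORT_RATE_GBPS
    print(f"{total_gbps:,} Gb/s (~{total_gbps / 8 / 1000:.0f} TB/s of raw port capacity)")
    # -> 336,000 Gb/s, roughly 42 TB/s of raw port capacity system-wide.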

“With innovative solutions optimized for deep learning operations and AI workloads, Habana accelerators are ideal choices to power Voyager’s forthcoming AI research,” said Amitava Majumdar, head of SDSC’s Data Enabled Scientific Computing division and principal investigator for the Voyager project. “We look forward to partnering with Habana, Intel and Supermicro to bring this uniquely efficient class of compute capabilities to the Voyager program, giving academic researchers access to one of the most capable AI-focused systems available today.”

Habana Gaudi AI training and Goya inference processors are architected to drive performance and efficiency in AI operations. They will provide data scientists and researchers with access to Voyager with flexibility to customize models with programmable Tensor Processor Cores and kernel libraries, and ease implementation with Habana’s SynapseAI® Software platform, supporting popular machine learning frameworks and AI models for applications such as vision, natural language processing and recommendation systems.

Supermicro, a Voyager technology partner and global leader in enterprise computing, storage, networking solutions, and green computing technology, will provide Habana-based AI systems to SDSC for Voyager:

- Supermicro X12 Gaudi AI Training System (SYS-420GH-TNGR) featuring eight Gaudi HL-205 cards paired with dual-socket 3rd Gen Intel® Xeon® Scalable processors
- Supermicro SuperServer 4029GP-T featuring eight Goya HL-100 PCIe cards for AI inference, paired with dual-socket 2nd Gen Intel® Xeon® Scalable processors

“Combining Supermicro’s advanced application-optimized server and storage hardware with Habana’s AI training and inference products is precisely the best solution for SDSU’s multi-year Voyager AI project,” said Ray Pang, Vice President Technology and Business Enablement, Supermicro. “We continue to work closely with leading innovators to deliver solutions for computationally intensive projects worldwide at leading research and HPC environments for science and medical discovery, compute, and leading-edge AI solutions.”

“We are honored that Habana’s AI processors have been selected to power the AI workloads that will run on San Diego Super Computing’s Voyager supercomputer,” said Eitan Medina, Chief Business Officer at Habana. “This implementation of our Gaudi and Goya products showcases how top academic institutions like SDSC can harness efficiency and performance to effectively address the growing demands of AI research workloads.”

The first three years of Voyager’s operation will be the Testbed Phase, during which SDSC will work with select research teams from astronomy, climate sciences, chemistry, particle physics, and other fields to gain AI experience and insights leveraging Voyager’s unique features. Throughout the Testbed Phase, SDSC will share experiences with the AI research computing community and documentation developed during the Testbed Phase to serve as a resource for an expanded user base.

“The level of performance and efficiency that Voyager will require is precisely what Intel architectures are designed for,” said Trish Damkroger, vice president and general manager of Intel’s High Performance Computing group. “Our Xeon Scalable processors coupled with Habana AI accelerators will ensure Voyager’s users have the HPC and AI capabilities they need to power their game-changing research.”



From: BeenRetired, 4/8/2021 1:10:07 PM
   of 37688
 
AWS straps Python support to its automated CodeGuru tool, slashes prices ["up to 90%"] – just don't go over 100,000 lines
Or the cost triples, which is one way to encourage concise programming

Tim Anderson
Wed 7 Apr 2021 // 19:06 UTC

AWS has declared Python support in its automated code review system CodeGuru production ready, as well as reducing the price by "up to 90 per cent."

Our first look at the CodeGuru preview in late 2019 was disappointing. We had trouble getting it to make any recommendations, and the price at $0.75 per 100 lines of code seemed excessive – though any code review system is well worth it if it finds issues that prevent bugs or security problems making their way into production.

Since then, AWS has made a number of improvements, including a preview of Python support (alongside Java) in December last year. "We analyzed large code corpuses and Python documentation to source hard-to-find coding issues and trained our detectors to provide best practice recommendations," said the company.


Python support is now generally available, and AWS said it has extended coverage with over 40 new rules and three new detectors, these referring to the categories of issues CodeGuru can identify.

The new detectors cover code maintainability (which aims to identify code complexity issues, among other things), input validation, and resource leaks. These are in addition to existing detectors, which include correct use of AWS APIs, Java and Python best practices, concurrency issues, leak of sensitive information, common coding errors, and unnecessarily duplicated code.

The company has also had a look at its pricing for CodeGuru, needed because the old model could prove expensive. The mechanism for the code analysis has always been that the developer associates the service with a code repository, and analysis is triggered by code commits.

Supported repositories are the little-used AWS CodeCommit, Atlassian Bitbucket, GitHub, both cloud and self-hosted, and code dumped into Amazon S3.

The new pricing model is "a fixed monthly rate determined by the total lines of code across all of [a customer's] on-boarded repositories," AWS said this week, charged at $10 per month for the first 100,000 lines of code, and $30 for each additional 100k lines of code.

We are not sure why the price escalates rather than reducing as you add more code; maybe it will help to discourage code bloat. There is also a catch: if developers perform more than two "full repository scans" there is a further $10 fee per scan.
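
A quick worked example makes the shift tangible. The rates below are the ones quoted in this article; the usage pattern (a 100,000-line repository reviewed once a month, with the two included full scans) is an illustrative assumption, and the comparison is only a sketch, not AWS's own "up to 90 per cent" arithmetic.

    # Worked pricing comparison using only the rates quoted in the article.
    # The usage pattern (repo size, reviews, scans) is an illustrative assumption.

    def old_monthly_cost(lines: int, reviews_per_month: int) -> float:
        """Old model: $0.75 per 100 lines of code, charged per review."""
        return 0.75 * (lines / 100) * reviews_per_month

    def new_monthly_cost(lines: int, full_scans: int = 2) -> float:
        """New model: $10 for the first 100k lines, $30 per additional 100k block,
        plus $10 per full repository scan beyond the two included."""
        extra_blocks = max(0, -(-(lines - 100_000) // 100_000))  # ceiling of the overage
        extra_scans = max(0, full_scans - 2)
        return 10.0 + 30.0 * extra_blocks + 10.0 * extra_scans

    lines = 100_000
    print(old_monthly_cost(lines, reviews_per_month=1))  # 750.0 under the old model
    print(new_monthly_cost(lines))                       # 10.0 flat under the new model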

Despite these caveats, the service does seem a lot more affordable than before; the company states that it is "a price reduction of up to 90 per cent."

While this sounds impressive, some users of the service apparently still struggle with the issue we encountered with the early preview: that CodeGuru refuses to make any recommendations. This discussion on the AWS developer forum states that "on a codebase of nearly 100,000 lines, only 4 recommendations. All not relevant."

Getting the balance right for this type of automated code scan is challenging. If the developer sees thousands of recommendations, they may just be ignored. Few or none raises the strong suspicion that the service is not working correctly.

This is a crowded market; there are numerous static analysis tools out there, and IDEs like Eclipse, IntelliJ IDEA, and Visual Studio come with built-in tools. These will not pick up AWS SDK best practices, nor do they have the resources of AWS machine learning behind them, but now that the pricing has improved, the key question for CodeGuru is how effective it is at coming up with useful recommendations. ®



From: BeenRetired, 4/8/2021 1:25:22 PM
   of 37688
 
Optane:
Ice Lake scales MemVerge in-memory SW to new heights

By Chris Mellor
April 8, 2021

Updated Memory Machine software from MemVerge runs applications faster with support for Ice Lake CPUs and increased memory capacity.

MemVerge CEO Charles Fan issued this announcement: “Memory Machine v1.2 is designed to allow application vendors and end-users to take full advantage of Intel’s latest Xeon Scalable processor and Optane memory technology.”

MemVerge’s Big Memory software virtualizes DRAM and Optane persistent memory (PMem) tiers into a single resource pool to host applications and their working data set in memory and so avoid making time-sapping storage IO calls to SSDs or disk drives.

[Blocks & Files diagram showing the Memory Machine concept]

Memory Machine 1.2 supports four to 80 Ice Lake cores and up to 6TB of DRAM + Optane PMem 200 storage-class memory per Ice Lake CPU. Optane PMem 200 delivers 32 per cent more bandwidth than PMem 100 drives.

Newly-announced Ice Lake gen 3 Xeon CPUs run faster than gen 2 Xeons, which speeds up in-memory apps. Ice Lake also supports more memory channels – 8 rather than 6 – giving 2TB of DRAM capacity per CPU instead of the prior 1.5TB. Ice Lake’s support for Optane PMem 200 bulks up the overall per-CPU memory capacity to 6TB from gen 2 Xeon’s 4.5TB maximum. This extra capacity means more and larger in-memory applications can run and execute faster.
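
For what it's worth, here is one plausible way the 6TB-per-socket figure breaks down. The module sizes are assumptions for illustration; the article itself gives only the channel count and the capacity totals.

    # One plausible DIMM population reaching the 6TB-per-socket figure above.
    # Module sizes are assumptions; the article gives only channel counts and totals.
    CHANNELS       = 8     # Ice Lake memory channels per socket
    DRAM_DIMM_GB   = 256   # assumed DRAM DIMM size, one per channel
    PMEM_MODULE_GB = 512   # assumed Optane PMem 200 module size, one per channel

    dram_tb = CHANNELS * DRAM_DIMM_GB / 1024    # 2.0 TB DRAM
    pmem_tb = CHANNELS * PMEM_MODULE_GB / 1024  # 4.0 TB persistent memory
    print(f"{dram_tb:.1f} TB DRAM + {pmem_tb:.1f} TB PMem = {dram_tb + pmem_tb:.1f} TB per socket")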

Storage Review testing showed that Memory Machine 1.2 running with dual 40-core Ice Lake CPU, 512GB DRAM, and 2TB of Optane PMem 200 provided 2x read performance and 3x write performance of a dual 26-core gen 2 Cascade Lake Xeon system with 192GB DRAM and 1.5TB of PMem 100 capacity.

MemVerge’s press announcement includes a quote from Mark Wright, technology manager for Chapeau Studios: “Initially, we opened a poly-dense scene in Maya and it took two-and-a-half minutes [from storage]. Then, we opened a scene from a snapshot we’d taken with Memory Machine and it took eight seconds.”

V1.2 Memory Machine adds:

- Centralised Memory Management for configuration, monitoring, and alerts for DRAM and PMem across the data centre
- Redis and Hazelcast Cluster high availability through coordinated in-memory snapshots to enable instant recovery of the entire cluster
- Double the OLTP performance of Microsoft SQL Server on Linux
- Support for the QEMU-KVM hypervisor with dynamic tuning of the DRAM:PMem ratio per VM, and minimised performance degradation caused by noisy neighbours
- Autosaving and in-memory snapshots that allow animation and VFX apps to provide Time Machine capabilities, letting artists share workspaces instantly and recover from crashes in seconds

MemVerge expects the software upgrade will accelerate single-cell genome analytics but has not yet published figures demonstrating this.

The company has joined the CXL consortium, which is developing coherent bus technology to enable remote access to pools of memory. The company has also set up labs at Arrow, Intel, MemVerge, Penguin Computing, and WWT that are equipped for Big Memory demonstrations, proof-of-concept testing, and software integration.
