
Technology Stocks: ASML Holding NV


From: BeenRetired, 11/14/2016 10:20:09 AM
 
“Synopsys Advances Test and Yield Analysis Solution for 7-nm”..............................................

7nm will be a huge boon for innovation.

3D interposer will be vital. Lagging edge discrete chips doomed.

Bonanza.

Cymer/HMI/Zeiss/ASML.



Synopsys Advances Test and Yield Analysis Solution for 7-nm Process Node

Mon November 14, 2016 9:05 AM|PR Newswire|About: SNPS

PR Newswire

MOUNTAIN VIEW, Calif., Nov. 14, 2016 /PRNewswire/ --

Highlights:

Innovative slack-based cell-aware test for 7-nm designs increases defect coverage

FinFET SRAM defect modeling and test algorithms enable efficient test and repair of 7-nm memories

New diagnostics and yield analysis support for 7-nm reduce turnaround time

Synopsys, Inc. (SNPS) today announced it expanded its test and yield analysis solution targeting FinFET-specific defects to enable higher quality testing, repair, diagnostics and yield analysis of advanced 7-nanometer (nm) SoCs. To improve defect coverage, Synopsys has been collaborating with several semiconductor companies to advance testing and diagnostics methods for logic, memory and high-speed mixed-signal circuits targeted for manufacture with 7-nm processes. These collaborations are enabling rapid deployment of new functionality within Synopsys' synthesis-based test solution, featuring TetraMAX® II ATPG, DesignWare® STAR Memory System®, and DesignWare STAR Hierarchical System.

Leading semiconductor companies ramping up design capabilities for emerging 7-nm processes are facing increasing test quality and yield management challenges. To address these challenges, Synopsys' test solution delivers several innovative technologies that target defects occurring more frequently at emerging process nodes. For logic circuits, new modeling techniques, such as resistance sweeping, improve the ability of slack-based cell-aware tests to detect defects such as intra-cell partial bridges that are more prevalent with advanced FinFET processes. For embedded memory test and repair, the STAR Memory System solution incorporates custom algorithms based on silicon learning at the industry's top silicon foundries to detect and repair defects exemplified by resistive fin shorts, fin opens and gate-fin shorts. Furthermore, the DesignWare STAR Hierarchical System enables high coverage manufacturing and characterization test patterns for the 7-nm DesignWare PHY IP to be efficiently applied through the SoC hierarchy.
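The slack-based detection idea can be sketched in a few lines. This is an illustrative toy, not Synopsys' algorithm, and every number in it is invented: a small-delay defect such as a resistive intra-cell bridge only fails the test when the delay it adds exceeds the slack of the sensitized path, which is why resistance sweeping models the same bridge at several resistance values.

```python
# Toy model of slack-based cell-aware testing -- illustrative only,
# not Synopsys' algorithm; all numbers are invented.

def detected(defect_delay_ns, path_slack_ns):
    """A small-delay defect is caught only if its added delay
    exceeds the timing slack of the path exercising the cell."""
    return defect_delay_ns > path_slack_ns

# One intra-cell partial bridge, modeled at several resistances
# ("resistance sweeping"); each resistance adds a different delay.
bridge_delays_ns = [0.02, 0.08, 0.15, 0.40]
path_slack_ns = 0.10  # slack of the path the pattern sensitizes

covered = sum(detected(d, path_slack_ns) for d in bridge_delays_ns)
coverage = covered / len(bridge_delays_ns)
print(f"defect coverage: {coverage:.0%}")  # 50%: only the two
# larger-delay bridge variants are observable on this slow path
```

Sweeping resistance matters because the low-delay variants of the same physical bridge escape on slack-rich paths and need tighter patterns.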

To accelerate diagnosis of 7-nm yield issues, defects can now be isolated to specific areas within design cells through new support for cell-aware descriptions in the database shared between the TetraMAX II ATPG and Yield Explorer® solutions. Together, these test and diagnostic advances increase 7-nm defect detection and speed up failure analysis and yield ramp in production manufacturing environments.

"The growing complexity and process variation found with advanced 7-nm FinFET processes requires improved test and yield technologies," said John Koeter, vice president of marketing for IP and prototyping at Synopsys. "Our IP design teams are leveraging TetraMAX ATPG as well as STAR Memory System and STAR Hierarchical System test, repair and diagnostic solutions to help multiple customers designing with 7-nm IP improve their product quality and yield, while accelerating their time to market."

"As a leading provider of comprehensive test and yield solutions, Synopsys is committed to helping designers meet their growing challenges of higher quality and faster yield ramp," said Bijan Kiani, vice president of product marketing for the Design Group at Synopsys. "Through our ongoing collaborations with leading semiconductor companies worldwide, we are delivering innovative solutions to address the specific requirements of advanced FinFET processes. These innovations will enable our customers to rapidly adopt 7-nm technologies to meet their goals for high-performance SoC products."



From: BeenRetired, 11/14/2016 10:27:43 AM
 
"Mellanox Drives Virtual Reality To New Levels"............................................

Just the start...of a VR bonanza.

Cymer/HMI/Zeiss/ASML.

Mellanox Drives Virtual Reality To New Levels With Breakthrough Performance
Mon November 14, 2016 8:30 AM|Business Wire|About: MLNX

Demo of Ultra-Low Latency Long Distance Virtual Reality with 100Gb/s EDR InfiniBand to Take Place at SC’16

SALT LAKE CITY--(BUSINESS WIRE)-- Mellanox® Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced that it will showcase a Virtual Reality over 100Gb/s EDR InfiniBand demonstration at the Supercomputing Conference, Nov. 14-17, Salt Lake City, Mellanox booth #2631.

Mellanox and Scalable Graphics will showcase an ultra-low latency solution that presents the ultimate extended virtual reality experience for rapidly growing industry markets including computer aided engineering, oil and gas, manufacturing, medical, gaming and others. By leveraging the high throughput and the low latency of Mellanox 100Gb/s ConnectX®-4 InfiniBand, Scalable Graphics VR-Link Expander provides a near-zero latency streaming solution for bringing an optimal Virtual Reality experience even over long distances.

“As opposed to our Ethernet based solution, which requires H.264 encoding of the video stream to cope with Ethernet bandwidth constraints, our InfiniBand based VR-Link Expander allows us to send raw image data that eliminates the last milliseconds of overhead,” said Christophe Mion, Chief Technology Officer at Scalable Graphics. “Thanks to Mellanox’s 100Gb/s ConnectX-4 InfiniBand it’s impossible to determine if the virtual reality PC is local or remote.”
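A back-of-envelope check makes the raw-versus-encoded trade-off concrete. The headset specs below are assumed (a typical 2016 Vive/Rift-class display), not taken from the article:

```python
# Assumed 2016-era headset specs (Vive/Rift class), not from the
# article: 2160x1200 combined resolution, 90 Hz refresh, 24 bpp.
width, height, fps, bits_per_pixel = 2160, 1200, 90, 24

raw_gbps = width * height * fps * bits_per_pixel / 1e9
print(f"uncompressed stream: {raw_gbps:.2f} Gb/s")  # ~5.60 Gb/s

# ~5.6 Gb/s overwhelms 1 GbE and strains 10 GbE (hence H.264 there),
# but occupies under 6% of a 100 Gb/s EDR InfiniBand link, so raw
# frames can be streamed without encode/decode latency.
```

Under these assumptions the raw stream fits in the InfiniBand link with large headroom, which is what lets the VR-Link Expander skip the codec entirely.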

“Expanding industry markets are rapidly adopting virtual reality as a business and training tool in the enterprise, as well as consumer segments,” said Scot Schultz, Director HPC/Technical Computing at Mellanox. “From creating large scale construction of a new campus, to emergency response training and education in health-care and military applications, Mellanox InfiniBand has ultra-low latency and the high bandwidth needed to drive real-world VR applications.”



From: BeenRetired, 11/14/2016 10:52:24 AM
 
“Transform the Customer Experience by Unlocking the Value of IoT Data”....................................

This is just the start…of IoE bonanza.

Cymer/HMI/Zeiss/ASML.



Oracle Service Cloud Enables Brands to Transform the Customer Experience by Unlocking the Value of Internet of Things Data

Mon November 14, 2016 8:00 AM|PR Newswire|About: ORCL

PR Newswire

REDWOOD SHORES, Calif., Nov. 14, 2016 /PRNewswire/ -- Oracle today announced an innovative new solution that enables brands to quickly and efficiently leverage insights from the Internet of Things (IoT) to power smart and connected customer service experiences. Powered by a packaged integration between Oracle Service Cloud and Oracle IoT Cloud, the new solution helps brands enhance the customer experience, increase operational efficiency and reduce costs by using IoT data to predict customer needs and proactively address customer service issues.

The explosive growth of the Internet of Things gives organizations the opportunity to deliver innovative new services faster and reduce risk by connecting, analyzing and integrating data-driven insights from connected "things" into business processes and applications. To help brands capitalize on this opportunity to drive next-generation customer service experiences, Oracle (ORCL) has introduced a new packaged integration between Oracle Service Cloud and Oracle IoT Cloud. The new IoT Accelerator is an open source integration that also includes implementation documentation to easily configure, extend and deploy.

"The Internet of Things is fundamentally changing the way consumers interact with brands and in the process, it is creating volumes of data that organizations can leverage to transform the customer experience," said Meeten Bhavsar, senior vice president, Oracle Service Cloud. "By delivering a packaged integration between Oracle Service Cloud and Oracle IoT Cloud, we are able to accelerate the time to value, while lowering the complexity of IoT projects. For brands, this also means they can easily take advantage of IoT data and make it actionable across engagement channels to deliver exceptional customer service experiences."

Oracle Service Cloud helps brands seamlessly integrate IoT device data into existing omni-channel operations. For example, Denon + Marantz, a leading provider of premium branded equipment, is leveraging customer insights from more than 200,000 connected devices globally to deliver personalized, positive and consistent customer experiences worldwide.

"Denon and Marantz products have always provided a high quality, immersive musical experience and with IoT data, we now have the opportunity to extend that first-class experience to our customer service team," said Scott Strickland, CIO, Denon + Marantz. "Leveraging Oracle Service Cloud's IoT integration capabilities, we have been able to improve our customers' experience and increase our internal efficiency and knowledge base. In addition, we can leverage IoT information via the Oracle Marketing Cloud to target campaigns based on how a consumer actually uses the product and not how we think they use it."



From: BeenRetired, 11/15/2016 12:01:02 PM
 
June convection oven with Tegra processor...............................................................

Everything will get brainier and more connected...bit intense.
Bonanza.
Cymer/HMI/Zeiss/ASML.

Power in a subtle design

June, the company that makes the oven of the same name, was founded by two tech industry vets: CEO Matt Van Horn, who co-founded Zimride (now Lyft), and CTO Nikhil Bhogal, who previously worked at Apple. The Silicon Valley background is evident when you look at the guts of the June oven. The appliance runs on an Nvidia Tegra processor, which companies commonly use in mobile devices. It connects to your home's Wi-Fi so you can control the June remotely from your iOS device and see a live stream of your food as it cooks. A high-definition camera built into the top of the oven makes the live stream possible.



From: BeenRetired, 11/15/2016 12:08:02 PM
 
shill outlet cnbc: AI & Big Data huge... Duh....................................

Today, 2 shill parrots.

Years late. This isn't news. This is ancient history.
toooo busy providing forum for shills' contort n distort (presented as "news").

same old krapp.
different day.
with the best government (regulation) money can buy.



From: BeenRetired, 11/15/2016 12:29:42 PM
 
NVDA: "World’s Most Efficient SuperComputer Powered by Pascal GP100".......................

NVIDIA Unveils DGX SATURNV – World’s Most Efficient SuperComputer Powered by Pascal GP100, Delivers 9.46 Gigaflops/Watt

NVIDIA has announced their latest DGX SATURNV Supercomputer that is designed to build smarter cars and next generation GPUs. The DGX SATURNV is termed as the most efficient supercomputer and utilizes NVIDIA Pascal GPUs.

NVIDIA’s DGX SATURNV SuperComputer Is The World’s Most Efficient – Utilizes Tesla P100 GPUs

The DGX SATURNV is ranked 28th on the Top500 list of supercomputers and is also the most efficient of them all. The supercomputer houses several DGX-1 units, NVIDIA’s custom-designed server built around its Tesla P100 graphics chips. Right now, the most efficient machine on the Top500 list is rated at 6.67 GigaFlops/Watt. The NVIDIA-designed DGX SATURNV delivers an incredible 9.46 GigaFlops/Watt, a 42% improvement.
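The efficiency claim is easy to verify arithmetically from the two figures quoted above:

```python
# Checking the article's efficiency arithmetic.
saturnv_gflops_per_watt = 9.46
previous_best_gflops_per_watt = 6.67

improvement_pct = (saturnv_gflops_per_watt
                   / previous_best_gflops_per_watt - 1) * 100
print(f"improvement: {improvement_pct:.0f}%")  # ~42%, as stated
```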



From: BeenRetired, 11/15/2016 1:43:25 PM
 
"Weebit ReRAM".................................................................................................

...while the slime street shills contort n distort about 20th HDD hotbox PCs...............

Weebit ReRAM technology transferred to Leti

Weebit Nano, the ReRAM start-up, has transferred its SiOx ReRAM from Rice University’s facilities in Houston, Texas, to Leti’s pre-industrialisation facility in Grenoble, France.

By David Manners 15th November 2016

Weebit Nano was founded in 2014 to develop a memory technology invented by Professor James Tour of Rice University with the potential to be 1000 times faster, more reliable, more energy-efficient and cheaper than flash.

Initial SiOx experiments at Leti’s pre-industrialisation facility confirm that Weebit’s nano-porous SiOx process is reproducible.

The next step is the development of a 1,000 bit array, followed by the development of a 1-million-bit array, which Weebit expects will demonstrate the ability to produce memory components for mass-storage applications.

Leti is expected to release a detailed report on the development process and optimisation of the technology in Q1 2017.

The report will outline plans to continue the development of the SiOx technology towards the creation of a 40nm ReRAM cell, which is expected in late 2017.

Weebit believes that achieving this milestone will open discussions with leading players in the semiconductor industry and pave the way towards commercialisation.

Once commercialised, Weebit’s technology will enable devices such as smartphones to have capacities of more than 1 terabyte (TB). The aim is to replace Flash.

Weebit Nano re-listed on the Australian Stock Exchange in August after completing its reverse takeover of iron ore company Radar Iron and raising A$5.04 million in a capital raise.

The money is going towards R&D and fabrication of the Weebit technology, sales and marketing, business development, and expenses associated with the acquisition of Radar Iron.

The company’s executives are based in Tel Aviv, Israel, as is its internal R&D team.

David Perlmutter, formerly of Intel, is Chairman of Weebit.



From: BeenRetired, 11/15/2016 2:07:06 PM
 
"XiP memory...ultimate solution for intelligent IoT".................................................

IoE stuff will simply explode in the 21st.
This is just the start....of the bit intense Age.
The EUV/ArF Age.
Cymer/HMI/Zeiss/ASML.

EcoXiP – System Accelerating NVM
Overview

Designed from the ground up to solve the challenges of XiP memory designs, Adesto’s new EcoXiP (ATXP Series) is the ultimate solution for intelligent IoT systems.

EcoXiP non-volatile memory replaces expensive, energy-inefficient architectures, making power and performance trade-offs unnecessary in a wide range of connected devices. EcoXiP more than doubles processor performance, lowers system power consumption and reduces system cost.

EcoXiP also offers users a range of power management features that provide the best standby power available in a XiP memory solution and features enhanced security with One-Time Programmable security registers.



From: BeenRetired, 11/16/2016 6:39:34 AM
 
Google big machine learning, AI push by adding GPUs.....................................................................

"It will be able to run more efficiently thanks to the addition of GPUs to the CPUs."
Bit intensity....on steroids.
Nuff said.
Cymer/HMI/Zeiss/ASML.

Google is making a big machine learning and AI push in cloud services

Today, Diane Greene, the SVP for Google Cloud, announced a new push in machine learning and AI. There’s a new group under her division that will unify some of the disparate teams that had previously been doing machine learning work across Google’s cloud. Two women will take charge of the new team: Fei-Fei Li, who was director of AI at Stanford, and Jia Li, who was previously head of research at Snap, Inc. As Business Insider notes, Fei-Fei Li was one of the minds behind the Snapchat feature that lets you attach emoji to real-world objects in your snaps.

The news came at the top of a slew of further announcements about the product roadmap for Google’s cloud services and how they’re expanding their use of machine learning. The announcements were all aimed at showing how Google’s cloud services include more than just renting time on a server — that it can provide services to its enterprise customers that are based on its machine learning algorithms. Those services include easier translation, computer vision, and even hiring.

For example, Google is talking up how it’s improving the infrastructure for Google Cloud. It will be able to run more efficiently thanks to the addition of GPUs to the CPUs its system already uses. Graphical processors are especially good at training machine learning systems more quickly. Google has also added some security layers to the GPU, something it claims isn’t necessarily common on other cloud platforms. So, Google says, there won’t be any data from a previous customer sitting in any of the GPU’s caches when the next customer starts spinning it up for their tasks. They’ll be available in 2017.

Google is also unifying its “cloud vision” API so the same system will be able to identify logos, landmarks, labels, faces, and text for OCR — making it simpler to implement. These systems will run on “Tensor Processing Units,” new hardware that’s optimized for Google’s TensorFlow platform. Google had previously unveiled the TPUs, but the new news today is that it’s cutting the price for “large scale deployments” by 80 percent.

Its natural language API is now globally available. It will be able to detect more “granular sentiment” in English, Spanish, and Japanese and also more “entity types” than it had in beta. Google’s natural language analysis will be able to handle morphology and syntax analysis. There’s also a new “premium translation” service.

Finally, Google is also introducing a new machine learning-based "Jobs API," which will apparently assist companies in doing massive "burst hiring" of hundreds of new employees. It allows computers to match up job openings with potential hires. Career Builder and Dice are signed up to use it, as is FedEx, Google says.



From: BeenRetired, 11/16/2016 6:48:07 AM
 
Google PhotoScan...............................................................................

This is just the start...of The Mother of All Paradigm Shifts.
And,...
It will be very, very bit intense.
Cymer/HMI/Zeiss/ASML.

On the surface, Google Photos has a simple mission: to store all your pictures. Specifically, Google says it wants the service to be a home for all of your photos, and today that mission expanded to encompass the old photos you took on a point-and-shoot back in the '90s. A new app called PhotoScan was just released for iOS and Android, and it promises to make preserving the memories in your old printed photos much easier. That's not all — Google also released a number of updates and refinements to the core Photos app as well. PhotoScan is definitely the star of the show, though. According to engineers from Google who showed the app to the press earlier today, PhotoScan improves on the old "photo of a photo" technique that many now use to quickly get a digital copy of old prints; it's also a lot cheaper than sending pictures out to be scanned by a professional and a lot more convenient and faster than using a flatbed scanner.

When you open up the PhotoScan app, you're prompted to line up your picture within a border. Once you have the picture aligned, pressing the scan button will activate your phone's flash and start the process of getting a high-quality representation of the photo. Four white circles will appear in four different quadrants of the image; you'll be prompted to move your phone over each dot until it turns blue — once all four dots are scanned, the app pulls together the final image.

When moving the phone to scan each dot, the app is taking multiple images of the picture from different angles to effectively eliminate light glare — something Google cited as the biggest culprit that ruins digital pictures of photo prints. In practice, in Google's tightly controlled settings, it worked perfectly. It was easy to see how the lights in the room cast glare on the photo print and equally obvious how the app managed to eliminate it in the final scan. It's a bit of an abstract process to describe, but it worked like a charm. We'll need to test it further outside of Google's demo area, but early results were definitely encouraging.
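The multi-angle trick described above can be sketched in a few lines. This is an illustrative approximation, not Google's actual pipeline: once the angled captures are registered, specular glare can be suppressed by taking the per-pixel minimum across shots, since glare is a bright highlight that moves with the camera and so at each pixel at least one angle should be glare-free.

```python
# Illustrative sketch (not Google's actual pipeline): suppress glare
# by keeping the darkest observation of each pixel across several
# pre-aligned captures taken from different angles.
import numpy as np

def merge_without_glare(aligned_shots):
    """aligned_shots: list of HxWx3 uint8 arrays of the same print."""
    stack = np.stack(aligned_shots, axis=0)
    return stack.min(axis=0)  # glare is bright, so min() rejects it

# Toy example: a uniform gray print, with the simulated highlight
# landing on a different pixel in each of four shots.
shots = [np.full((4, 4, 3), 120, np.uint8) for _ in range(4)]
for i, s in enumerate(shots):
    s[i, i] = 255  # moving specular highlight
clean = merge_without_glare(shots)
print(clean.max())  # 120: every highlight removed
```

A real pipeline would also need robust alignment and would blend rather than hard-select to avoid darkening shadows, but the minimum-over-angles idea is the core of why the four-dot capture works.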

The app also offers you the ability to adjust the crop to remove any hint of the background surface peeking into the photo, but it's otherwise a pretty minimal experience. Once you're done scanning, the app prompts you to save your scans. They're saved directly to your phone's storage; you can then upload them to Google Photos or the backup service of your choice. Google specifically said that it wanted this app to exist outside of Google Photos so that people could scan images and use whatever service they want to back them up.

Beyond PhotoScan are some noteworthy additions to the proper Google Photos app. The biggest change here is that there are a host of new photo-editing options on board. The Google+ app actually used to have a pretty robust set of editing options, but when Photos was liberated as a standalone app, the editing features were significantly culled.

As of today, Google Photos for both iOS and Android has an entirely redesigned set of editing tools and filters. The "auto enhance" feature, which tweaks brightness, contrast, saturation, and other characteristics of your photo, has been improved thanks to the machine learning technology that is at the core of nearly all of Google's products. It can look at a photo and recognize what a photo editor might do to try to improve the image. Auto Enhance has long been a pretty solid feature, so seeing it continue to get smarter and better is definitely a good thing.

If you want to make further adjustments, the simple "light," "color" and "pop" sliders that were in the previous Google Photos app have been greatly expanded. Now, you can tap a triangle next to "light" or "color" to see a view with a host of more granular editing tools like exposure, contrast, highlights, saturation, warmth and so on. Those tools aren't right in your face, so people who don't want to dive in can still make adjustments — but those who really want to go deep on editing their pictures will surely appreciate the option. I used to be a big fan of the Google+ photo editing tools, so seeing these features come back is very welcome.

Google called out two of those adjustments in particular as things that only it can do with its vast store of photographic information. A new slider called "deep blue" saturates blues in an image like the sky or water to make them more vibrant, and it knows to specifically target those hues while leaving others unchanged. There's also a skin tone filter that can adjust saturation specifically on a subject's skin without altering the rest of the image. Other editing programs have similar filters, but Google says that this one is particularly accurate because of the millions of photos it has analyzed — it just has a better sense of what is skin and what isn't than other editors.
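A hue-targeted saturation boost of the kind the "deep blue" slider performs can be sketched with the standard library. This is a hypothetical fixed-mask version; Google's real filter is learned from its photo corpus, and the band boundaries below are assumptions:

```python
# Hypothetical sketch of a hue-targeted saturation boost: raise
# saturation only where the hue falls in an assumed "blue" band,
# leaving skin tones and other hues untouched. Not Google's filter.
import colorsys

def deep_blue(rgb, boost=1.5, band=(0.5, 0.75)):
    """rgb as floats in 0..1; band is a hue range covering blues."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    if band[0] <= h <= band[1]:
        s = min(1.0, s * boost)  # saturate blues only
    return colorsys.hsv_to_rgb(h, s, v)

sky = (0.5, 0.6, 0.9)    # pale blue: hue ~0.62, inside the band
skin = (0.9, 0.7, 0.6)   # warm tone: hue ~0.06, left unchanged
boosted = deep_blue(sky)
print(round(colorsys.rgb_to_hsv(*boosted)[1], 2))  # 0.67, up from 0.44
```

The point of the example is the selectivity: the operation is applied per hue band, so skies deepen while skin tones pass through unchanged, which matches the behavior the article describes.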

Lastly, Google added 12 new filters (of course it did) that take advantage of machine learning to be a little smarter than the average option. Rather than always slapping a default set of adjustments on a picture, Google Photos will make subtle improvements to the image first — it sounds like a combination of auto enhance as well as a filter. But those enhancements will be optimized to work well with the filter you're adding. It sounds nice, and the filters looked good on the images Google was showing off, but we'll need to spend some time playing around with it to see if they're really any better than what Instagram already offers.

Editing is the main addition to Google Photos, but there are a few other improvements here as well. If you're invited to a shared album, the app will prompt you with suggestions from your own photos to add. It's another place where Google's machine learning comes into play. And the movie maker, which can automatically select related photos and set them to a soundtrack, will be gaining some new event-focused options in the coming months.
