 Technology Stocks | New Technology


From: FUBHO | 6/28/2011 1:12:15 AM
 
Brain-like processors created, with simultaneous data processing and storage

Posted by Abhinav Lal, Jun 27, 2011 16:59:31 IST

thinkdigit.com



From: FUBHO | 7/17/2011 9:12:37 PM
 
Fabric Engine brings native performance to the browser. Today’s devices use multi-core and hybrid architectures, yet the browser taps very little of that power. Fabric Engine enables developers to build high-performance applications for the web, so the browser is no longer constrained to basic web apps: Fabric opens up the possibility of full multi-threaded, 3D graphical applications running in the web browser. Fabric applications use regular HTML5 and JavaScript for the UI, but rely on a high-performance multi-threading engine to achieve blazing speeds.

fabric-engine.com




From: FUBHO | 7/21/2011 1:02:46 PM
 
The next transistor: planar, fins, and SoI at 22nm

Ron Wilson
7/19/2011 3:59 PM EDT

eetimes.com

The race is on to redefine the transistor. Process developers working on 22/20nm logic processes appear to be scrambling to introduce new kinds of transistors for this node. Intel has made a huge fanfare over its tri-gate device. Many researchers are pushing finFETs. A powerful group of mainly European organizations, including ARM and US-based Globalfoundries, is serious about fully-depleted SoI (fdSoI). And recently, start-up SuVolta and Fujitsu described yet another alternative.

All this might appear fascinating for device designers, and irrelevant to chip designers. But decisions on transistor design will have profound downstream impacts—from the craft of cell design to the work of physical-design teams, and even to the logic designer’s struggles with power and timing closure.

What’s the problem?
Why are process engineers so determined to upset the apple cart? The short answer is short-channel effects. Pursuit of Moore’s Law has continually shrunk the channel length of the MOSFET. This contraction improves transistor density and, other factors fixed, switching speed. The problem is that shortening the channel plays havoc with those other factors—about a dozen different havocs, actually, that get lumped under the label of short-channel effects. Most of them can be summarized by a generalization: as the drain gets closer to the source, it gets harder and harder for the gate to pinch off the channel current (figure 1). The result is sub-threshold leakage current.


Figure 1. Short-channel effect eats away at the gate's control over the channel.
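
To put numbers on that generalization: sub-threshold current rises exponentially as the threshold voltage erodes. Here is a minimal sketch of the textbook sub-threshold model, with every device value assumed purely for illustration rather than taken from any real process, showing how drain-induced barrier lowering (DIBL) in a short channel eats Vth and multiplies off-state leakage:

```python
import math

def subthreshold_leakage(vth, vds, i0=100e-9, n=1.5, temp=300.0):
    """First-order sub-threshold current (A/um) at Vgs = 0.

    i0 (current at threshold) and n (sub-threshold slope factor)
    are assumptions for illustration, not any real process's values.
    """
    vt = 8.617e-5 * temp  # thermal voltage kT/q, ~25.9 mV at 300 K
    return i0 * math.exp(-vth / (n * vt)) * (1 - math.exp(-vds / vt))

# DIBL: the drain robs the gate of control, lowering the effective Vth.
for dibl_mv in (0, 50, 100, 150):        # threshold droop from DIBL, in mV
    vth_eff = 0.35 - dibl_mv / 1000.0    # nominal Vth assumed at 350 mV
    print(f"DIBL {dibl_mv:3d} mV -> I_off ~ {subthreshold_leakage(vth_eff, 1.0):.2e} A/um")
```

With these assumed numbers, every ~90 mV of threshold loss costs about a decade of off-current, which is why a short channel's few hundred millivolts of lost gate control matter so much.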

This battle against leakage current has been going on since at least the 90 nm node. The point of the whole high-k/metal-gate (HKMG) transition was to give the gate more control over the channel current without letting gate leakage get out of control. But by the 22 nm node, many are arguing, the planar MOSFET will have lost that war. There will be no way to deliver adequate leakage control at adequate performance. “With HKMG we addressed gate leakage,” one expert said. “Now we have to address channel leakage.”
Planar one more time?

Not everyone agrees that the planar MOSFET is history. Principal among the dissenters is TSMC, which stated in February that it would use planar transistors in its 20 nm foundry process. There are strong arguments for this position, also held—with one major caveat—by Globalfoundries.
Designers are familiar with short-channel planar MOSFETs, for all their shortcomings. This should make rescaling of cell libraries and hard IP blocks relatively straightforward. Leakage and threshold variations may be worse than at 28 nm, but the design community has tools, including aggressive power management, variation-tolerant circuits, and statistical timing analysis, to cope with these problems. And when all the issues are on the table, a foundry must do what its lead customers—FPGA vendors, networking IC giants, and to some extent ARM—ask of it.

Still, there is much skepticism. “TSMC stated that they would use a replacement-metal-gate planar process at 20 nm,” observed Novellus vice president Girish Dixit, “but that determination may have changed. HKMG can control leakage, but a planar transistor will still have inferior I-on/I-off characteristics.” If TSMC’s early adopters find themselves at a competitive disadvantage because of the planar transistor, they may force the giant into a finFET half-node. The confrontation would most likely arise in the mobile market, where ARM’s fabless silicon partners will face competition from Intel’s Atom processor, newly rejuvenated by that company’s 22 nm tri-gate process.

The rise of the fin

The next-transistor debate matriculated from a decade in the cloistered but technically accurate halls of process-engineering conferences to the public forum with Intel’s May announcement of its 22 nm, so-called tri-gate process. The roll-out, probably intended to counter ARM’s growing momentum in the mobile space rather than to advance the discussion in circuit design, significantly lowered the signal-to-noise ratio of public discussion about new transistor technology.

Intel’s tri-gate device is a finFET, pure and simple. Industry experts dismiss Intel’s attempts to claim a significant difference. As such, it is one instance of a decade-old, industry-wide attack on short-channel effect—an effort that began at industry consortium IMEC at about the same time as it did at Intel. “Everyone in the industry has been developing finFET technology,” one process expert said. “The difference is in what they have chosen to announce.”

All finFET programs—indeed, all the next-transistor approaches—rest on a single concept: the fully-depleted channel. Loosely, the idea is to give the gate so much control over the electric field in the channel that it can deplete the channel of carriers entirely. This, of course, eliminates the dominant conduction mechanism in the channel and in effect turns the transistor off. But how to do that? In a planar device, the depth of the channel, and effects from the junction formed between the drain and the silicon around it, alter the electric field in the channel and interfere with depletion. Somehow you have to make the channel thin enough, and far enough from the drain junction, to permit the gate to fully deplete the conduction region.


The finFET solution is to stand the channel on its edge, above your choice of either the silicon surface or an insulating oxide layer, and to drape the HKMG gate stack over the resulting fin like a wet blanket. This fin-shaped channel is very thin (figure 2), and, working from three sides, the gate can successfully create a depletion region that blocks the channel entirely.


Figure 2. Fin structures can be incredibly complex and delicate.

The finFET gives circuit designers a V-I curve they’ve only been able to dream about since 130 nm. But it also brings issues. One is simply building the devices. “Making the fins, and preserving them through subsequent processing steps, are hard tasks,” warned Applied Materials Silicon Systems Group vice president and CTO Klaus Schuegraf. “You must etch over the edges of tall structures, uniformly dope complex 3D surfaces, and lay down all the different films in the gate stack so that they conform exactly to the surface of the fin. These requirements bring about many changes in materials, and some changes in equipment. The number of mask layers won’t change much, but the number of processing steps will certainly go up.”

Fins and the rest of us

There will be issues for chip designers as well. The fin width will be the minimum process dimension. In order to form the fins, a double-patterning lithography technique—probably spacer-defined—will be mandatory. Double-patterning, in turn, will impose “very restrictive design rules,” Schuegraf said. Intel director of components research Mike Mayberry added clarification: “Most of the design rules are litho-dominated. Once you can make features at 22 nm, there are few rules that are specific to the tri-gate structure.”

FinFETs will bring changes for circuit designers, too. The most obvious one is that you can’t change the width or height of a fin to increase drive current. “One fin is one quantum of drive current,” Mayberry said. The height of the fin is determined by a polishing step, and so is constant—as much as possible—across the wafer. But the width of the fin isn’t flexible either.

This limitation, according to Dixit, is not simply a lithography restriction: widen the fin to get more drive, and the threshold voltage starts to roll off, so you change Vth along with the current. Incidentally, this also means that any variation in line width at minimum geometry, just like any variation in polish depth during fin formation, translates into threshold variation at the transistor level.

To get higher current, you put more fins in parallel. Of course, only being able to change drive current in fixed increments will be a new limitation for circuit designers, especially in the custom analog world. But Intel is not worried. “We’ve modeled tri-gate circuits extensively in both switch and amplifier applications, and we believe very few circuit designs will require modifications,” Mayberry said. Others are less sanguine. “For high current you have to parallel the fins,” said IMEC executive vice president of business development Ludo Deferm. “But that requires interconnect between the transistors, and at high frequencies the interconnect resistance becomes a factor in circuit performance.”
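
The “quantum” is easy to see in first-order arithmetic: the effective width of a fin is its gate-wrapped perimeter, roughly two sidewalls plus the top, so total drive current scales only in whole-fin steps. A sketch, with fin dimensions and per-micron drive assumed purely for illustration:

```python
def fin_drive_current(n_fins, fin_height_nm=34.0, fin_width_nm=8.0, i_per_um=1.0e-3):
    """Drive current (A) of a multi-fin device.

    Effective width of one fin = two sidewalls plus the top: 2*H + W.
    All dimensions and the 1 mA/um drive are illustrative assumptions.
    """
    w_eff_um = (2 * fin_height_nm + fin_width_nm) / 1000.0
    return n_fins * w_eff_um * i_per_um

# Drive comes in fixed increments: one fin, two fins, three...
for n in range(1, 5):
    print(f"{n} fin(s): ~{fin_drive_current(n) * 1e6:.0f} uA")
```

The model also shows why fin-height polish variation and fin-width litho variation feed straight into drive and threshold variation: H and W are the only knobs, and neither is adjustable per device.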

Another route to full depletion
The supporters of fully-depleted silicon-on-insulator (fdSoI) argue that they can offer the V-I characteristics of finFETs without the problems. The fdSoI transistor is a simple planar MOSFET fabricated in the ultra-thin layer of undoped silicon atop the buried oxide layer of an fdSoI wafer. The device has many advantages: it is a conventional MOSFET with width scaling and no memory effect, and, according to Leti laboratory leader Olivier Faynot, it offers your choice of either 60 percent more speed or 50 percent lower power at circuit level compared to competitive processes.

Perhaps more significant is threshold voltage control. Because the fdSoI channel is undoped, there is no problem with variations in channel doping causing threshold variations—an issue that plagues both planar and fin devices as fewer and fewer dopant atoms go into the channel. Further, there is the issue of providing multiple threshold voltages in the process. Planar and fin FETs must change threshold voltage by changing doping level: a process complexity for planar, and probably completely infeasible for fins. But fdSoI, Faynot said, can control threshold voltage dynamically by applying a back-bias voltage to the underside of the channel through the ultra-thin buried oxide.
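
To first order, the back gate is simply a second, weaker gate coupled through the buried oxide, so the threshold shifts in proportion to the ratio of the two oxide thicknesses. A sketch of that capacitive-divider estimate; the thicknesses are assumptions for illustration, and a real calculation would also include the silicon-film and depletion capacitances:

```python
def vth_shift_mv(v_back_bias, t_front_ox_nm=1.2, t_buried_ox_nm=12.0):
    """First-order Vth shift (mV) from back bias in an fdSoI device.

    Models the substrate as a back gate coupled through the buried
    oxide: coupling ~ C_box / C_ox = t_ox / t_box for like dielectrics.
    The 1.2 nm / 12 nm thicknesses are illustrative assumptions.
    """
    coupling = t_front_ox_nm / t_buried_ox_nm
    return -coupling * v_back_bias * 1000.0

for vbb in (-1.0, 0.0, 1.0):
    print(f"Vbb = {vbb:+.1f} V -> dVth ~ {vth_shift_mv(vbb):+.0f} mV")
```

With an ultra-thin buried oxide, on the order of 100 mV of threshold shift per volt of back bias is available, enough to trade leakage for speed dynamically rather than baking the choice into channel doping.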

Standing against all these advantages are three relatively non-technical issues. First, fdSoI wafers are more expensive than conventional wafers. But last week wafer vendor Soitec distributed a report from analyst IC Knowledge claiming that because of the significantly simpler processing to provide multiple threshold voltages on fdSoI wafers, the total cost of a processed wafer at 22/20 nm would be no greater for fdSoI than for planar or finFET processes.

Second, there is risk. Soitec is the only source of wafers for fdSoI, and creating the wafers requires executing the company’s oxide-deposition, wafer-splitting, and polishing steps with atomic-level precision. Soitec delivers the wafers with a uniform 12 nm film of silicon over a similarly thin buried oxide layer.

Third, there is inertia. Many senior decision-makers won’t consider anything that’s called SoI. Still, some companies will press ahead. AMD, through its Globalfoundries connection, IBM, and ST are probably committed to fdSoI at 22nm. In fact Globalfoundries, which in the past has not aggressively marketed SoI to customers not already using the process, may use fdSoI as an ace up the sleeve to counter pressure from Intel and to trump TSMC. Some fabless IC vendors who have already used partially-depleted SoI, such as Broadcom, are likely to listen to this argument. Beyond this core, though, “fdSoI may just not get the attention,” one insider worried.

There is one more announced player in this race. SuVolta recently announced a process in which the start-up uses deposition to create a buried junction under the channel of a conventional bulk planar MOSFET. Reverse-biasing this junction creates a depletion region under the channel that in effect mimics the buried oxide layer of fdSoI, thinning the active region of the channel until the gate can almost fully deplete it.

The SuVolta technology is interesting, but not widely known outside a few non-disclosure partners of the start-up company. Consequently, there has been no independent verification of the characteristics of the SuVolta mostly-depleted transistor. Nonetheless, this may be an important alternative for smaller fabs—not unlike Fujitsu—that haven’t the funding to enter the finFET race and don’t want to pay the extra initial cost for fdSoI wafers.

So there are the players. TSMC seems committed to supplying a planar 20 nm process, at least to its initial customers. But it may do a quick revision and offer a finFET option for mobile applications well before it releases a 16 nm process. Intel is clearly committed to its finFET. IBM and parts of the Globalfoundries and ST capacity at 22 nm will likely be using fdSoI. Fujitsu will probably continue to exploit its joint development with SuVolta. How the other players line up will undoubtedly depend on customer demands and on early process learning from the major players. If 28 nm proved anything, it is that the course of new process technology doesn’t often run smooth.



From: FUBHO | 9/1/2011 12:12:56 PM
 

3D ICs without TSVs?
Ron Wilson 8/30/2011 8:46 PM EDT
Two assumptions have become accepted truths in SoC planning: first, that the way forward involves 3DICs; and second, that 3DICs require through-silicon vias (TSVs). One result has been a tremendous focus on the challenges of TSVs, and a growing realization of just how formidable those problems really will be in production. But what if the first assumption is true and the second one is false? One of the hottest water-cooler topics at Semicon West this year was a PowerPoint pitch from a new venture called MonolithIC 3D. In it, the company argued exactly this point.

“Logically, there are only two ways to make 3DICs,” explained MonolithIC 3D founder Zvi Or-Bach. “You can use TSVs to stack up prefabricated dice, or you can do monolithic 3D—successive layers of transistors and interconnect on one substrate. But everyone knows you can’t really do monolithic 3D in practice, because the temperatures you need to form the second layer of transistors destroy the interconnect stack on the first layer.”

Or-Bach’s claim is that there is a third alternative—a middle path, if you will. Essentially, Monolithic 3D proposes to use SmartCut technology—the ion cutting process that Soitec uses to make SoI wafers for AMD and IBM—to stack up consecutive layers of active silicon, the way a deli manager stacks up salami on a sandwich. How to do this without damaging the underlying layers is Monolithic 3D’s special—not secret, but heavily patented—sauce.

First, ion cutting may require some explanation. The key idea is that if you implant a thin layer of H+ ions into a single crystal of silicon, the ions will weaken the bonds between the silicon atoms in their area, creating a fracture plane (figures 1 and 2). Judicious force will then precisely break the wafer at the plane of the H+ implant, allowing you, in effect, to peel off an arbitrarily thin layer. Monolithic 3D proposes this SmartCut technique to create and stack up layers of silicon.

SmartCut technology starts with implanting a layer of hydrogen ions into the silicon. The process ends with bonding the wafers together and cleaving off the donor wafer.

But a stack of thin wafer slices is a long way from a 3DIC. Monolithic 3D has proposed a number of ways of doing the rest of the job without cooking or mechanically damaging the bottom layers.

All the approaches start by fabricating a conventional wafer, up through the completed top interconnect layer and passivation, to serve as the bottom wafer of the 3D stack. This first wafer can employ pretty much any silicon technology, including HKMG transistors and ultra-low-k dielectric materials. The only thing non-standard about this first wafer is that the top metal layer will include landing pads for vias coming down from subsequent layers.

The next step can vary enormously, depending on what you want to build. One of the simplest possibilities, perhaps more useful for illustration than for production, is the following. On a second wafer, diffuse in a buried N+ layer, leaving a P- surface layer above it. Grow an oxide on the surface, and activate the dopants with a high-temperature cycle. Now do the SmartCut: implant hydrogen ions into a thin plane part-way through the buried N+ layer. Then turn this prepared donor wafer over and do an oxide-oxide bond to the completed base wafer. Cleave the top wafer at the plane defined by the H+ implant. You now have a layer cake: a completed wafer on the bottom, a thick layer of oxide where the two wafers bonded together, a thin layer of P- silicon, and on top a thin layer of N+ silicon.

At this point you can polish the surface and, using only low-temperature processes, fabricate recessed-channel transistors (RCATs) similar to the array transistors used in DRAM cells. Above the transistors you fabricate a second conventional interconnect stack, and you are done (figure 3). You can repeat as desired to form additional active layers.

Figure 3. Low-temperature processing can create RCATs in the top silicon slice.

This is just one example. Or-Bach and chief scientist Deepak Sekar described a similar sequence—but forming transistor sites on the donor wafer and using a carrier to transfer the slice right-side up—that allows a gate-last, HKMG process on both the donor slice and the base wafer (figure 4). The two have outlined other ideas as well, including 3D DRAM, 3D FPGAs, monolithic displays and image sensors, and fully-redundant logic circuits.

Figure 4. By preparing transistor sites on the donor wafer, you can create multiple levels of HKMG transistors.

There are three common factors in all these ideas. First, there are no new materials or unusual process steps required. Second, you do all the high-temperature processing for the donor wafer before bonding the wafers and cleaving, so the base wafer is never driven outside its thermal budget. And third, you keep the added layer of silicon so thin that vias all the way through to the base wafer’s top interconnect layer need be no deeper than a typical isolation trench. This allows you to form the vias with conventional moderate-aspect-ratio etch technology, allowing nearly arbitrary placement and potentially—depending on how you handle wafer alignment—very high density.
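
Rough numbers, all assumed for illustration, show the payoff of keeping the transferred layer thin:

```python
def aspect_ratio(depth_um, diameter_um):
    """Depth-to-diameter ratio of a via: what the etch has to achieve."""
    return depth_um / diameter_um

# Monolithic 3D: the via crosses only a ~100 nm silicon slice plus
# bond oxide, comparable to an isolation trench (assumed dimensions).
mono = aspect_ratio(depth_um=0.2, diameter_um=0.05)

# Conventional TSV: through tens of microns of thinned bulk silicon
# (assumed dimensions typical of the era's TSV proposals).
tsv = aspect_ratio(depth_um=50.0, diameter_um=5.0)

print(f"monolithic via: AR ~ {mono:.0f}:1, footprint ~ {0.05**2:.4f} um^2")
print(f"TSV:            AR ~ {tsv:.0f}:1, footprint ~ {5.0**2:.0f} um^2")
```

The four-orders-of-magnitude footprint difference is what makes “nearly arbitrary placement” and very high via density plausible.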

To be sure, there are unresolved questions. Monolithic 3D is showing PowerPoints, not finished 3DICs. Sekar points to the stability of modern ultra-low-k interconnect stacks during wafer bonding, cleaving, and polishing operations as one such question. Others might include transistor variations induced by internal stresses in the silicon slices, and temperature management in the microscopically thin silicon slices sandwiched between two thermally insulating interconnect stacks.

The only way to answer such questions, Or-Bach and Sekar said, is to make real chips.



From: FUBHO | 9/7/2011 2:52:30 AM
 


e-beam lithography precision at optical lithography speed: Complementary lithography breaks the NGL logjam


By David K. Lam
Multibeam Corporation


September 6, 2011 -- What is semiconductor lithography’s current state? Cost is rising, debate is raging, and a solution is wanting. The chip manufacturing industry has long expected optical lithography to reach resolution limits, eventually, as IC features shrink below 193nm, the wavelength of ArF lithography. Since 1999, program after program has sought a new lithography with extreme ultraviolet (EUV) light at 13.5nm to enable continued scaling of ICs. While 193nm lithography overcame a multitude of sub-wavelength patterning challenges, optical-as-usual became increasingly complex and costly. EUV was designated as the next-generation lithography (NGL) for high-volume manufacturing (HVM).
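
The resolution argument underneath all of this is the Rayleigh criterion: the minimum printable half-pitch scales as k1 x lambda / NA, which is why a fourteen-times-shorter wavelength relaxes the patterning gymnastics even at modest numerical aperture. A quick sketch, with NA and k1 values assumed as ballpark figures for tools of this era:

```python
def half_pitch_nm(wavelength_nm, numerical_aperture, k1):
    """Rayleigh criterion: minimum half-pitch = k1 * lambda / NA."""
    return k1 * wavelength_nm / numerical_aperture

# 193 nm immersion at an aggressive single-exposure k1 (assumed values):
print(f"193i: ~{half_pitch_nm(193.0, 1.35, 0.28):.0f} nm half-pitch")

# EUV at 13.5 nm with the modest NA of early tools (assumed values):
print(f"EUV : ~{half_pitch_nm(13.5, 0.25, 0.45):.0f} nm half-pitch")
```

Below the 193i single-exposure limit, optical must resort to double or quadruple patterning, which is exactly the cost spiral described above.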

Today, EUV is not ready for production, though 28nm logic devices are being produced, and 22nm will soon follow. EUV’s delay, following years of outstanding research, numerous technical achievements, and multi-billion dollar investment, is accompanied by very high projected cost-of-ownership (CoO). The future of semiconductor lithography remains uncertain.

The challenges in EUV productization are understandable, and traceable to the technology. The root causes boil down to:

    the success of EUV tool development depends on multiple technological breakthroughs being achieved virtually all at the same time, and
    the success of EUV market deployment requires building out an entirely new support infrastructure.
So, despite EUV’s potential benefits, chip makers concerned about availability and affordability are searching for alternative solutions. Intel’s Yan Borodovsky broke the logjam in 2010 with the concept of complementary lithography. As optical multiple-patterning gets extraordinarily complex and costly for critical layers, another lithography technology could be used to complement optical lithography and pattern only critical layers, proposed Borodovsky. He noted that both EUV and electron-beam lithography (e-beam, EBL) could work hand-in-hand with optical lithography to manufacture advanced ICs.

The complementary technology, EUV or EBL, is not intended to supplant optical but to support it. End users adopting this approach can continue to use existing optical lithography and utilize the investments already made in optical lithography infrastructure. Complementary lithography is the most realistic solution to emerge from the debate on the future of semiconductor lithography.

E-beam’s slow speed is well known, as is its high resolution. E-beam writing systems, rooted in the same technology as the ubiquitous scanning electron microscope (SEM), have been making photomasks for 30 years. Infrastructure to support EBL already exists. In fact, EBL is exceptionally well suited for patterning critical layers:

    Leading logic fabs are adopting 1-D gridded layouts for poly and “1X” metal layers. The unidirectional design layout enables scaling of advanced logic chips;
    “Spacer” techniques in conjunction with lower-cost optical patterning are widely used to double or quadruple line density if needed;
    Critical layers comprise cutting lines (poly and metal) and holes (vias and contacts); EBL excels in cutting with shaped beams, while optical lithography may require quadruple- or octuple-patterning; and
    Cut-patterns have very low feature density, about 5%, which translates into higher throughput if the e-beam is vector-scanned, skipping 95% no-cut areas.
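
That last point is simple arithmetic: if only about 5% of grid sites need a cut, a vector-scanned beam that skips the empty 95% can, in the ideal case, be many times faster than one that rasters everything. A sketch of the best-case bound, with overheads ignored:

```python
def vector_scan_speedup(feature_density):
    """Idealized speedup of vector scan over raster scan.

    A raster system exposes every grid site; a vector system visits
    only the sites that need a cut, so write time scales with density.
    Ignores deflection, settling, and stage overhead: a best case.
    """
    return 1.0 / feature_density

print(f"5% cut density -> up to {vector_scan_speedup(0.05):.0f}x fewer beam visits")
```

Real throughput depends on how fast the beam can hop between cut sites, which is where fast electrostatic deflection and many parallel columns come in.
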
EBL, when used to complement optical lithography, is called CEBL (complementary e-beam lithography). Multibeam’s CEBL vector-scans shaped beams for cutting in critical layers, exploiting e-beam’s strength in resolution and avoiding its weakness in speed. The all-electrostatic column design eliminates magnetic fields, so e-beam columns are small and beam deflection is fast. A multi-column module delivers five wafers per hour. Each column is equipped with an SEM for in-situ, in-process e-beam registration to attain best alignment. CEBL needs no masks, further reducing CoO. Optimized for cutting, this technology plays a limited but crucial role.



From: FUBHO | 10/17/2011 4:19:24 PM
 
Nanomaterial allows computer to rewire itself
Posted on October 17, 2011 - 04:30 by Kate Taylor

Scientists at Northwestern University have developed a new nanomaterial that could allow a computer to reconfigure its internal wiring and become an entirely different device as required.

A single device could, for example, reconfigure itself into a resistor, a rectifier, a diode or a transistor based on signals from a computer. The team has already made some preliminary electronic components.

"Our new steering technology allows use to direct current flow through a piece of continuous material," says professor Bartosz A Grzybowski, who led the research.

"Like redirecting a river, streams of electrons can be steered in multiple directions through a block of the material - even multiple streams flowing in opposing directions at the same time."

The material combines different aspects of silicon- and polymer-based electronics to create what the team says amounts to a new class of electronic materials: nanoparticle-based electronics.

It's composed of electrically conductive particles, each five nanometers in width, coated with a special positively-charged chemical.

The particles are surrounded by a sea of negatively charged atoms that balance out the positive charges fixed on the particles. By applying an electrical charge across the material, the small negative atoms can be moved and reconfigured, but the relatively larger positive particles have to stay put.

By moving the negative atoms around the material, regions of low and high conductance can be modulated to create a directed path that allows electrons to flow through the material.

Old paths can be erased and new ones created by pushing and pulling the sea of negative atoms. Using multiple types of nanoparticles allows the creation of more complex electrical components, such as diodes and transistors.

"Besides acting as three-dimensional bridges between existing technologies, the reversible nature of this new material could allow a computer to redirect and adapt its own circuitry to what is required at a specific moment in time," says graduate student David A Walker.



From: FUBHO | 10/29/2011 10:26:27 AM
 
Productivity Future Vision
youtu.be



From: FUBHO | 11/1/2011 8:55:50 PM
 
More than 48 hacks tracked to one man in China
At least, we assume he was not married

01 Nov 2011 10:22 | by Edward Berridge

More than 48 chemical and defence companies were victims of a coordinated cyber attack that has been traced to one man in China.

Insecurity outfit Symantec found that systems belonging to the hacked outfits were infected with malicious software known as "PoisonIvy." It was designed to steal information such as design documents, formulas and details on manufacturing processes.

Several Fortune 100 corporations that develop compounds and advanced materials were among those hacked. Most of the victims were in the United States and United Kingdom, Symantec said.

The attacks appear to be entirely for industrial espionage, but what is interesting is that they all came from a computer system that was owned by a man in his 20s in Hebei province in northern China.

A literal translation of the guy's pseudonym was "Covert Grove", and Symantec found proof that the same "command and control" servers used to control and mine data in this campaign were also used in attacks on human-rights groups from late April.

At this point it is not possible to tell if Mr Grove is a lone gunman or if he has only an indirect role.

Symantec also could not rule out that Grove is a hired gun working on behalf of another party, particularly, we guess, the Communist Party.

The standard method of attack was to send emails with tainted attachments to between 100 and 500 employees at a company, claiming to be from established business partners or to contain bogus security updates.

When a victim opens the attachment, it installs "PoisonIvy", a Remote Access Trojan that takes control of the machine.




news.techeye.net



From: FUBHO | 11/1/2011 10:40:21 PM
 

Micron / Samsung TSV stacked memory collaboration: a closer look

Samsung Electronics and Micron Technology have created an industry group to collaborate on the implementation of an open interface specification for a new memory technology called the Hybrid Memory Cube (HMC).






More information on the Hybrid Memory Cube (HMC) here.

The stated goal of the Consortium is to “…facilitate HMC Integration into a wide variety of systems, platforms and applications by defining an adoptable industry-wide interface that enables developers, manufacturers and enablers to leverage this revolutionary technology”.
Samsung has had a long list of research and commercial announcements since its initial indications, in 2006, that stacking DRAM was on its roadmap [see, for example: “Samsung presents new 3D TSV Packaging Roadmap”; “New Samsung 8GB DDR3 module utilizes 3D TSV technology”; “3D Integration entering 2011”; “Samsung Wide-IO Memory for Mobile Products: A Closer Look”; “Samsung develops 32GB RDIMM using 3D TSV technology”].

Micron has been working for many years on TSV stacking technology, and earlier this year revealed its intent to enter the stacked DRAM arena with what it called a hyper memory cube [see “Hyper Memory Cube” 3DIC Technology].

It is thus of interest to understand how/why Samsung and Micron have joined forces in this new consortium.

TSV-stacked memory with a controller layer addresses the so-called "memory wall" problem. Essentially, DRAM performance today is constrained by the capacity of the data channel that sits between the memory and the processor. No matter how much faster the DRAM chip itself gets, the channel typically chokes on the traffic. Systems are not able to take advantage of new memory technologies because of this bottleneck – they need more bandwidth.

The HMC, now being called a “hybrid memory cube,” is a stack of multiple thinned memory dice sitting atop a logic chip, bonded together using TSVs. This greatly increases available DRAM bandwidth by leveraging the large number of I/O pins available through TSVs.

The controller layer in the HMC is the key to delivering the performance boost: it allows a higher-speed bus from the controller chip to the CPU, and the thinned, TSV-connected memory layers mean memory can be packed more densely in a given volume. The HMC requires about 10% of the volume of a DDR3 memory module.

The interface in the control layer is totally different from current DDR implementations and thus the need for a consortium of the major players to standardize this interface.


Micron HMC: (A) schematic representation; (B) showing TSVs; (C) the real module

It is claimed that the technology provides 15X the performance of a DDR3 module, uses 70% less energy per bit than DDR3, and takes 90% less space than today’s RDIMMs. Current DRAM burns a large share of the power in laptops and phones; HMC draws less because the wider I/O and greater I/O bandwidth significantly cut the energy needed per bit – roughly 10% of the energy per bit of a DDR3 memory module.



DIMM vs HMC: 160 Gb/sec Equivalence (Courtesy of Micron Technology)

The prototype shown by Micron and Intel is reportedly rated at 128 GB/s. In comparison, DDR3-1333 modules offer a bandwidth of 10.66 GB/s, current DDR3-1600 devices deliver 12.8 GB/s, and DDR4, when commercialized, will reportedly achieve 21.34 GB/s.
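
Those module bandwidths follow directly from transfer rate times the 64-bit module bus. A quick check (standard JEDEC rates; reading the quoted 21.34 figure as DDR4-2667 is an assumption):

```python
def module_bandwidth_gbs(transfer_rate_mt_s, bus_width_bits=64):
    """Peak module bandwidth in GB/s: transfers/s times bytes per transfer."""
    return transfer_rate_mt_s * (bus_width_bits // 8) / 1000.0

for name, rate in [("DDR3-1333", 1333), ("DDR3-1600", 1600), ("DDR4-2667", 2667)]:
    print(f"{name}: {module_bandwidth_gbs(rate):.2f} GB/s")
# -> 10.66, 12.80, and 21.34 GB/s, matching the figures quoted above.
```

At 128 GB/s, the HMC prototype is roughly a twelvefold step over DDR3-1333, consistent with the order-of-magnitude claims made for it.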



HMC performance vs. today’s memory (Courtesy of Intel Developer Forum 2011)

Micron and Samsung will work with fellow founding members Altera, Open Silicon and Xilinx and hopefully others, to bring the technology to market. Specification for the HMC will be finalized next year. Still to be worked out is who manufactures the HMC.

Looking a little closer, we find that Intel has been working closely with Micron on this development. At the recent Intel Developer Forum (IDF 2011), Intel CTO Justin Rattner demonstrated the Hybrid Memory Cube towards the end of his keynote lecture, which can be seen here.

It is not clear at this point whether Intel owns part of the IP, and it is not clear why Intel is not a member of the Micron/Samsung HMC consortium, but Intel certainly had high praise for the technology, which it claims will allow it to continue to “improve the interconnect within computer systems so that communication between the microprocessor, DRAM, storage and peripherals is faster and lower power with each successive generation.” Rattner also stated: “This hybrid-stacked DRAM, known as the Hybrid Memory Cube (HMC), is the world’s highest bandwidth DRAM device with sustained transfer rates of 1 terabit per second. It is also the most energy efficient DRAM ever built.”

The industry always needs multiple sources for broad adoption. The cross-license agreements that exist between Samsung and Micron [here] and Samsung and Intel [here] probably made the formation of this consortium easier.



From: FUBHO | 11/2/2011 8:06:13 PM
 
The end of an era: Internet Explorer drops below 50% of Web usage

http://arstechnica.com/microsoft/news/2011/11/the-end-of-an-era-internet-explorer-drops-below-50-percent-of-web-usage.ars
By Peter Bright | Published about 11 hours ago

A couple of interesting things happened in the world of Web browser usage during October. The more significant one is that Internet Explorer's share of global browser usage dropped below 50 percent for the first time in more than a decade. Less significant, but also notable, is that Chrome for the first time overtook Firefox here at Ars, making it the technologist's browser of choice.

Internet Explorer still retains a majority of the desktop browser market share, at 52.63 percent, a substantial 1.76 point drop from September. However, desktop browsing makes up only about 94 percent of Web traffic; the rest comes from phones and tablets, both markets in which Internet Explorer is all but unrepresented. As a share of the whole browser market, Internet Explorer has only 49.58 percent of users. Microsoft's browser first achieved a majority share in—depending on which numbers you look at—1998 or 1999. It reached its peak of about 95 percent share in 2004, and has been declining ever since.
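
The overall figure is just the desktop share weighted by desktop's slice of total traffic, since mobile IE is negligible. A quick check; the 0.942 desktop fraction is an assumption chosen to reproduce the quoted blend, which the article rounds to "about 94 percent":

```python
def overall_share(desktop_share_pct, desktop_fraction=0.942, mobile_share_pct=0.0):
    """Blend desktop and mobile shares by each platform's share of traffic."""
    mobile_fraction = 1.0 - desktop_fraction
    return desktop_share_pct * desktop_fraction + mobile_share_pct * mobile_fraction

# IE: 52.63% of desktop browsing, essentially absent on phones and tablets.
print(f"IE overall: ~{overall_share(52.63):.2f}%")   # ~49.58%
```

The same kind of weighting underlies the later figures in this piece, such as IE6/7's 25.4 percent of IE users equating to 13.38 percent of desktop users.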





(Chart: Net Applications)


Where has that market share gone? In the early days, it all went Firefox's way. These days, it's Chrome that's the main beneficiary of Internet Explorer's decline, and October was no exception. Chrome is up 1.42 points to 17.62 percent of the desktop browser share. Firefox is basically unchanged, up 0.03 points to 22.51 percent. Safari grew 0.41 points to 5.43 percent. Opera has been consistently falling over the last few months, and it dropped again in October, down 0.11 points to 1.56 percent.





(Chart: Net Applications)


In spite of Android sales now outstripping iOS sales, iOS users are far more abundant on the Web. Mobile browsing is currently a much smaller market, with 5.5 percent of Web usage conducted on smartphones and tablets. This small market is also a lot more volatile than the desktop market. Mobile Safari was up by 6.58 points last month to 62.17 percent. The biggest single loser was the Android browser, dropping 2.91 points to 13.12 percent. Symbian, BlackBerry and Opera Mini also registered falls, down 2.15 points to 2.55 percent, 0.64 points to 2.04 percent, and 0.27 points to 18.65 percent, respectively.





(Chart: Net Applications)


The trend graph says it all: Firefox's share is flat, with Chrome driving all Internet Explorer's losses.





(Chart: Net Applications)


Safari's long-term dominance in mobile is clear. Also clear is that Android's sales growth isn't at all reflected in its Web usage.





(Chart: Net Applications)


The upgrade trends show a familiar story. Chrome users, who for the most part receive updates automatically, switch to new versions quickly and efficiently. Chrome's "tail" is growing ever longer, though, with about 2 percent of desktop browser users—about 14 percent of Chrome users—using old versions. That number is growing every month, and it appears to be resilient.





(Chart: Net Applications)


Firefox retains its clean split between people on the new, rapid release versions (4-9) and those on the old stable version (3.6). The rapid release users are upgrading fairly quickly, though the cut-overs are neither as rapid nor as automated as those of Chrome. However, almost a quarter of Firefox users are sticking with version 3.6. Until and unless Mozilla produces a stable edition with long-term support, this is unlikely to change.





(Chart: Net Applications)


Internet Explorer, however, continues to see major usage of old versions. Internet Explorer 6 and 7, which aren't current on any supported version of Windows, are still used by 25.4 percent of Internet Explorer users, or 13.38 percent of all desktop users. These are people who could upgrade to either Internet Explorer 8 (if they're using Windows XP) or Internet Explorer 9 (if they're using Windows Vista or 7), but who have, for some reason, declined to do so. Internet Explorer 8 users appear to be switching to Internet Explorer 9 at a slow but steady rate, with the former down about a point, and the latter up by about a point.








The browser usage here at Ars Technica continues to be unusual, with Firefox and Chrome over-represented on the desktop, and Android showing a much stronger performance among mobile users than is seen on the wider Web.

A compelling case can be made that the causes for these two phenomena—Internet Explorer's decline, and Chrome's growth—are closely related. They represent the influence of the computer geek.

Ars Technica's unusual usage figures are not surprising when considering its audience: visitors to the site tend to be technologists and early adopters. Ars readers were among the first to switch to Firefox as their browser of choice, and similarly they're leading the way with Chrome. While Internet Explorer's decline, Firefox's flatlining, and Chrome's growth have happened faster at Ars than on the broader Web, the underlying trends are the same.

This is perhaps not surprising. Ars has more than its fair share of IT decision-makers, both in corporate environments and home environments (I'm sure that many of us know the perils of being the "computer guy" roped in to fix the problems plaguing friends' and family members' machines). It might be a few months before a Chrome-using, Ars-reading geek starts to recommend it to friends and family, or a few years before he gets approval to roll the browser out across the company whose computers he maintains, but the migration will happen. Technology decisions are usually made by technology people—and technology people read Ars, ditched Internet Explorer for Firefox a few years ago, and are now switching to Chrome.

Firefox appealed to the geek demographic by offering tabs, a wealth of extensions, and active development; geeks enjoy new things to play with, and a browser that's frozen in time, as Internet Explorer 6 was, holds no appeal. Chrome in turn offered a focus on performance and stability, even more active development, and the cachet of being built by Google. Chrome was also quick to offer obvious but useful things such as built-in, robust session restoration, and a useful new tab page (something Internet Explorer 9 replicated, and which is currently in beta for Firefox). Bundling Flash also removed a potential headache, by ensuring that a potentially buggy plugin was kept current and up-to-date. On top of all this, Google has been vocal in pushing its view of how the Web should work, with the VP8 video codec, the SPDY Web protocol, and most recently, the Dart scripting language.

A browser that doesn't appeal to this demographic won't receive the benefit of this kind of on-the-ground advocacy. Mozilla is working to bring some of Chrome's appealing features to Firefox, with its new development schedule and future features such as tab isolation, and though this is currently causing some headaches—there are continued issues with extension compatibility—Firefox's market share is for the most part holding steady. Once Mozilla can get rid of the annoying wrinkles and make updates as pain-free as Chrome's, it might start to win back the attention of the techie demographic, especially if Mozilla can come up with a viable IT-friendly long-term support option.

Meanwhile, Microsoft is strenuously avoiding this same demographic. Internet Explorer lacks small but significant creature comforts such as resizeable text boxes, built-in spell checking, and session restoration, and while it does offer certain extensibility points, they fall a long way short of those offered by Firefox, and as such, its extension ecosystem is a whole lot less rich. It's not enough for Internet Explorer to be a solid mainstream browser: the less technically engaged users who switched to Firefox because a trusted authority told them to aren't going to spontaneously switch back to Internet Explorer, even if it is good enough for their needs. They're going to wait until their techie friend next fixes their PC and tells them that they should consider switching to Internet Explorer because it's "better," just as they did for Firefox and Chrome.

Internet Explorer is still an important browser, with a userbase large enough that few developers can afford to ignore it—though sites that don't need global appeal may well be able to safely ignore Internet Explorer 6—and at current rates it will remain important for a few years yet. But until and unless Microsoft makes its browser appeal to the influential geek demographic, it looks as if Internet Explorer has nowhere to go but down.

