
Technology Stocks: New Technology


From: FJB 3/4/2010 7:05:00 AM

EE Times: Semi News

IBM jumps 'last hurdle' to on-chip optical communication
40-Gbps photodetector seen as keystone for enabling promising technology

R. Colin Johnson
(03/03/2010 5:05 PM EST)
URL: eetimes.com

PORTLAND, Ore. — IBM Research claimed a keystone achievement in on-chip optical communications Wednesday (March 3), saying its 40-gigabit-per-second (Gbps) germanium avalanche photodetector completes what it calls the nanophotonic toolkit.
Capping its multi-year effort by surmounting this final technological hurdle, IBM (Yorktown Heights, N.Y.) now claims to have all the pieces to enable chip-to-chip optical communications and ultimately core-to-core optical communications on the same chip. The remaining development effort to integrate its nanophotonic toolkit into its commercial processors will occur over the rest of the decade, IBM said.

"For several years, IBM has been developing a nanophotonics toolkit for creating optical communications between chips consisting of waveguides, modulators, switches and now the last piece of the puzzle, our nanophotonic avalanche photodetector," said IBM Research Scientist Solomon Assefa. "Now we have everything we need to start integrating photonic communications alongside transistors and make this dream a reality."

Over the last few years, IBM has demonstrated silicon modulators for converting electrical signals into light, a silicon delay line for buffering optical signals, plus the waveguides and switches necessary to create a complete chip-to-chip optical bus. With the addition of this nanophotonic avalanche photodetector, IBM claims to have its nanophotonic ducks in a row, poised to obsolete copper wires in favor of optical communications on and among future chips.

Optical microscope image of an array of nanophotonic avalanche photodetectors (top) and the silicon waveguides (bottom) directing light to them.
"We believe that the key to shrinking chips further is leveraging effects at the nanoscale to create both nanophotonic and nanoelectronic devices that work together to make chips cheaper, more power efficient and wider bandwidth by using pulse of light for communications instead of copper wires," Assefa said.

IBM's germanium photodetector multiplies input signals tenfold using the avalanche effect, yet still achieves 40 Gbps thanks to its thin-film construction, according to IBM. The germanium detector measures just 30 nanometers thick, compared with hundreds of nanometers for competing germanium photodetector designs, enabling its 40-Gbps speed (the avalanche function operates faster in thinner films). Its ultra-thin construction also reduces the noise normally associated with germanium photodetectors by 50 to 70 percent, thereby resurrecting a technology that was once thought too noisy for commercialization.
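As a quick plausibility check on those figures, here is a back-of-the-envelope sketch (an illustration, not IBM's own numbers) using the standard gain-bandwidth-product figure of merit for avalanche photodetectors; the 0.7 bandwidth-per-bit-rate factor for NRZ signaling is an assumed rule of thumb.

    # Back-of-the-envelope check (assumptions noted; not figures from IBM).
    bit_rate_gbps = 40.0                  # reported data rate
    gain = 10.0                           # reported avalanche multiplication
    f3db_ghz = 0.7 * bit_rate_gbps        # assumed analog bandwidth needed for NRZ

    gbp_ghz = gain * f3db_ghz             # gain-bandwidth product, GBP = M * f3dB
    print(f"Required 3 dB bandwidth: ~{f3db_ghz:.0f} GHz")
    print(f"Implied gain-bandwidth product: ~{gbp_ghz:.0f} GHz")

Sustaining a roughly 280 GHz gain-bandwidth product is what makes the thin-film result notable: in an avalanche device, raising the gain normally costs bandwidth, and thinner multiplication regions shorten the avalanche build-up time.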

The device also runs off a 1.5-volt supply, making it well suited for integration on silicon chips, compared with traditional avalanche photodetectors that require 30-volt supplies.

IBM claims that it can fabricate thousands of the photodetectors side-by-side with silicon transistors and integrated silicon waveguides to enable a whole spectrum of on-chip optical communications capabilities for future chips. IBM's long-term plan for integrating silicon photonics with mainstream processors can be viewed on the company's website.



From: FJB 3/4/2010 7:06:20 AM

Silicon Integrated Nanophotonics

domino.research.ibm.com

Development of on-chip optical interconnects for future multi-core processors

The ultimate goal of this project is to develop a technology for on-chip integration of ultra-compact nanophotonic circuits for manipulating light signals, similar to the way electrical signals are manipulated in computer chips. Nanoscale silicon photonic circuits are being developed to enable the integration of complete optical systems on a monolithic semiconductor chip, which would eventually make it possible to overcome the severe constraints of today's mostly copper I/O interconnects.

The current trend in high-performance computing systems is to increase parallelism in processing at all levels: using multiple threads, increasing the number of chips in racks and blades, and increasing the number of cores on a chip. The scaling of overall system performance, which may soon approach an exaflop/s, is, however, out of balance with the limited bandwidth available for shuttling exabytes of data across the system, between racks, chips and cores.
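To make that imbalance concrete, here is a simple sketch with hypothetical numbers (the bytes-per-flop balance ratio below is an assumption, not a figure from IBM):

    # Hypothetical system-balance arithmetic (illustrative numbers only).
    flops = 1e18                 # an exaflop/s machine
    bytes_per_flop = 0.1         # assumed balance ratio; classic designs aimed near 1
    required_bw = flops * bytes_per_flop   # aggregate data movement, bytes/s

    print(f"Aggregate bandwidth needed: {required_bw / 1e15:.0f} PB/s")

Even at a modest 0.1 byte per flop, such a machine must move 100 PB every second, which is the scale of traffic that motivates optical links between racks, chips and cores.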

Optics is destined to be utilized in data centers, since optical communications can meet the large bandwidth demands of high-performance computing systems by bringing the immense advantages of high modulation rates and the parallelism of wavelength division multiplexing. Just as optical fibers replaced copper cables in long-haul communications decades ago, the copper cables that connect racks in data centers are now starting to be replaced by optical fibers. Following the same trend, optics can become competitive with copper at shorter and shorter distances, eventually leading to optical on-board and perhaps even on-chip communications.



This future 3D-integrated chip consists of several layers connected to each other with very dense, small-pitch interlayer vias. The lowest layer is the processor itself, with many hundreds of individual cores. A memory layer (or layers) is bonded on top to provide fast access to local caches. On top of the stack is the photonic layer, with many thousands of individual optical devices (modulators, detectors, switches) as well as analog electrical circuits (amplifiers, drivers, latches, etc.). The key role of the photonic layer is not only to provide point-to-point broadband optical links between different cores and/or to off-chip traffic, but also to route this traffic with an array of nanophotonic switches. Hence it is named the intra-chip optical network (ICON).

Silicon photonics offers high-density integration of individual optical components on a single chip. Strong light confinement enables dramatic scaling of device area and allows unprecedented control over optical signals. Silicon nanophotonic devices have immense capacity for low-loss, high-bandwidth data processing. Fabricating silicon photonic systems in the complementary metal-oxide-semiconductor (CMOS)-compatible silicon-on-insulator platform also allows further integration of optical and electrical circuitry. Following Moore's scaling laws in electronics, dense chip-scale integration of optical components can bring the price and power per bit of transferred data low enough to enable optical communications in high-performance computing systems.
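The bandwidth leverage of wavelength division multiplexing is easy to quantify; the channel count and per-channel rate below are hypothetical, chosen only for illustration:

    # Hypothetical WDM arithmetic for a single waveguide (illustrative only).
    channels = 64              # assumed wavelengths multiplexed on one waveguide
    rate_gbps = 40             # assumed per-wavelength modulation rate

    aggregate_tbps = channels * rate_gbps / 1000
    print(f"One waveguide carries {aggregate_tbps:.2f} Tb/s")   # 2.56 Tb/s

A copper trace running at a few Gb/s would need hundreds of wires to carry the same traffic, which is the density argument for on-chip photonics.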

To meet these stringent requirements and fully exploit the benefits of optics, innovative engineering is necessary at all levels, from the design of individual devices to the overall architecture of the high-performance computing system. The nanoscale silicon photonic circuits being developed within this project are targeted at enabling the monolithic integration of complete optical systems on a semiconductor chip.



From: FJB 3/4/2010 7:38:18 AM

Super Talent introduces SuperCrypt USB 3.0 drives with 256GB of space

March 4th, 2010

Super Talent has unveiled a new line of USB 3.0-technology drives, dubbed “SuperCrypt.” Although the origin of the spooky name isn’t clear, these sleek and silvery drives are speedy and secure.

All of the SuperCrypt drives offer hardware AES encryption plus Super Talent's STT encryption utility for password protection. The encryption mode differs between the standard SuperCrypt line and the SuperCrypt Pro, which incorporate 128-bit ECB encryption and 256-bit XTS encryption, respectively.
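The distinction matters: ECB encrypts each 16-byte block independently, so identical plaintext blocks produce identical ciphertext blocks, while XTS (the mode standardized for storage in IEEE P1619) mixes a per-sector tweak into every block. The sketch below demonstrates the generic difference using Python's cryptography package; it illustrates the modes themselves, not Super Talent's firmware.

    # ECB vs. XTS: why XTS is preferred for storage encryption.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    data = b"SIXTEEN BYTE BLK" * 2            # two identical 16-byte blocks

    ecb = Cipher(algorithms.AES(os.urandom(16)), modes.ECB()).encryptor()
    ct = ecb.update(data) + ecb.finalize()
    print(ct[:16] == ct[16:32])               # True: repeated data is visible

    tweak = (0).to_bytes(16, "little")        # sector number 0 as the XTS tweak
    xts = Cipher(algorithms.AES(os.urandom(64)), modes.XTS(tweak)).encryptor()
    ct = xts.update(data) + xts.finalize()
    print(ct[:16] == ct[16:32])               # False: each block position differs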

SuperCrypt drives will be available in five capacities (16GB, 32GB, 64GB, 128GB and 256GB) and feature transfer speeds up to 240MB/s. Super Talent boasts that transferring a 600MB movie file takes only seven seconds with one of its new drives, which works out to roughly 86MB/s sustained. Wow.

Pricing hasn’t been announced yet, although storage size and security options will definitely factor in. The SuperCrypt drives will ship some time this month.

blogs.zdnet.com

usb.org



From: FJB 3/4/2010 10:25:15 AM

Network Maps: USA Longhaul

On this page is a collection of links to maps of US longhaul and intra-regional fiber networks. Maps are links to material on company websites wherever possible. Fiber may be owned or leased via long term IRU. To be on this list, a provider should provide intercity transport services between at least 5 states. I realize 5 is arbitrary, but at the moment it seems to be a reasonable number. Maps for pure metro and smaller regional providers will be collected on another page at some point.

telecomramblings.com



From: FJB 3/4/2010 4:22:43 PM

Lotus Dreams: EUVL Continues to Approach Readiness

semiconductor.net

As a follow-up to last week's SPIE Advanced Lithography conference in San Jose, Vivek Bakshi provides an update and his own perspective on the readiness of EUV lithography. He also offers insight into his confidence in ultimately winning a Lotus sports car from litho guru Chris Mack if EUV reaches high-volume production by 2014.

By Vivek Bakshi, President, EUV Litho Inc., Austin, Texas -- Semiconductor International, March 2, 2010

Attendance at this year's SPIE Advanced Lithography conference was up slightly from 2009, but the conference itself seemed a bit lighter. This year's EUV lithography papers were not part of an emerging lithography technologies subconference, but instead made up their own subconference to accommodate the increasing number of EUVL papers, which grew >50% over last year.
On the Sunday before the conference, I taught a one-day EUVL short course with my colleagues Patrick Naulleau and Jinho Ahn; not only was our pre-conference class the best attended, we had to get extra chairs from next door to accommodate the overflow. Next month I will teach a two-day EUVL short course at ETH Zurich, with student attendance expected in the double digits. I look at all of these events as signs of continued increasing interest in EUVL.
EUVL is optical lithography
Before reviewing the latest news from the conference, I want to clarify that EUVL is optical lithography. All principles of optical lithography clearly apply to EUVL, except that it is done in a vacuum environment with mirrors instead of lenses. During the EUVL short course, Patrick pointed out that reflection systems have a long track record in traditional optical lithography and that the multilayer mirrors used in EUVL are just quarter-wave stacks, which are widely used in lasers today. To that I would add that many semiconductor processing steps also use vacuum — film deposition, etch and ash, and implantation. So for fab engineers, vacuum-based processing is nothing new.

1. Since the mid-1980s, the wavelength of light used in lithography systems has been reduced by almost half, from 365 nm to 193 nm. The switch to EUV lithography involves a further wavelength reduction factor of almost 15. (Source: ASML)
I use Prolith software as a virtual scanner to teach EUVL basics, which clearly demonstrates that EUVL is optical lithography. (I chose this software because it comes with a complimentary license for my students and the university.) One can use the same software for 193 nm lithography as well (just change the wavelength, numerical aperture and resist parameters), and the program uses the same principle to calculate line edge roughness (LER) for both types of lithography. This is possible only because we use photons in both cases for projection lithography (e-beam lithography uses electrons, and contact lithography uses physical structures to replicate circuits).
The software allows you to change the wavelength for printing a given feature, and see for yourself the simplicity of using 13.5 nm optical projection lithography to improve imaging quality, as contrasted with the increasingly elaborate techniques for 193 nm. Figure 1 shows how wavelength reduction for optical projection lithography has occurred frequently in the industry, with the switch to EUVL wavelength just a bigger change.
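The whole argument can be compressed into the Rayleigh criterion, CD = k1 × λ / NA. The NA and k1 values below are representative assumptions for illustration, not figures from the article:

    # Rayleigh resolution: CD = k1 * wavelength / NA (representative values).
    def cd_nm(k1, wavelength_nm, na):
        return k1 * wavelength_nm / na

    print(f"{cd_nm(0.35, 193.0, 1.35):.0f} nm")   # ~50 nm: 193 nm immersion near its limit
    print(f"{cd_nm(0.35, 13.5, 0.25):.0f} nm")    # ~19 nm: EUV even at modest NA

The 14× wavelength reduction buys back resolution headroom even at low NA, which is why EUVL can relax the increasingly elaborate tricks that 193 nm now requires.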
Plenary talks
Plenary talks for Advanced Lithography were given by Kazuo Ushida of Nikon, Sam Sivakumar of Intel, and Eric Chen of Silver Lake Partners. Chen finished his talk by redefining EUV as an acronym for "extremely undervalued." I cannot agree more, and see this as additional evidence of EUV acceptance.
Nikon believes that >0.35 NA optics is needed for a two-generation EUVL tool and sees EUVL being used at the 16 nm node. To extend 193 nm lithography, Ushida proposed a "line cutting" concept along with two sets of patterns on the same mask to make double patterning cost-effective.
Intel's Sivakumar led us through an overview of the role of lithography in making chips, and pointed out the important part that next-generation lithography (NGL) will play in driving the interaction between process and design. Success also will depend on the choice of NGL and its cost-effectiveness.
After the plenary talks, the conference was split into several subconferences. Although these meetings included a large number of sales pitches, there were some good technical talks to be found. In particular, I would like to commend Ahmed Hassanein and his group at Purdue for their fundamental study of tin plasmas for EUV sources, and Stan Stokowski of KLA for his talk on mask inspection technology.
EUVL scanner makers
Nikon is now in a comfortable second place and is focusing most of its efforts on designing an EUVL scanner for high-volume manufacturing (HVM), as it does not have a beta EUVL scanner ready for customers. Instead, Nikon plans to go straight to HVM based on learning from its alpha tool, EUV1 (a somewhat risky approach that may leave ASML with an even larger share of the market).
Nikon believes that its HVM EUVL tool will need 0.4 or higher NA. During Q&A, Nikon's Takaharu Miura pointed out the company is still working to achieve 0.4 NA with six-mirror designs in an effort to avoid an eight-mirror design, which would lose throughput because of additional reflective surfaces.
ASML, the leading EUVL scanner supplier, continues to show good progress with plans to deliver its beta scanner, the NXE:3100, this year. I read a trade press headline saying, "EUV delayed again." I am not sure what this refers to, since ASML announced in 2009 that its NXE:3100 would be delivered in 2010, and appears to be sticking with that plan.
In this year's presentation, ASML clarified EUV source power status, which I found to be very helpful. Traditionally, source power is reported at intermediate focus (IF) and many times it is an estimated power — which has occasionally raised questions about the readiness of source technology. ASML reported that NXE:3100 loses 20% of source power to dose control and will lose an additional 35% to the spectral purity filter (SPF) — thus reducing 200 W of "raw power" (i.e., measured power at IF and the current indicator of power specs for sources) to 104 W of "exposure power" (i.e., power available for printing). In other words, two new loss mechanisms were identified for source power, and it is "exposure power" that defines tool throughput.
ASML is currently conducting acceptance testing at Cymer for a source that provides 20 W of exposure power, which corresponds to 15 wph throughput. ASML expects to get 40 W from Cymer's source this year to improve throughput to 25 wph, and the specs for the NXE:3100 are at 100 W for a 60 wph scanner.
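Those numbers can be checked with simple arithmetic. The power chain follows directly from the quoted losses; the overhead-plus-dose throughput model below is my own assumption, fitted to the quoted 20 W/15 wph and 100 W/60 wph points rather than taken from ASML:

    # Power budget: 200 W raw, minus 20% for dose control, minus 35% for the SPF.
    exposure_w = 200.0 * (1 - 0.20) * (1 - 0.35)
    print(f"Exposure power: {exposure_w:.0f} W")          # 104 W, as quoted

    # Assumed throughput model: seconds per wafer = overhead + dose_constant / power.
    overhead_s, dose_ws = 15.0, 4500.0                    # fitted to the 20 W and 100 W points
    for p_w in (20, 40, 100):
        wph = 3600.0 / (overhead_s + dose_ws / p_w)
        print(f"{p_w:>3} W -> {wph:.0f} wph")             # 15, 28, 60 (ASML quotes 25 at 40 W)

The model's 28 wph at 40 W lands close to ASML's quoted 25 wph, suggesting throughput in this regime is still dominated by exposure dose rather than wafer handling.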
ASML showed data to prove that EUVL offers twice the depth of focus (DOF) and a 30% feature size reduction over 193 nm lithography. Transmission of the NXE:3100 has been doubled since the alpha demonstration, and an additional 50% increase (for a 3× improvement overall) in throughput over the ADT is planned for the HVM tool. ASML's HVM roadmap calls for 500 W of exposure power, and laser-produced plasma (LPP) EUV sources seem to be the only way to achieve that goal. ASML believes that EUVL is the only cost-effective technology for foundries, and noted that fab floor space requirements for EUVL are half those for double patterning.
Silver lining in a tin cloud
In my opinion, one of the most important papers of the conference was Cymer's clear status report on source performance. Cymer clarified that it shipped only two driver lasers for sources to ASML in 2009, and that its first source chamber will be shipped this quarter to ASML, after acceptance testing.
The fact that Cymer's 20 W source will be available shortly made me a believer in the company's technology. Cymer has 40 and 100 W sources planned for 2010, with 1.5× improvement coming from laser power and 3× improvement deriving from conversion efficiency (CE) improvement. I believe there are good prospects that the 40 W source will be achieved this year, and that will be a significant achievement for Cymer and the EUVL community. This means that a beta EUVL scanner can be delivered in 2010, and a source upgrade in 2011 would have a good chance of bringing scanner performance toward 60 wph. This stepwise upgrade of light sources is nothing new — even for discharge-produced plasma (DPP) technology, sources were upgraded in phases to deliver full specified power of the alpha demo tool (ADT).
During the conference, I discussed LPP issues with Hakaru Mizoguchi, CTO of Gigaphoton, the second largest supplier of EUV sources. I have always appreciated the lucid performance information from him and Gigaphoton. Figure 2 shows the schematics of Gigaphoton's LPP-based source design; the company currently has a 14 W prototype.

2. Gigaphoton’s tin-based LPP EUV source uses a magnetic field to control plasma debris. (Source: Gigaphoton)
During our discussion, I became convinced that a combination of in situ cleaning and magnetic field-based control of plasma debris, combined with a gas curtain, has the potential to solve the tin debris problem in LPP-based EUV sources. In LPP, collector mirrors are directly exposed to a cloud of tin consisting of plasma, atomic and macro particle debris, making it difficult to integrate tin-based LPP sources into EUVL scanners. In DPP, a mesh "foil trap" helps protect the collection optics. The LPP collector design offers more light-collection ability due to a larger collection angle and better transmission because of the lack of a foil trap. However, foil trap designs have been proven for DPP and are currently in use; if necessary, they can provide the collector technology for LPP sources. So, based on this latest information and the likelihood of the Cymer source clearing its acceptance test this quarter, the tin LPP debris issue can be declared a difficult challenge and not a showstopper. And since tin debris was the main issue for LPP, high-power EUV sources also can be classed as a difficult challenge instead of a potential showstopper.
EUVL and chipmakers
Samsung, Intel, GlobalFoundries, Hynix, Toshiba and TSMC are leading the development research for EUVL. Samsung wants EUVL ready by 2012, and TSMC announced during the conference that it has ordered an NXE:3100 tool from ASML. If this tool reaches its milestone of 60 wph as planned by 2011, I can see the next improved version being able to produce 100+ wph, which could get memory makers and foundries started with EUVL in the next few years to support printing of critical layers.
In a keynote, Intel talked about extending immersion to the 22 nm node and about combining immersion with double patterning to support patterning at 11 nm. There was a strong correlation between Intel's presentation and the roadmap of Nikon, which has traditionally supplied Intel with scanners. However, Intel is also one of the leading chipmakers working on EUVL development, and has one of the most active EUVL research programs. Intel has invested more than any other chipmaker in early EUVL R&D over the past 10 years, and I think it will choose an NGL that is ready and cost-effective. So I see the readiness of EUVL driving the chipmaker's roadmap, and not the other way around.
EUVL and mask CoO
In the conference news coverage, I read about complaints from some maskmakers that EUVL masks are expensive and leading adopters of technology have their own mask shops, so maskmakers will have to wait a long time to recover their investments. Masks offer a competitive advantage when everyone is buying the standard tool sets, and products and design are the main differentiators. With EUVL masks, cost is lowered if a single mask is heavily used. With time, I expect that a new business model will emerge to support the industry's needs in the area of EUVL masks, a familiar development for this business. Also, despite some press reports to the contrary, all the cost of ownership (CoO) analyses that I have seen show EUVL to be more cost-effective than double patterning, even with mask costs taken into account (Fig. 3).

3. The latest update of the International Technology Roadmap for Semiconductors shows the relative CoO for the critical layer of a 5000-wafer-run device, indicating that EUVL is more cost-effective than double patterning. (Source: 2009 ITRS)
Mask costs will differ between an in-house shop and an external supplier. In some recent analyses, the cost of double patterning is shown to be approaching that of EUVL, assuming a 200 wph throughput for a double patterning tool and only 100 wph for an EUVL tool.
Actinic mask defect metrology
Carl Zeiss showed its design for an AIMS tool for actinic inspection of EUVL mask defects, which the company expects to be ready in 2013. It is based on an EUV mirror optics design with a throughput of one mask per hour. It will need a 32 W/mm² sr source, which Zeiss expects to be a DPP-based source. Zeiss is the supplier of EUV optics for ASML and has good experience in designing and manufacturing EUVL optics systems. DPP source brightness is currently within a factor of 3 of this target, so 32 W/mm² sr can be achieved (and maybe Zeiss can partner with ASML to speed up tool development). Energetiq is supplying a DPP source (with 10 W/mm² sr brightness) to Mirai, which also showed the performance of its AIMS tool for actinic inspection.
Lawrence Berkeley National Laboratory (LBNL) has a synchrotron-based tool that has supported actinic inspection research for many years, and researchers there proposed a bridge tool to continue helping the industry in the area of actinic inspection. Source brightness greater than what is currently available will be needed for actinic inspection of patterned masks, and I do not see any EUV source being ready in the near future to support these inspection requirements. HVM sources are not suitable for mask metrology tools because of their cost and size. KLA-Tencor, another company interested in providing an inspection tool, is also looking into actinic inspection and will probably try to further extend its 193 nm technology for EUVL mask inspection while actinic inspection tools are being developed.
In other developments, Nano-UV, a DPP EUV source company, announced a joint venture with Lasertec for an actinic mask defect metrology tool, and newcomer Adlyte entered the EUVL market to provide LPP sources for actinic mask metrology. LPP sources are brighter than DPP sources, and I expect eventual sources for an HVM patterned mask inspection tool to be based on LPP or an alternative EUV source technology.
I heard during the conference that actinic tools for patterned-mask inspection are five or more years away. Even by then, I am not sure there will be sources with sufficient brightness to meet the currently specified requirements — so innovation has to come from new optics designs for metrology tools that relax source brightness specs. In any case, standalone AIMS tools are at least two to three years away, and until then mask defectivity inspection will be supported by improving the current capabilities of tools from Mirai, LBNL and KLA-Tencor, the use of send-ahead wafers, reduction of mask defectivity, and innovation. Yes, count on innovation — because that is what has kept this industry moving.
It was pointed out in a Q&A session that plenty can still be done today to address mask defectivity — even without commercial actinic inspection tools — as there are copious detectable defects that need to be mitigated. After we see them clearly with actinic inspection, we will still need to mitigate them. Although 2× improvement in mask defectivity has been achieved (Fig. 4), much more must be done to reduce remaining defects. Ideally, most defects that require detection need to come from processing steps, and that is still not the case.

4. A 2× improvement in mask defectivity has been achieved, but much more must be done to reduce remaining defects. (Source: ASML, with Asahi [1] and U. Okoroannyanwu et al. [2])
Resist progress
Line edge roughness (LER) and resist collapse continue to challenge EUVL resists, as they will any technology used at 22 nm and beyond. ASML showed results for 24 nm lines and spaces (L/S) with 4.4 nm LER. For the 28 nm node, 81% of the combined matrix of resist performance metrics (the M-factor) has been achieved. In any case, I was happy to see resist suppliers continuing to increase their engagement.
Device results: You cannot get this with 193 nm

5. EUV patterning of 0.042 µm² SRAM cells demonstrates a performance advantage over 193 nm lithography. (Source: ASML)
GlobalFoundries' Obert Wood, program leader for the IBM Alliance's EUV program, presented ADT-patterned SRAM chips with an area of 0.079 µm² and 100% yield. He noted that this yield cannot be obtained today with 193 nm patterning. The researchers have achieved 0.4 nm control of HV bias, demonstrating the effectiveness of rule-based optical proximity correction (OPC) in EUVL. In addition, SRAM cells with areas of 0.042 µm² have been achieved (Fig. 5), and they clearly demonstrated the performance advantage over 193 nm lithography.
IMEC showed results of its success in modeling and correcting flare, and Toshiba revealed 1 nm variation in critical dimension (CD) over the entire wafer at 22 nm node patterning. The 2009 update of the International Technology Roadmap for Semiconductors (ITRS) indicates that there are no proven optical lithography solutions below the 22 nm node. I think that after seeing the 16 nm patterning results from this conference, the ITRS lithography working group will change its opinion.
Dreaming of a Lotus
Since my review of last spring's Advanced Lithography conference, much has been made of my bet with gentleman scientist and litho guru Chris Mack. (Chris is definitely a litho authority and I shamelessly recommend his book Field Guide to Optical Lithography every chance I get. Even if you are not a lithographer, this book is a very good investment.)
The bet Chris made with me last year — that no abstracts on EUVL would be submitted for the 2011 SPIE Advanced Lithography meeting — was widely discussed at this year's conference, due in part to a presentation that showed a photo of a blue Lotus. I have to admit that it was an unfair bet; since then, many colleagues (some of them not even lithographers) have offered to submit EUVL papers in 2011 if I'll give them a ride in my anticipated new Lotus.
However, we mustn't forget that although conference paper submissions do reflect industry interest in a given technology, even an increasing number of submissions does not guarantee success — the technology can still die a sudden death if it fails to deliver the goods. The semiconductor industry must find cost-effective solutions and will not hesitate to move on if an NGL technology does not perform per Moore's Law. EUVL must continue to show good progress, but in the wake of the Advanced Lithography conference, I am confident this will continue to be the case.
There's another condition to the bet that says EUVL must be in high-volume manufacturing by 2014 for me to collect my prize. I think this will happen by then, and when the dust settles, history will judge EUVL to have been a common-sense extension of optical lithography. Chris will be recognized then as the leading enabler of lithography and its natural extension to EUVL because of his insightful books, Prolith work, years of educating lithographers, and contributions to effective control of LER.
In the meantime, I have temporarily placed my "EUVL" custom license plates on my Suburban as I dream of tooling around Austin with those plates reassigned to my new Lotus.



From: FJB 3/4/2010 5:08:24 PM

Mind-reading computers turn heads at CeBIT

Published: 4 Mar 10 16:24 CET
Online: thelocal.de

Devices allowing people to write letters or play pinball using just the power of their brains have become a major draw at Hannover's high-tech CeBIT fair this week.

Huge crowds at the world's biggest technology trade fair gathered round a man sitting at a pinball table, wearing a cap covered in electrodes, who controlled the flippers with great proficiency without using his hands.

"He thinks: left-hand or right-hand and the electrodes monitor the brain waves associated with that thought, send the information to a computer, which then moves the flippers," said Michael Tangermann, from the Berlin Brain Computer Interface.

But the technology is much more than a fun gadget; it could one day save your life. Scientists are researching ways to monitor motorists' brain waves to improve reaction times in a crash.

In an emergency stop situation, the brain activity kicks in on average around 200 milliseconds before even an alert driver can hit the brake. There is no question of braking automatically for a driver - "we would never take away that kind of control," said Tangermann.

"However, there are various things the car can do in that crucial time, tighten the seat belt, for example," he added.

Using this brain-wave monitoring technology, a car can also tell whether the driver is drowsy or not, potentially warning him or her to take a break.

At the g.tec stall, visitors watched a man with a similar "electrode cap" sitting in front of a screen showing a large keyboard, its letters flashing in an ordered sequence.

The user concentrates hard on the chosen letter; when it flashes, the brain waves evoked at that exact moment are registered by the computer, and the letter appears on the screen.

The technology is slow at present - it took the man around four minutes to write a five-letter word - but researchers hope to speed it up in the near future.
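For the curious, the signal processing behind such a speller (commonly a P300 paradigm) reduces to averaging EEG epochs time-locked to each letter's flashes and scoring the response window. The sketch below is a minimal illustration under assumed data shapes, not g.tec's actual algorithm:

    import numpy as np

    # eeg: (n_samples, n_channels) array at fs Hz; flash_onsets maps each letter
    # to the sample indices at which it flashed. Hypothetical inputs for illustration.
    def p300_pick(eeg, flash_onsets, fs):
        lo, hi = int(0.25 * fs), int(0.45 * fs)        # ~250-450 ms P300 window
        scores = {}
        for letter, onsets in flash_onsets.items():
            epochs = np.stack([eeg[t:t + hi] for t in onsets])
            erp = epochs.mean(axis=0)                  # averaging suppresses background EEG
            scores[letter] = erp[lo:hi].mean()         # mean amplitude in the P300 window
        return max(scores, key=scores.get)

Averaging across repeated flashes is why the speller is slow: each extra repetition improves the signal-to-noise ratio but adds seconds per letter.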

Another device allows users to control robots by brain power. The small box has lights flashing at different frequencies at the four points of the compass.

The user concentrates on the corresponding light, depending on whether he wants the robot to move up, down, left or right; the brainwaves generated by viewing that flicker frequency are monitored, and the robot is steered accordingly.
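This is the classic steady-state visual evoked potential (SSVEP) approach: staring at a light flickering at f Hz produces EEG power at f (and its harmonics), so classification is essentially a spectrum peak-pick. A minimal sketch, with assumed stimulus frequencies:

    import numpy as np

    # eeg: one EEG channel sampled at fs Hz; the frequencies are assumed for illustration.
    def ssvep_command(eeg, fs):
        freqs = {"up": 8.0, "down": 10.0, "left": 12.0, "right": 15.0}
        spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg))))
        bins = np.fft.rfftfreq(len(eeg), 1.0 / fs)
        power = {cmd: spectrum[np.argmin(np.abs(bins - f))] for cmd, f in freqs.items()}
        return max(power, key=power.get)    # command whose frequency dominates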

The technology is being perfected for physically disabled people, who can communicate and operate other devices using their brain.

"In future, people will be able to control wheelchairs, open doors and turn on their televisions with their minds," said Clemens Holzner from g.tec.

The CeBIT runs until Saturday.



From: FJB 3/4/2010 6:33:29 PM

Easy money for hackers, big headaches for IT

By Bill Snyder
Created 2010-03-04 03:00AM

infoworld.com

Batten down the security hatches. Hackers are poisoning social networking sites, particularly Facebook, and loosely regulated app stores like the Google Android marketplace, with increasing ferocity. A new study by security vendor AVG found that poisoned URLs posted on Facebook soared by 200 percent in February (compared to the previous month) after increasing by 300 percent in January. (AVG derived its statistics by analyzing URLs blocked by its software.)

The huge spike in rogue software on Facebook is part of a pattern that security experts have seen for several years: tricking users into poisoning their own systems and networks through clever ruses that appeal to curiosity, greed, or lust. No matter how often management tells users not to goof around while on company networks, they do. And IT gets stuck with the mess.


Although the numbers in the AVG study focused only on Facebook, Yuval Ben-Itzhak, AVG's senior vice president of engineering, says other social networking sites are also inadvertent carriers of rogue software. Indeed, Facebook appears to take reasonable precautions, he says, which only underlines the difficulty of combating the threat.

An easy $12,000 a day
A favorite trick of hackers these days is the fake antivirus scan, often attached to a Facebook page. All of a sudden a window pops up warning that your system may be infected and offering a free scan. In the better -- that is, more malicious -- versions of this scam, it's very difficult to make the pop-up window go away.

And while it might seem, well, stupid to do so, quite a few users will actually pay something for the bogus software. An examination of various Web logs and other sources reveals that even a small gang can net $12,000 a day, according to Ben-Itzhak. "It's a dream come true for the bad guys," he says. In one seven-day period, more than 80,000 users were affected by the rogue scanner malware.
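A rough plausibility check on those two figures (the fake-license price below is an assumption; rogue antivirus of that era typically charged a few tens of dollars):

    # Back-of-the-envelope on the quoted $12,000/day and 80,000 victims/week.
    affected_per_day = 80_000 / 7              # ~11,400 victims a day
    price_usd = 50                             # assumed fake-license price
    sales_per_day = 12_000 / price_usd         # 240 sales a day

    print(f"Implied conversion: {sales_per_day / affected_per_day:.1%}")   # ~2%

Only about one victim in fifty has to pay for the scam to clear $12,000 a day, which is exactly why the economics favor the attackers.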


While the users feel the pain of the antivirus scam, another hack making the rounds targets business information. It's a fake codec. A URL leads a user to a site where a video is posted. To play it, the user needs to download the fake codec, which is actually a container for seriously malicious code designed to steal business information.

That particular scam worked especially well in February, when users were hungry for videos of the Winter Olympics. Similarly, visitors to Foxnews.com who wanted to watch certain video clips last year were tricked into installing a tainted codec. Still, it's difficult to zero in on why Facebook has been hit so much harder this year than last.

To be fair to users, it's worth noting that some of the traditional advice they get from IT or popular publications is no longer adequate. IT tells people to go to only trusted sites. Unfortunately, by the beginning of 2009, the majority of infectious sites were mainstream, says Roger Grimes, a security professional and InfoWorld's Security Adviser blogger.

Facebook says it has not noticed a spike in rogue software. "People have a number of options for controlling the information they share with applications. We also have a dedicated enforcement team that conducts spot reviews of top applications and of many other applications, including looking at the data they need to run the application versus the data they gather," says Facebook spokesman Simon Axten.

Axten points out that apps are subject to privacy settings. "That is, you can configure what your friends' apps can and can't access." (Facebook documents how to configure those settings.)

Which is worse: Email or Web 2.0?
AVG isn't the only security company pointing the finger at threats related to Web 2.0 and social networking. Four in five IT professionals polled recently by Webroot said Web 2.0-based malware will pose the biggest security threat this year.

Seventy-three percent said Web-based threats are more difficult to manage than email-based threats, and 23 percent said their company was vulnerable to attacks on Web 2.0 applications, including social networks such as Facebook and Twitter.

No one likes to be hated, but sometimes you have to take security measures that will make your users really angry. You might even have to (gasp) pull some PCs off the Internet and treat some employees like children, suggests David Perry, global director of education for Trend Micro, whose global array of sensors (and information exchanges with other security vendors and customers) now detects an astonishing 100,000 samples of new malware a day.

You know the drill: Tell them going to porn and gambling sites and so on will get them in serious trouble. Because they are adults, you might set up a PC in the break room that has Web access but is not on your network. They may waste time on it, but it won't endanger enterprise security.

I don't mean to pick on Facebook. But I do think that Web 2.0 mavens have to think harder about the problems -- indeed, crimes -- that holes in their sites create for IT.



From: FJB 3/4/2010 6:52:22 PM

Microsoft's Ballmer says he has bet the company on the cloud

All Microsoft products driven by idea of being connected to the cloud, CEO tells students

computerworld.com
Nancy Gohring


March 4, 2010 (IDG News Service) Seventy percent of the 40,000 people who work on software at Microsoft are in some way working in the cloud, CEO Steve Ballmer said Thursday at the University of Washington.

"A year from now, that will be 90 percent," he said.

In a wide-ranging talk to computer science students at the university, Ballmer explained why he thinks cloud computing is important and how Microsoft aims to take advantage of the trend toward hosted computing services.

"Our inspiration, our vision ... builds from this cloud base," Ballmer said. "This is the bet, if you will, for our company."

All Microsoft products, including Windows, Office, Xbox, Azure, Bing and Windows Phone, are driven by the idea of being connected to the cloud, he said. While some recently introduced products like Windows 7 included a lot of work that is not cloud-based, the inspiration for the product starts with the cloud, he said.

Beyond software, Ballmer also described Microsoft's different strategies for creating devices that connect to cloud-based services. "The cloud wants smarter devices," he said.

He admitted mistakes in the way Microsoft historically approached the mobile market, giving hardware makers wide latitude on form factors. "We didn't standardize enough. The cacophony of form factors for you, the user, was too high," he said.

Microsoft has unveiled a new version of its mobile software, Windows Phone 7, which has a much stricter set of hardware requirements. Still, it should give hardware makers more room to innovate than some Microsoft competitors like Apple and Research In Motion, where "you get what they choose to build for you," Ballmer said.

In the case of its Xbox gaming console, Microsoft uses that same strategy. But Ballmer hinted that there could be some variety with the Xbox. "You might have more form factors in the future for different price points and options," he said.

Ballmer also said that Microsoft wants to help foster the development of different cloud-computing services, both private and public. "How does the cloud become something that not just Microsoft and four other companies run on the behalf of the whole planet? How do we give the cloud back to you?" he said. "You should be able to, if you want, run your own cloud."

In some cases Microsoft may be eager to help organizations run their own hosted environments because it doesn't make sense for the company to do so itself. For instance, a government might have regulations that hosted data be kept within the country's borders. But in a small country, Microsoft may not be interested in making the investment. "This company is not likely to build a public cloud in Slovenia any time soon," Ballmer said. Instead, Microsoft would like to sell a set of products built around its Azure cloud services that a country like Slovenia can buy and implement itself.

The potential benefits of cloud computing for companies and researchers are immense, Ballmer said. For instance, he talked about how bringing the world's poorest out of poverty will likely mean that those people will consume more energy. "We need to speed up the rate of scientific innovation" that can help solve climate change issues before that happens, he said. Researchers might be better able to run experiments quickly and analyze more data if they are able to access public cloud services, he said.

The cloud "will create opportunities for all the folks in this room to do important research and build important projects," Ballmer said.

The hosted computing model creates new possibilities for businesses too. "I think we are seeing and will continue to see where there are literally new software investments that create new business models, new opportunities to start and form businesses because of this commercial software infrastructure that's never existed before," he said.

For instance, a new company might only have the resources to offer a product to people in its local community. But if it can use hosted computing, it can offer the product to a wider audience, paying for the compute services as it uses them rather than investing in a data center up front.

Ballmer also suggested that the cloud might even make some open-source developers more interested in commercializing their developments. "With the advent of this new commercial infrastructure, some inventors can now ask, how can I monetize this, how can I get an economic value from the innovations that I get a chance to create," he said.



From: FJB 3/4/2010 6:55:34 PM

Cisco Claims AT&T Femto as Its Own

MARCH 4, 2010 | Dan Jones

lightreading.com

Cisco Systems Inc. (Nasdaq: CSCO) has finally posted details showing that it is behind the AT&T Inc. (NYSE: T) MicroCell product, nearly two years after Light Reading Mobile reported that it was working on a femtocell for the operator.

Cisco recently put up a Web page detailing its work with AT&T on MicroCell. The operator has been using the units in customer trials since September 2009, recently adding Las Vegas as a new market. (See AT&T Takes MicroCells to Vegas.)

As Andy Tiller over at 3G in The Home notes, the AT&T-Cisco tie-up must be "the worst kept secret in the femtocell industry." LR Mobile first reported in May 2008 that Cisco was using ip.access Ltd. technology to develop a femtocell for AT&T. (See Cisco, ip.access Prep Femto Combo and Cisco Femto Spotted at AT&T.)

AT&T hasn't confirmed a launch date for the home base stations yet, but a source recently told us that a second-quarter launch is looking possible. The carrier's trials are also likely to extend to Los Angeles and San Francisco soon. (See MWC Rumor: AT&T Eyes Q2 Femto Launch, Enterprise Suppliers.)

Such a timetable could very well make AT&T the first major US operator to launch 3G femtocells. Neither Sprint Nextel Corp. (NYSE: S) nor Verizon Wireless has announced customer trials for the voice and data coverage improvement technology yet.



From: FJB 3/4/2010 7:28:59 PM

Microsoft envisions ultra-modular data centers

computerworld.com
Joab Jackson

March 3, 2010 (IDG News Service) In the years to come, Microsoft's data centers may not be huge buildings tightly packed with server racks, but rather rows of small, stand-alone IT units spread across acres and acres of cool, cheap land.

At the DatacenterDynamics conference in New York on Wednesday, Microsoft data center general manager Kevin Timmons outlined some prototype work his unit is doing to design its next generation of data centers, in collaboration with Microsoft Research.

His vision is radically different from most of what the company already has in place.

The company is field-testing something Timmons calls IT PACs, or IT preassembled components, which are small, self-contained units that are assembled off-site and can be linked together to build out an entire data center.

Microsoft, he said, is facing the same challenges as most data center operators. It needs the ability to ramp up capacity in short order, but would like to avoid the massive up-front costs and long lead times required to build out traditional data centers. Given this set of conditions, Microsoft's goal for building its next set of data centers is "ultra-modularity," Timmons said.

Instead of paying US$400 million or more up front to build a data center, Microsoft would prefer to purchase some land, build a sub-station and then populate the acreage with modular units of servers as demand grows.

"We want to view our data centers as more of a traditional manufacturing supply chain, instead of monolithic builds," he said. "It won't all be built on-site in one shot."

By going with this approach, Microsoft can cut the time it takes to ramp up new server capability in half, as well as reduce the costs of building out new data centers, Timmons predicted. "You don't have to commit to a $400 million data center and hope that demand shows up," he said.

Over the past few years Microsoft has been moving toward more modular designs, moving from purchasing individual servers to racks of servers to, most recently, entire containers filled with servers. Microsoft built out its past two data centers, located outside of Chicago and Dublin, using, in part, containers.

The new design takes this modularity concept even further.

The IT PACs are "not really containers in a traditional sense," Timmons said. "They are really integrated air-handling and IT units."

The units themselves could hold anywhere from one to 10,000 servers. The idea is that when the software giant requires more resources, it can have one of these IT PACs shipped to the site and "plugged into the spine," which supplies power and network connectivity to the data center.

Microsoft has built two proof-of-concept models so far. Its next data center, which the company will announce in a few months, will use some form of these IT PACs, Timmons said.

The units will be assembled entirely from commercially available components. A single person should be able to build a unit within four working days. The servers will be stacked in rows, sandwiched between air intake and output vents.

For cooling, ambient air can be sucked in one side, run through the servers and exhausted out the other, with some of the air recirculated to even the overall temperature of the unit. No mechanical cooling units will be used. Networking and power buses will run over the tops of the servers.

The construction materials rely heavily on steel and aluminium, both easily recyclable. The water requirements can be met by a single hose with residential levels of water pressure, he said.

The development team considered different sizes of containers, Timmons said, keeping an eye toward making the units easily shippable. They settled on a size that could contain 1,200 to 2,100 servers and draw between 400 and 600 kilowatts.
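Those figures imply a sanity-checkable power envelope per server:

    # Per-server power implied by the quoted IT PAC sizing (simple arithmetic).
    for servers, kw in ((1200, 400), (2100, 600)):
        print(f"{servers} servers at {kw} kW -> {kw * 1000 / servers:.0f} W/server")

That works out to roughly 286 to 333 W per server, consistent with commodity rack servers of the era.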

The units can be placed inside a large building, or when equipped with outer protective panels, reside out in the open.

One of the chief requirements of IT PACs, he admitted, is that they reside in an area where the ambient temperature is mild enough that it can provide sufficient cooling. Because of their highly portable nature, this should not be a problem, he said.

"If we're doing our job right in site election, square footage will be cheap for me. I want to find a place with lots of room to expand. I don't want to worry about a watts-per-square-foot problem. I'd like to worry about having enough acreage," he said. "We're doing a good job in site selection when we don't have to squeeze in 500 watts per square foot."

Due to their minimal use of mechanical cooling, Timmons estimated that the PUE ratio for its IT PACs would be 1.26 to 1.35, depending on the outside conditions. PUE, or power usage effectiveness, compares overall power supplied to the data center against the amount that actually reaches IT equipment.

A typical data center PUE is around 2.1, according to industry estimates.
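Since PUE is just total facility power divided by IT power, the saving is easy to quantify; the 1 MW IT load below is an assumed example:

    # PUE = total facility power / IT equipment power (assumed 1 MW IT load).
    it_kw = 1000.0
    for pue in (1.26, 1.35, 2.1):
        total_kw = it_kw * pue
        print(f"PUE {pue}: {total_kw:.0f} kW total, {total_kw - it_kw:.0f} kW overhead")

At a typical PUE of 2.1, cooling and distribution overhead (1,100 kW) exceeds the IT load itself; the IT PAC design cuts that overhead to roughly 260 to 350 kW for the same servers.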

If the IT PACs are ultimately pushed into production, Timmons said he hasn't fully decided if Microsoft will build them itself or contract them out. It would probably be a mix of the two, he predicted. "I know how much it costs to build one of these now," he said.
