
Western Digital (WDC)


To: go_gatrz who wrote (11031)  7/23/2010 1:54:05 PM
From: Brazilian Investor
   of 11057
 
As for WDC, they are hitting on every cylinder plus a few. They got whacked after the last CC, but that was pure BS just like all the other times. The sharks never rest. That pothole has already been erased by a wide margin. They still have margin gains to come from Komag, the notebook business is surging beyond expectations, and they now have the areal density leadership in both 2.5" & 3.5". The world has been turned upside down, Capt. Jack Sparrow!

I couldn't agree more with you. I read your posts on the Yahoo board, but the level of the comments there is unbelievably low. I just got into WDC this week, adding positions at both $31.05 and $28.70, and I don't regret it.

WDC's reputation is not stained, unlike Seagate's or Hitachi's. To prove my point: who is the best-rated manufacturer of internal and external hard drives? Western Digital.

So what do I think is coming next? First, I don't think business was or is as bad as many of the ANALysts wanted to make it sound. Also, maybe - just maybe - the elephants are coming to the realization that the storage sector IS HOT and that there is plenty of money to be made. And that is more true in HDD than any other segment. If so, then valuations should rise from their pathetic 8.5x to at least something over 10x. Heaven forbid we should ever see just plain old average tech multiples of 15x.
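
For concreteness, here's the multiple-expansion arithmetic as a quick sketch. The multiples (8.5x, 10x, 15x) are from the post above; the EPS figure is purely hypothetical, for illustration only:

```python
# Back-of-the-envelope multiple expansion. Multiples are from the post;
# the EPS figure is hypothetical.
eps = 3.50  # hypothetical trailing earnings per share, in USD

for multiple in (8.5, 10.0, 15.0):
    print(f"P/E {multiple:>4}: implied price ${multiple * eps:.2f}")

# Re-rating from 8.5x to 10x alone is ~18% upside with zero earnings growth:
print(f"re-rating gain: {10.0 / 8.5 - 1:.1%}")
```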

The world's demand for storage keeps increasing. Even if analysts say the future is in solid-state media, a lot still has to happen before solid state reaches the high storage density and low price point of regular hard drives.

Some analysts are saying that the regular hard drive business is dead because of devices like the Apple iPad and the rise of cloud computing, which IMO only reinforces my previous point about high density and low cost: the cloud itself has to be built on cheap, dense storage.

I think that right now my predictions, or anyone else's for that matter, are pure speculation based on crowd psychology rather than actual financial statistics. Because if the latter were the case, I wouldn't be seeing WDC hit a 52-week low today.



To: Brazilian Investor who wrote (11032)  7/23/2010 2:36:28 PM
From: Cheeky Kid
   of 11057
 
Samsung is a big player in the HDD market too. I don't know what the best drive is; they are all prone to failure. Anything with that many moving parts is going to die one day. Over the past 25 years I've had 8 hard drives fail on me, across all brands.

Hard Drives
reviews.cnet.com



To: Brazilian Investor who wrote (11032)  7/24/2010 10:47:48 PM
From: Mark O. Halverson
   of 11057
 
Nice to see someone on the board after two-plus years of no posts. Western Digital is incredibly cash rich and doing extremely well at present. Unfortunately, even with record earnings and revenues, the numbers fell short of expectations, so naturally the stock was pummeled. There seems to be a mindset that solid state will take over from hard disk storage, rendering it obsolete. I don't buy it. There's ample need for both types. Solid state will keep encroaching on the server market and on rapid PC boot-up. But for video, movies and other mass storage it won't replace the hard disk for a long, long time, if ever. Once all this is realized I think WDC will bounce back strongly. By the way, I'm long on WDC, STX, SNDK and MU.



From: d  9/27/2010 6:26:21 PM
   of 11057
 
eweek.com

Per the above, and contrary to the claims of others, the Apple iPad is NOT cannibalizing sales of laptop PCs. This takes away ONE of the reasons that shares of WDC are being dumped.

d



To: Mark O. Halverson who wrote (11034)  12/30/2012 7:43:54 PM
From: Gottfried
   of 11057
 
WDC is one of 10 stocks added to the NDX on Dec 24, 2012. nasdaq.com



From: zax  9/10/2014 10:43:24 AM
   of 11057
 
Western Digital's HGST subsidiary today announced it is shipping its first 8TB drive and the world's first 10TB helium-filled hard drive. The 3.5-in, 10TB drive also marks HGST's first foray into shingled magnetic recording (SMR) technology, which Seagate began using last year. Unlike standard perpendicular magnetic recording (PMR), where data tracks rest side by side, SMR overlaps the tracks on a platter like shingles on a roof, thereby allowing a higher areal density. Seagate has said SMR technology will allow it to achieve 20TB drives by 2020. That company has yet to use helium, however. HGST said its use of hermetically sealed helium drives reduces friction among moving drive components and keeps dust out. Both drives use a 7-platter configuration with a 7200 RPM spindle speed. The company said it plans to discontinue its production of air-only drives by 2017, replacing all data center models with helium drives.
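
As a quick sanity check on those specs (my own arithmetic, not from the announcement), the 10TB drive works out to roughly 1.4TB per platter:

```python
# Per-platter arithmetic for the 10TB, 7-platter helium drive quoted above.
capacity_tb = 10
platters = 7

per_platter = capacity_tb / platters   # ~1.43 TB per platter
per_surface = per_platter / 2          # two recording surfaces per platter
print(f"{per_platter:.2f} TB/platter, {per_surface:.2f} TB/surface")
```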

via SlashDot



From: FJB  11/13/2015 9:41:43 AM
   of 11057
 
ReRAM Gains Steam

semiengineering.com

New memory finds a lucrative niche between other existing memory types as competition grows.

Resistive RAM appears to be gaining traction. Once considered a universal memory candidate—a replacement for DRAM, flash and SRAM—ReRAM is carving out a niche between DRAM and storage-class memory. Now the question is how large that niche ultimately becomes and whether other competing technologies rush into that space.

ReRAM (also known as RRAM) is a type of non-volatile memory that began garnering attention in 2009 when startup Unity Semiconductor emerged from stealth mode. Rambus bought Unity in 2012 because it was one of several contenders for the next generation of memory technology, along with ferroelectric RAM (FeRAM) and magnetoresistive RAM (MRAM). ReRAM also has been considered a possible replacement for 2D NAND, NOR flash, and other memory types.

Since then, multiple competitors have entered into the ReRAM business, which seems to validate the potential here.

“I have worked in memory all my career, and for years it was looked down upon as boring,” said Gary Bronner, vice president of Rambus Labs. “Today it is leading innovation. It’s very exciting.”

He’s not alone in that view. “It’s a really exciting time to be in the memory business,” said Sylvain Dubois, vice president of strategic marketing at Crossbar.

What makes ReRAM so interesting is the limitations of the other memory choices. There is DRAM for rapid-access memory; NAND flash, which is three orders of magnitude slower; and there is storage-class memory in between. Storage-class memory, a term first coined by IBM several years ago, could have a huge impact on computation efficiency.
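
To put rough numbers on that hierarchy (ballpark figures commonly cited in the industry, not from the article):

```python
# Ballpark access latencies behind the "three orders of magnitude" remark.
# Figures are typical published orders of magnitude, not from the article.
latencies_ns = {
    "DRAM":                 100,      # ~1e2 ns
    "storage-class memory": 1_000,    # ~1e3 ns, the gap ReRAM targets
    "NAND flash":           100_000,  # ~1e5 ns, ~1000x slower than DRAM
}
for tech, ns in latencies_ns.items():
    print(f"{tech:>22}: {ns:>9,} ns  ({ns // 100}x DRAM)")
```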

“I believe that the gap could be filled by two or three different types of ReRAM, and we could see a real reduction in the volume of DRAM being used,” Bronner said. “This would have a very significant impact on the industry. Architects have been very clever at taking advantage of developments. They already take advantage of the hierarchy of memory on chip and chip to chip.”

There is a decent list of alternative memory types that all rely upon a bi-stable material as the storage element that changes resistance. Rambus is working with a multilayer metal oxide structure that changes resistance by injecting ions into the material.

Crossbar uses silver atoms suspended in an amorphous silicon matrix. Under write voltage, atoms from a top silver layer migrate into the matrix to form bridges of conductive metal filaments. “These filaments are only 3nm in diameter, but create a very large on/off ratio,” Dubois said. The company has published results for a 7nm read cell.

The other option that has been widely publicized involves phase-change materials, which depend on melting a material and then cooling, quickly or slowly, to create either crystalline or amorphous phases. In terms of these materials choices, Bronner observed that “the physics of the phase change material is probably the best understood.”

However, thermal-based solutions have had a rocky history in the semiconductor industry, from Heat Assisted Magnetic Recording (HAMR) to smectic liquid crystal displays. The problem is that heat spreads, so there is crosstalk between neighbors and DC heating of the part that depends on duty cycle.

“Many people have favored other materials over phase change for these reasons,” Bronner said. “However, Intel and Micron have pioneered work in these materials, presumably because the device physics is much better understood.”

In the end heat may limit the phase change device to lower bandwidths and higher power, according to Dubois.

In addition to material choices, there also are architectural choices. Although crossbars have received a lot of publicity, a 1 transistor-1 storage cell similar to DRAM is where many are starting, particularly for embedded memory. It makes integration much simpler and gives the best access times.

“You can add the new material after the conventional device processing is completed,” Bronner said.

Crossbars give higher density and single bit access, but are limited in the size of each block of crossbars. This is a similar problem to the old multiplexed displays, where the cell count depends on the non-linearity of the on-off switch.

The third choice is a 3D stack. One possibility is multilayers of crossbars, but this requires lithography at each layer. More interesting, in Bronner’s view, is “an equivalent to 3D NAND, where one litho step creates a vertical string of storage cells. 3D NAND will allow flash to continue to scale for five to seven years, and then new materials will have an opportunity.”

The penalty for storage in the form of strings is that there is no longer single-bit access, so the memory slows down. Different access times and cost structure could result from single bit, byte, and multi-byte architectures.

Dubois emphasized that crossbars and 3D all require a multiplexed 1 transistor to N cell architecture, which in turn means that each cell must have a non-linear element. Some teams use a diode with each cell, but Crossbar has demonstrated a cell that has its own non-linearity.

Who and what will win?
“The most impressive progress was disclosed by the Intel/Micron partnership over the summer when they described a 128Gbit 3D XPoint memory chip,” said Bronner. “To even think about building a device of this size requires a very mature level of process and defect control.”

Intel had a previous false start with phase-change memory. The company made it clear that the technology has shifted, but has not elaborated on that, according to industry sources.

A search of the U.S. Patent and Trademark Office application database found crossbar memory materials patent applications, assigned to Micron as recently as 2012, that focused on metal chalcogenide phase-change systems, which suggests that 3D XPoint may well be an improved phase-change system.

In its announcement Intel claimed “a crossbar structure which is 1,000 times faster than NAND flash and 10 times the density of DRAM.” The company also showed a patterned wafer, discussed an operational manufacturing plant in Utah, and said it plans to sell product next year.

Panasonic currently sells a tantalum oxide-based ReRAM embedded flash replacement for on-chip static memory.

Rambus purchased the ReRAM IP of Unity Semiconductor in 2012 for $33M, and has licensed that IP to a number of parties. Unity had raised more than $22M and created 145 patents.

“Rambus is also partnering with licensees, such as Tezzaron, to create embedded flash products,” Bronner said. “The focus of licensing is architecture and materials.”

A patent search suggests those materials are metal oxides.

Elsewhere, in the startup world, Crossbar announced on Sept. 14 that it has completed a $35 million Series D funding round, bringing the total investment so far to $85 million. Crossbar plans to use the funds to continue the commercial ramp of its “game-changing non-volatile memory (NVM) technology.” At IEDM in 2014, the company reported a device architecture that “has been successfully demonstrated in a 4 Mbit integrated 3D stackable passive crossbar array.”

Dubois said Crossbar received wafers from one of its production manufacturing partners. “The new funding will allow us to put products with embedded RAM on the shelves and move Crossbar forward.”

It appears that differences between the various competitors are primarily in their storage material choices that determine power, access time, read/write cycles and cell size. This is a competition that requires deep pockets, and smaller players are relying on being able to use existing fabs to compete with the industry giants who can bankroll a custom fab.



From: more100  1/29/2016 9:07:14 AM
   of 11057
 
Western Digital (WDC) reported above-consensus non-GAAP earnings per share for its second quarter, while its revenues were about in line.



From: FJB  3/27/2016 3:49:11 PM
   of 11057
 
Where’s My Petabyte Disk Drive?
Posted on 27 March 2016 by Brian Hayes
Fourteen years ago I noted that disk drives were growing so fast I couldn’t fill them up. Between 1997 and 2002, storage capacity doubled every year, allowing me to replace a 3 gigabyte drive with a new 120 gigabyte model. I wrote:

Extrapolating the steep trend line of the past five years predicts a thousandfold increase in capacity by about 2012; in other words, today’s 120-gigabyte drive becomes a 120-terabyte unit.

Extending that same growth curve into 2016 would allow for another four doublings, putting us on the threshold of the petabyte disk drive (i.e., 10^15 bytes).
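
For what it's worth, the extrapolation checks out arithmetically (my own sketch, assuming one doubling per year from the 120GB drive of 2002):

```python
# Sanity check on the quoted extrapolation: capacity doubling every year.
capacity_gb = 120              # the 2002 drive

by_2012 = capacity_gb * 2**10  # ten doublings: ~120 TB, the "thousandfold"
by_2016 = capacity_gb * 2**14  # four more doublings
print(f"2012: {by_2012 / 1e3:.0f} TB   2016: {by_2016 / 1e6:.2f} PB")
# -> 2012: 123 TB   2016: 1.97 PB -- past the petabyte threshold
```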

None of that has happened. The biggest drives in the consumer marketplace hold 2, 4, or 6 terabytes. A few 8- and 10-terabyte drives were recently introduced, but they are not yet widely available. In any case, 10 terabytes is only 1 percent of a petabyte. We have fallen way behind the growth curve.

The graph below extends an illustration that appeared in my 2002 article, recording growth in the areal density of disk storage, measured in bits per square inch:



The blue line shows historical data up to 2002 (courtesy of Edward Grochowski of the IBM Almaden Research Center). The bright green line represents what might have been, if the 1997–2002 trend had continued. The orange line shows the real status quo: We are three orders of magnitude short of the optimistic extrapolation. The growth rate has returned to the more sedate levels of the 1970s and 80s.

What caused the recent slowdown? I think it makes more sense to ask what caused the sudden surge in the 1990s and early 2000s, since that’s the kink in the long-term trend. The answers lie in the details of disk technology. More sensitive read heads developed in the 90s allowed information to be extracted reliably from smaller magnetic domains. Then there was a change in the geometry of the domains: the magnetic axis was oriented perpendicular to the surface of the disk rather than parallel to it, allowing more domains to be packed into the same surface area. As far as I know, there have been no comparable innovations since then, although a new writing technology is on the horizon. (It uses a laser to heat the domain, making it easier to change the direction of magnetization.)

As the pace of magnetic disk development slackens, an alternative storage medium is coming on strong. Flash memory, a semiconductor technology, has recently surpassed magnetic disk in areal density; Micron Technology reports a laboratory demonstration of 2.7 terabits per square inch. And Samsung has announced a flash-based solid-state drive (SSD) with 15 terabytes of capacity, larger than any mechanical disk drive now on the market. SSDs are still much more expensive than mechanical disks—by a factor of 5 or 10—but they offer higher speed and lower power consumption. They also offer the virtue of total silence, which I find truly golden.

Flash storage has replaced spinning disks in about a quarter of new laptops, as well as in all phones and tablets. It is also increasingly popular in servers (including the machine that hosts bit-player.org). Do disks have a future?



In my sentimental moments, I’ll be sorry to see spinning disks go away. They are such jewel-like marvels of engineering and manufacturing prowess. And they are the last link in a long chain of mechanical contrivances connecting us with the early history of computing—through Turing’s bombe and Babbage’s brass gears all the way back to the Antikythera mechanism two millennia ago. From here on out, I suspect, most computers will have no moving parts.

Maybe in a decade or two the spinning disk will make a comeback, the way vinyl LPs and vacuum tube amplifiers have. “Data that comes off a mechanical disk has a subtle warmth and presence that no solid-state drive can match,” the cognoscenti will tell us.

“You can never be too rich or too thin,” someone said. And a computer can never be too fast. But the demand for data storage is not infinitely elastic. If a file cabinet holds everything in the world you might ever want to keep, with room to spare, there’s not much added utility in having 100 or 1,000 times as much space.

In 2002 I questioned whether ordinary computer users would ever fill a 1-terabyte drive. Specifically, I expressed doubts that my own files would ever reach the million megabyte mark. Several readers reassured me that data will always expand to fill the space available. I could only respond “We’ll see.” Fourteen years later, I now have the terabyte drive of my dreams, and it holds all the words, pictures, music, video, code, and whatnot I’ve accumulated in a lifetime of obsessive digital hoarding. The drive is about half full. Or half empty. So I guess the outcome is still murky. I can probably fill up the rest of that drive, if I live long enough. But I’m not clamoring for more space.

One factor that has surely slowed demand for data storage is the emergence of cloud computing and streaming services for music and movies. I didn’t see that coming back in 2002. If you choose to keep some of your documents on Amazon or Azure, you obviously reduce the need for local storage. Moreover, offloading data and software to the cloud can also reduce the overall demand for storage, and thus the global market for disks or SSDs. A typical movie might take up 3 gigabytes of disk space. If a million people load a copy of the same movie onto their own disks, that’s 3 petabytes. If instead they stream it from Netflix, then in principle a single copy of the file could serve everyone.

In practice, Netflix does not store just one copy of each movie in some giant central archive. They distribute rack-mounted storage units to hundreds of internet exchange points and internet service providers, bringing the data closer to the viewer; this is a strategy for balancing the cost of storage against the cost of communications bandwidth. The current generation of the Netflix Open Connect Appliance has 36 disk drives of 8 terabytes each, plus 6 SSDs that hold 1 terabyte each, for a total capacity of just under 300 terabytes. (Even larger units are coming soon.) In the Netflix distribution network, files are replicated hundreds or thousands of times, but the total demand for storage space is still far smaller than it would be with millions of copies of every movie.
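
The quoted numbers are easy to verify (a quick check of my own, not from the post):

```python
# Checking the Netflix Open Connect Appliance capacity quoted above,
# and the streaming-vs-local-copies comparison from the prior paragraph.
hdd_tb = 36 * 8   # 36 disks at 8 TB
ssd_tb = 6 * 1    # 6 SSDs at 1 TB
print(f"OCA capacity: {hdd_tb + ssd_tb} TB")   # -> 294 TB, "just under 300"

movie_gb, viewers = 3, 1_000_000
print(f"one copy each: {movie_gb * viewers / 1e6:.0f} PB")  # -> 3 PB
# Even a thousand replicated copies across appliances is only ~3 TB.
```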

A recent blog post by Eric Brewer, Google’s vice president for infrastructure, points out:

The rise of cloud-based storage means that most (spinning) hard disks will be deployed primarily as part of large storage services housed in data centers. Such services are already the fastest growing market for disks and will be the majority market in the near future. For example, for YouTube alone, users upload over 400 hours of video every minute, which at one gigabyte per hour requires more than one petabyte (1M GB) of new storage every day or about 100x the Library of Congress.
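
The raw arithmetic behind that YouTube figure (my own check, not from the post) comes to about 0.58 PB/day of source video, so the quoted petabyte presumably also counts the multiple transcoded copies kept of each upload:

```python
# Raw ingest implied by the quoted YouTube numbers.
hours_per_minute = 400
gb_per_hour = 1

gb_per_day = hours_per_minute * 60 * 24 * gb_per_hour
print(f"{gb_per_day / 1e6:.2f} PB/day of source video")   # -> 0.58 PB/day
# Storing each upload in several encodings (an assumption on my part)
# plausibly pushes the daily total past the quoted petabyte.
```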

Thus Google will not have any trouble filling up petabyte drives. An accompanying white paper argues that as disks become a data center specialty item, they ought to be redesigned for this environment. There’s no compelling reason to stick with the present physical dimensions of 2½ or 3½ inches. Moreover, data-center disks have different engineering priorities and constraints. Google would like to see disks that maximize both storage capacity and input-output bandwidth, while minimizing cost; reliability of individual drives is less critical because data are distributed redundantly across thousands of disks.

The white paper continues:

An obvious question is why are we talking about spinning disks at all, rather than SSDs, which have higher [input-output operations per second] and are the “future” of storage. The root reason is that the cost per GB remains too high, and more importantly that the growth rates in capacity/$ between disks and SSDs are relatively close . . . , so that cost will not change enough in the coming decade.

If the spinning disk is remodeled to suit the needs and the economics of the data center, perhaps flash storage can become better adapted to the laptop and desktop environment. Most SSDs today are plug-compatible replacements for mechanical disk drives. They have the same physical form, they expect the same electrical connections, and they communicate with the host computer via the same protocols. They pretend to have a spinning disk inside, organized into tracks and sectors. The hardware might be used more efficiently if we were to do away with this charade.

Or maybe we’d be better off with a different charade: Instead of dressing up flash memory chips in the disguise of a disk drive, we could have them emulate random access memory. Why, after all, do we still distinguish between “memory” and “storage” in computer systems? Why do we have to open and save files, launch and shut down applications? Why can’t all of our documents and programs just be ever-present and always at the ready?

In the 1950s the distinction between memory and storage was obvious. Memory was the few kilobytes of magnetic cores wired directly to the CPU; storage was the rack full of magnetic tapes lined up along the wall on the far side of the room. Loading a program or a data file meant finding the right reel, mounting it on a drive, and threading the tape through the reader and onto the take-up reel. In the 1970s and 80s the memory/storage distinction began to blur a little. Disk storage made data and programs instantly available, and virtual memory offered the illusion that files larger than physical memory could be loaded all in one go. But it still wasn’t possible to treat an entire disk as if all the data were present in memory. The processor’s address space wasn’t large enough. Early Intel chips, for example, used 20-bit addresses, and therefore could not deal with code or data segments larger than 2^20 ≈ 10^6 bytes.

We live in a different world now. A 64-bit processor can potentially address 2^64 bytes of memory, or 16 exabytes (i.e., 16,000 petabytes). Most existing processor chips are limited to 48-bit addresses, but this still gives direct access to 281 terabytes. Thus it would be technically feasible to map the entire content of even the largest disk drive onto the address space of main memory.
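
The address-space arithmetic, spelled out (nothing here beyond powers of two):

```python
# Address-space sizes from the paragraph above.
for bits, note in [(20, "early Intel segment limit, ~1 MB"),
                   (48, "today's practical limit, ~281 TB"),
                   (64, "full 64-bit space, ~16 exabytes")]:
    print(f"{bits}-bit: {2**bits:>26,} bytes  ({note})")
```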

In current practice, reading from or writing to a location in main memory takes a single machine instruction. Say you have a spreadsheet open; the program can get the value of any cell with a load instruction, or change the value with a store instruction. If the spreadsheet file is stored on disk rather than loaded into memory, the process is quite different, involving not single instructions but calls to input-output routines in the operating system. First you have to open the file and read it as a one-dimensional stream of bytes, then parse that stream to recreate the two-dimensional structure of the spreadsheet; only then can you access the cell you care about. Saving the file reverses these steps: The two-dimensional array is serialized to form a linear stream of bytes, then written back to the disk. Some of this overhead is unavoidable, but the complex conversions between serialized files on disk and more versatile data structures in memory could be eliminated. A modern processor could address every byte of data—whether in memory or storage—as if it were all one flat array. Disk storage would no longer be a separate entity but just another level in the memory hierarchy, turning what we now call main memory into a new form of cache. From the user’s point of view, all programs would be running all the time, and all documents would always be open.
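
Memory-mapped files are the closest thing today's systems offer to this flat view: the program addresses bytes on disk as if they were an in-memory array, with no explicit read/parse/serialize cycle. A minimal sketch (the file name and layout are hypothetical):

```python
# Minimal sketch: addressing on-disk data as if it were memory, via mmap.
# 'cells.bin' is a hypothetical file holding a flat array of 8-byte floats.
import mmap
import struct

ROW_LEN = 100  # hypothetical spreadsheet width, in cells

with open("cells.bin", "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mem:
        # "Load" cell (3, 7): index straight into the mapping --
        # no open/read/parse cycle.
        offset = (3 * ROW_LEN + 7) * 8
        (value,) = struct.unpack_from("<d", mem, offset)

        # "Store" a new value: write through the mapping; the OS
        # pages the change back to disk. No explicit save step.
        struct.pack_into("<d", mem, offset, value + 1.0)
```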

Is this notion of merging memory and storage an attractive prospect or a nightmare? I’m not sure. There are some huge potential problems. For safety and sanity we generally want to limit which programs can alter which documents. Those rules are enforced by the file system, and they would have to be re-engineered to work in the memory-mapped environment.

Perhaps more troubling is the cognitive readjustment required by such a change in architecture. Do we really want everything at our fingertips all the time? I find it comforting to think of stored files as static objects, lying dormant on a disk drive, out of harm’s way; open documents, subject to change at any instant, require a higher level of alertness. I’m not sure I’m ready for a more fluid and frenetic world where documents are laid aside but never put away. But I probably said the same thing 30 years ago when I first confronted a machine capable of running multiple programs at once (anyone remember MultiFinder?).

The dichotomy between temporary memory and permanent storage is certainly not something built into the human psyche. I’m reminded of this whenever I help a neophyte computer user. There’s always an incident like this:

“I was writing a letter last night, and this morning I can’t find it. It’s gone.”

“Did you save the file?”

“Save it? From what? It was right there on the screen when I turned the machine off.”

Finally the big questions: Will we ever get our petabyte drives? How long will it take? What sorts of stuff will we keep on them when the day finally comes?

The last time I tried to predict the future of mass storage, extrapolating from recent trends led me far astray. I don’t want to repeat that mistake, but the best I can suggest is a longer-term baseline. Over the past 50 years, the areal density of mass-storage media has increased by seven orders of magnitude, from about 10^5 bits per square inch to about 10^12. That works out to about seven years for a tenfold increase, on average. If that rate is an accurate predictor of future growth, we can expect to go from the present 10 terabytes to 1 petabyte in about 15 years. But I would put big error bars around that number.
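
His long-baseline estimate is easy to reproduce (same numbers as the paragraph above):

```python
# Seven orders of magnitude over 50 years -> years per tenfold increase.
years_per_10x = 50 / 7                      # ~7.1 years per 10x

# 10 TB to 1 PB is two more orders of magnitude:
print(f"{years_per_10x:.1f} years per 10x; "
      f"1 PB in ~{2 * years_per_10x:.0f} years")
# -> 7.1 years per 10x; 1 PB in ~14 years (the post rounds to "about 15")
```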

I’m even less sure about how those storage units will be used, if in fact they do materialize. In 2002 my skepticism about filling up a terabyte of personal storage was based on the limited bandwidth of the human sensory system. If the documents stored on your disk are ultimately intended for your own consumption, there’s no point in keeping more text than you can possibly read in a lifetime, or more music than you can listen to, or more pictures than you can look at. I’m now willing to concede that a terabyte of information may not be beyond human capacity to absorb. But a petabyte? Surely no one can read a billion books or watch a million hours of movies.

This argument still seems sound to me, in the sense that the conclusion follows if the premise is correct. But I’m no longer so sure about the premise. Just because it’s my computer doesn’t mean that all the information stored there has to be meant for my eyes and ears. Maybe the computer wants to collect some data for its own purposes. Maybe it’s studying my habits or learning to recognize my voice. Maybe it’s gathering statistics from the refrigerator and washing machine. Maybe it’s playing go, or gossiping over some secret channel with the Debian machine across the alley.

We’ll see.



From: Eric L  6/10/2016 10:48:08 AM
   of 11057
 
WDC's SanDisk Corporation acquisition completed (May 12, 2016) ...


wdc.com

IRVINE, Calif. — May 12, 2016 — Western Digital® Corporation (NASDAQ: WDC) today announced that its wholly-owned subsidiary Western Digital Technologies, Inc. has completed the acquisition of SanDisk Corporation (NASDAQ: SNDK). The addition of SanDisk makes Western Digital Corporation a comprehensive storage solutions provider with global reach, and an extensive product and technology platform that includes deep expertise in both rotating magnetic storage and non-volatile memory (NVM).

The Company also indicated that the debt financing associated with this transaction has been consummated and that the previously obtained funds from this financing have been released from escrow to Western Digital Technologies, Inc.

“Today is a significant day in the history of Western Digital,” said Steve Milligan, chief executive officer of Western Digital. “We are delighted to welcome SanDisk into the Western Digital family. This transformational combination creates a media-agnostic leader in storage technology with a robust portfolio of products and solutions that will address a wide range of applications in almost all of the world’s computing and mobile devices. We are excited to now begin focusing on the many opportunities before us, from leading innovation to bringing the best of what we can offer as a combined company to our customers. In addition, we will begin the work to fully realize the value of this combination through executing on our synergies, generating significant cash flow, as well as rapidly deleveraging our balance sheet, and creating significant long-term value for our shareholders.”

The integration process will begin immediately through the joint efforts of teams from both companies. As previously announced, Steve Milligan will continue to serve as chief executive officer of Western Digital, which will remain headquartered in Irvine, California. Sanjay Mehrotra, co-founder, president and chief executive officer of SanDisk, will serve as a member of the Western Digital Board of Directors, effective immediately.

“As a combined company, we will be best positioned to address the demands for data storage, which is growing exponentially every year,” said Sanjay Mehrotra. “Growth and change go hand in hand, and we couldn’t be happier to grow and change together with Western Digital. I look forward to contributing to realizing the potential of this combination as a member of the board.”

Under the terms of the transaction, each outstanding share of SanDisk common stock was converted into the right to receive $67.50 per share in cash and 0.2387 shares of Western Digital common stock.
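
For illustration, the per-share value of that consideration at a given WDC price (the cash amount and exchange ratio are from the release; the WDC price below is hypothetical):

```python
# Per-share merger consideration from the terms quoted above.
cash_per_share = 67.50    # from the release
exchange_ratio = 0.2387   # WDC shares per SNDK share, from the release
wdc_price = 40.00         # hypothetical WDC share price at closing

total = cash_per_share + exchange_ratio * wdc_price
print(f"value per SNDK share: ${total:.2f}")  # -> $77.05 at a $40 WDC price
```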

SanDisk shareholders looking for information with regard to the payment of the merger consideration should review the Public FAQ available in the Investor Relations section of our website at investor.wdc.com.

About Western Digital

Western Digital Corporation (NASDAQ: WDC) is an industry-leading provider of storage technologies and solutions that enable people to create, leverage, experience and preserve data. The company addresses ever-changing market needs by providing a full portfolio of compelling, high-quality storage solutions with customer-focused innovation, high efficiency, flexibility and speed. Our products are marketed under the HGST, SanDisk and WD brands to OEMs, distributors, resellers, cloud infrastructure providers and consumers. For more information, please visit www.hgst.com, www.wd.com, and www.sandisk.com.

# # #



More of the FAQ regarding shareholders here: wdc.com

The October 21, 2015 original announcement is here: wdc.com

Regulatory Approvals (below): wdc.com

May 10, 2016 — Western Digital® Corporation (NASDAQ: WDC) ("Western Digital" or the "Company") today announced that it has received regulatory approval from China's Ministry of Commerce ("MOFCOM") in connection with the planned acquisition by Western Digital Technologies, Inc., a wholly owned subsidiary of Western Digital, of SanDisk Corporation (NASDAQ: SNDK) ("SanDisk"). The MOFCOM decision completes the regulatory review process required for this transaction. Western Digital expects the transaction to close on Thursday, May 12, 2016.

"We are pleased to have received approval from MOFCOM, the final regulatory milestone for our combination with SanDisk," said Steve Milligan, chief executive officer of Western Digital. "We look forward to closing the transaction and to integrating our two global businesses to create the leading storage solutions company."

The transaction has also received regulatory approvals in the U.S., E.U., Singapore, Japan, Taiwan, South Korea, South Africa and Turkey. Western Digital and SanDisk shareholders voted to approve the transaction at their respective special meetings of shareholders held on March 15, 2016.

- Eric L. -
