Pastimes: All Things Technology - Media and Know HOW



From: Don Green, 1/18/2025 8:55:36 AM
1 Recommendation   of 1926
 
ChatGPT Isn't Responsible For the LA Wildfires, But It Isn't Helping
An email written by ChatGPT uses 17 ounces of water, while the data centers powering AI chatbots require huge amounts of it for cooling.


By Chandra Steele

January 14, 2025

(Credit: Patrick T. Fallon / Getty Images)
With Los Angeles still facing the threat of wildfires, social media has been flooded with posts blaming them on ChatGPT. While this accusation is inaccurate, it’s not unfounded, and the devastation in California should give anyone who uses generative AI pause.

The ferocity of the wildfires is a result of climate change, as is the water scarcity that resulted in dry hydrants and compounded the damage. Both of these things are potentially being made worse by AI, and the more people use it, the more rapidly it will cause harm.

The True Cost of AI

Everything we do online requires energy that's largely supplied by environmentally unfriendly fossil fuels. According to a Goldman Sachs report on data center power, a Google search consumes 0.3 watt-hours of electricity, while a single ChatGPT query consumes 2.9 watt-hours.

Meanwhile, the data centers emitting all of this carbon dioxide into the air require a huge amount of water to keep the machines inside cool. Shaolei Ren, an associate professor of electrical and computer engineering at UC Riverside, found that one email written by ChatGPT uses 17 ounces of water. Now apply that to all the mundane and ridiculous reasons why people use ChatGPT and other AI chatbots on a daily basis.
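
For a rough sense of scale, here is a back-of-the-envelope calculation in Python using only the figures quoted above (0.3 Wh per Google search, 2.9 Wh per ChatGPT query, 17 ounces of water per ChatGPT-written email). The daily query volume is a hypothetical number chosen purely for illustration:

# Rough scaling of the per-query figures cited above.
# The daily volume is hypothetical, not a measured number.
GOOGLE_WH_PER_SEARCH = 0.3   # watt-hours (Goldman Sachs figure cited above)
CHATGPT_WH_PER_QUERY = 2.9   # watt-hours (Goldman Sachs figure cited above)
WATER_OZ_PER_EMAIL = 17      # ounces (Shaolei Ren's estimate cited above)

queries_per_day = 10_000_000  # hypothetical volume, for illustration only

chatgpt_kwh = queries_per_day * CHATGPT_WH_PER_QUERY / 1000
google_kwh = queries_per_day * GOOGLE_WH_PER_SEARCH / 1000
water_gal = queries_per_day * WATER_OZ_PER_EMAIL / 128  # 128 fl oz per US gallon

print(f"ChatGPT: {chatgpt_kwh:,.0f} kWh/day vs. Google search: {google_kwh:,.0f} kWh/day")
print(f"Water, if each query were an email: {water_gal:,.0f} gallons/day")

At that hypothetical volume, the chatbot energy figure works out to roughly ten times the search figure, which is the ratio implied by the Goldman Sachs numbers above.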

Last year, Microsoft and Google reported a huge spike in emissions. In 2023, Microsoft's emissions went up 29%, and it used 23% more water, primarily due to "new technologies, including generative AI." Google said its greenhouse gas emissions surged 48% in the past five years thanks to the expansion of data centers that power its AI tools.

I am not usually one to assign responsibility for climate change to individuals, but the magnitude of literal power that we wield with AI and the consequences of our actions are too great to ignore.

Resisting AI

AI is being built into every aspect of our lives, from phones and computers to TVs and even fridges. Microsoft envisions AI agents taking on responsibilities across the workplace.



From: Don Green, 1/18/2025 8:58:05 AM
   of 1926
 
I Love Modern VR, But I Have To Admit That It's Dead On Arrival
Even VR diehards have to admit that this technology isn't going to take off any time soon.
I've been closely following the world of virtual reality since the Oculus Rift's pitch first took the world by storm, and ended up raising well over two million dollars on Kickstarter in 2012. Plenty of hardware and software has shipped since then, but more than ten years later, it still feels like we're in the infancy of VR. Unfortunately, we might have to wait a decade or two longer to see it mature into something beyond a truly niche interest.

Having used everything from the lowly Google Cardboard to the reasonably powerful Meta Quest 3 (released in 2023), my affection for VR has never once stumbled, and I've never been more sure that fetch isn't going to happen any time soon.

Very few people have the space

Despite having a house of my own — more space than the vast majority of city dwellers — I can't find a really good place to give VR the room it deserves.

If nobody else is home, I get to do some room-scale gaming, where I can walk around, but I end up with a very narrow corridor due to furniture that can't be easily moved. If my wife is home, I run into her by accident, so I retreat to my office with even less space.

And, even when I use a stationary mode for games like "Beat Saber," I end up regularly shifting out of bounds or nearly smacking my hand into the wall.

Living spaces simply aren't designed to be VR friendly, and you look like a maniac if you go outside with a helmet on. That isn't going to change anytime soon.

You can't just slap VR modes on everything

In a year where "Suicide Squad" was poorly received on consoles and PC, a brand new "Batman: Arkham" game released to much warmer reviews on the Meta Quest. That means the VR "Batman" is a better experience, right?

Well, I played "Arkham Shadow" myself, and while it's a solid game in certain aspects, I was left feeling cold towards AAA virtual reality. It feels like a less-precise interpretation of a fairly stale formula, and the VR-specific mechanics ended up more off-putting than exciting.

Slowly moving my arms up and down to climb a ladder doesn't make me enjoy the game more. Having to make broad sweeping motions to pop out my cape for a glide just makes traversal more fiddly. It's frustrating in a way that pressing a button or moving a stick with traditional video games is not.

It's clear that you have to design games around the limits and abilities of VR, and that means huge swaths of games are no-gos. Porting existing games isn't easy, and even reusing basic concepts or level designs can be problematic or at least sub-optimal.

That's not all, sadly. Unless you have industry leaders like Meta subsidizing development, dedicating enough developer resources to make top-tier experiences isn't profitable for most indie folks. "Batman" had Meta money, and even that turned out disappointing. It's no secret that other VR-friendly companies are having a difficult time making ends meet.

The usability problem

Wearing a helmet sucks. While straps and setups can help with the weight and pressure issues, and they will get better over time, there is no getting around the discomfort of having a robot strapped to your head.

Lenses fog, your face will itch, and you're going to get the VR sweats if you have it on for more than 45 minutes. And, if you wear glasses, you're either going to deal with inevitable slippage, or spend even more money for prescription lenses.

Some significant portion of the population will just straight-up vomit if they put on a VR helmet. Hopefully things get better with time, rapidly, but certain aspects are not something that can be engineered away without a trace.

Companies like Meta and Apple are seemingly convinced that people want to spend their days working and socializing with headsets on, and that has proven to be untrue. It doesn't matter how big a virtual screen can be, it's not a better experience than just looking at a monitor in the real world.

The Metaverse? It's a bust. Zuckerberg dumped tens of billions of dollars into it to end up with bupkis. Fetch. Isn't. Happening.

VR's strength is also its weakness

By far, the most compelling part of any VR experience is the much ballyhooed "presence" that effectively tricks your brain into believing that you're somewhere else.

I've yet to feel "immersed" in any video game played on television, but five minutes in "Vacation Simulator," and I'm transported far, far away from my living room. That's wonderful if I have absolutely nothing to do, and nobody else is near me — but that simply doesn't happen very often.

Inevitably, a pet wants attention, my wife has something to say, or my inbox will ding. Apple has tried its best to solve the issue with video passthrough and creepy eye projection, but there's no replacement for taking off the stupid helmet to deal with the real world.

I love being digitally transported, but it just doesn't fit into my life very well. If a VR diehard like me can barely overcome that barrier, imagine how hard it will be to convince skeptics.

[Image: Meta]



From: Don Green, 1/18/2025 10:52:09 PM
   of 1926
 
Reducing Data Center Peak Cooling Demand and Energy Costs With Underground Thermal Energy Storage

As US Data Centers Continue To Grow, Integrating Geothermal UTES Cooling Could Change the Game

The demand for data centers is projected to increase each year to meet the needs of AI, big data analytics, and cloud services. Photo by Dennis Schroeder, NREL
As the demand for U.S. data centers grows with the expansion of artificial intelligence, cloud services, and big data analytics, so do the energy loads these centers require.

By some estimates, data center energy demands are projected to consume as much as 9% of US annual electricity generation by 2030. As much as 40% of a data center's total annual energy consumption goes to cooling systems, which can also use a great deal of water. The peak demand of data centers during the hottest hours of the year is a much higher share and represents a large cost for the U.S. electric grid.

A new project led by the National Renewable Energy Laboratory (NREL) and funded by the U.S. Department of Energy’s (DOE's) Geothermal Technologies Office aims to address these cooling-system challenges by incorporating geothermal underground thermal energy storage (UTES) technology for data centers.

Data centers typically cool computing equipment by blowing cold air over the components using a water-cooled fan coil or by directly cooling the computing equipment with cool water. Geothermal electricity generation is one option to serve these continuous cooling and computing power requirements. However, emerging geothermal technologies like those that will be explored as part of the new Cold Underground Thermal Energy Storage (Cold UTES) project offer a unique opportunity to reduce data center cooling loads while building more resilient infrastructure that creates a stable source of cooling—in turn reducing the need to build power plants to serve data center cooling loads.

“The approach we're taking is to look into the technical and economic viability of the proposed Cold UTES technologies by projecting what data center loads will look like over the next 30 years,” said Guangdong Zhu, a senior researcher in NREL’s Center for Energy Conversion and Storage Systems and principal investigator for the Cold UTES project. “We’ll then do some projections and grid-scale analysis to show what this technology could look like if it's commercially deployed at a large number of data centers. We’re aiming to improve grid resilience and reduce the cost of required grid expansion.”

By using off-peak power to create a cold energy reserve underground, Cold UTES can be incorporated into existing data center cooling technologies and used during grid peak load hours. This charge/discharge cycling allows the technology to be optimized based on time-of-use and other key grid parameters, similar to a conventional battery charge/discharge cycling, thereby reducing the overall operating cost of the grid. The key difference is that Cold UTES can not only do the same diurnal storage as a conventional grid battery, but it can also achieve long-duration energy storage at seasonal time scales.
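
As a minimal sketch of that time-of-use logic, a Cold UTES reserve would bank cold during off-peak hours and cover chiller load during the utility's peak window. The hour window, charge/discharge rates, and capacity below are hypothetical placeholders, not NREL project figures:

from dataclasses import dataclass

# Hypothetical parameters; real Cold UTES sizing and tariffs are project-specific.
PEAK_HOURS = range(14, 20)       # assume 2 pm - 8 pm is the utility's peak window
CHARGE_RATE_MWH_TH = 5.0         # thermal MWh the reserve can absorb per hour
DISCHARGE_RATE_MWH_TH = 8.0      # thermal MWh of cooling it can deliver per hour
CAPACITY_MWH_TH = 60.0           # total cold reserve capacity

@dataclass
class ColdReserve:
    stored: float = 0.0

    def step(self, hour: int, cooling_load: float) -> float:
        """Return the thermal load (MWh) the chillers must still serve this hour."""
        if hour in PEAK_HOURS and self.stored > 0:
            # Peak: let the cold reserve cover as much of the load as it can.
            served = min(cooling_load, DISCHARGE_RATE_MWH_TH, self.stored)
            self.stored -= served
            return cooling_load - served
        # Off-peak: serve the load with chillers and bank extra cold for later.
        charge = min(CHARGE_RATE_MWH_TH, CAPACITY_MWH_TH - self.stored)
        self.stored += charge
        return cooling_load + charge  # charging adds to off-peak chiller work

# Usage: a flat 6 MWh/h cooling load over one day; peak-hour chiller load drops to zero.
reserve = ColdReserve()
chiller_load = [reserve.step(h, 6.0) for h in range(24)]
print([round(x, 1) for x in chiller_load])

The same structure extends to the seasonal storage mentioned above by letting the charge and discharge windows span months rather than hours.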

“Our expectation is that a Cold UTES system can provide a long-duration energy storage and industrial-scale cooling solution that is commercially attractive and technically viable for data centers,” said Jeff Winick, technology manager at DOE’s Geothermal Technologies Office. “This project will confirm the potential of these systems to provide significant savings and value to data center operators, utilities, and grid system operators.”

This schematic illustrates a data center cooling system using Cold UTES. Image by Dominique Barnes, NREL
NREL is leading the project’s system analysis and grid impact work. Zhu is also joined by partners at Lawrence Berkeley National Laboratory, Princeton University, and the University of Chicago to illustrate how Cold UTES is commercially attractive and technically viable for large data center cooling loads.

“The idea of Cold UTES is super exciting because it's a novel player in the space of data center energy management and cooling,” said Andrew Chien, a professor of computer science at the University of Chicago. “I can't think of another technology focused on storing cold with new opportunities to make data centers more efficient.”

Ultimately, the project hopes to reduce strain on the grid from data centers, reduce the energy cost to data centers, and reduce the cost of data center cooling systems. The ability of Cold UTES to efficiently deliver seasonal storage could also help reduce seasonal curtailments of wind and solar generating facilities. Cold UTES promises to lower costs for the fast-growing data center market, improve grid resiliency during extreme weather events, and help reduce costs and improve reliability for all grid customers.

“This project will help accelerate the development, commercialization, and use of next-generation geothermal energy storage technologies,” Winick said, “thereby establishing American global leadership in energy storage.”



From: S. maltophilia, 1/30/2025 6:10:44 PM
   of 1926
 
DeepSeek's chatbot achieves 17% accuracy, trails Western rivals in NewsGuard audit

reuters.com



From: S. maltophilia, 1/31/2025 1:04:49 PM
   of 1926
 
Now in College, Luddite Teens Still Don’t Want Your Likes
Three years after starting a club meant to fight social media’s grip on young people, many original members are holding firm and gaining new converts.

nytimes.com



From: Don Green, 2/7/2025 3:58:22 PM
   of 1926
 
‘Godfather of AI’ sounds alarm on Google weapons plan

The tech giant’s Nobel Prize-winning former executive says it is putting profits above safety



The “godfather of AI” who pioneered Google’s work in artificial intelligence (AI) has accused the company of putting profits over safety after it dropped a commitment to not using the technology in weapons.

Geoffrey Hinton, the British computer scientist who last year won the Nobel Prize in Physics for his work in AI, said the tech giant’s decision to backtrack on its previous pledge was a “sad example” of companies ignoring concerns about AI.

He said: “It is another sad example of how companies behave when there is a conflict between safety and profits.”

This week, Google removed a longstanding pledge not to use AI to develop weapons capable of harming people from a list of its company principles.

The company said that free countries needed to use the technology for national security purposes, citing an “increasingly complex geopolitical landscape”.

Mr Hinton’s comments are the sharpest criticism he has levelled at Google since he quit the company two years ago over fears the technology could not be controlled.

In 2012, he and two students at the University of Toronto developed the neural network technology that has become the foundation for how modern AI systems are built.

He joined Google the following year after the tech giant acquired his start-up and helped advance the company’s work in AI, leading to developments that have paved the way for chatbots such as ChatGPT and Google’s Gemini.

He left in 2023 saying he wanted to be free to criticise it and other companies when they made reckless decisions about AI.

Mr Hinton said at the time that part of him regretted his life’s work and he was worried about the “existential risk of what happens when these things get more intelligent than us”.

Announcing the decision on Tuesday, James Manyika, the senior vice-president at Google-Alphabet, and Sir Demis Hassabis, the chief executive of the Google DeepMind AI lab, wrote: “We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights.”

Stuart Russell, the British computer scientist, said the decision by Sir Demis was “upsetting” and “distressing”. Much of Google’s AI technology has been developed in the UK in its DeepMind lab.



He said there was a risk that companies could ultimately create “very cheap weapons of mass destruction that are easy to proliferate” using AI that “do not need human supervision”. “Why is Google contributing to this?” he asked.

He said that unlike developing an AI superintelligence, an AI weapon could be “dumb”. He said: “You don’t have to be that smart to kill people”. He said it would be quite possible to develop software that targets certain groups of people – and to give an AI control over that decision-making.

Meanwhile, Google staff members called on their colleagues to push back on the about-turn. A Google source, who works with the No Tech for Apartheid campaign, said: “It is disturbing to catch Google in this bait and switch, considering tens of thousands of workers would have never chosen to work for a military contractor.”

They added it was “a betrayal of Google workers’ trust” and intended to please the Trump administration.

The source said: “This decision by executives to double down on the militarisation of the company will no doubt lead to more worker organising. As Google workers, it is our moral and ethical responsibility to the world to resist this.”



From: Don Green, 2/9/2025 2:42:53 PM
1 Recommendation   of 1926
 
Yesterday I received an email from Microsoft about a price change coming for Office 365, and I decided to investigate it. Here is what I discovered.

The issue is that I wanted to buy a renewal before the price change but was told I couldn't, even though the email says the price changes on 14 Feb. But I discovered a way around this price increase.




From: S. maltophilia, 2/13/2025 5:11:17 PM
1 Recommendation   of 1926
 
When the BBC tested four generative AI tools on articles on its own site, it found many “significant issues” and factual errors, the company said in a report released Tuesday.

The BBC gave four AI assistants — OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity — “access to our website for the duration of the research and asked them questions about the news, prompting them to use BBC News articles as sources where possible. AI answers were reviewed by BBC journalists, all experts in the question topics, on criteria including accuracy, impartiality, and how they represented BBC content,” Pete Archer, the BBC’s program director for Generative AI, wrote.

The AI assistants’ answers contained “significant inaccuracies and distorted.....

niemanlab.org



From: Don Green, 2/16/2025 9:58:22 AM
   of 1926
 
This New Algorithm for Sorting Books or Files Is Close to Perfection
The library sorting problem is used across computer science for organizing far more than just books. A new solution is less than a page-width away from the theoretical ideal.
By Steve Nadis, Feb 16, 2025 7:00 AM

Video: Kristina Armitage/Quanta Magazine


The original version of this story appeared in Quanta Magazine.

Computer scientists often deal with abstract problems that are hard to comprehend, but an exciting new algorithm matters to anyone who owns books and at least one shelf. The algorithm addresses something called the library sorting problem (more formally, the “list labeling” problem). The challenge is to devise a strategy for organizing books in some kind of sorted order—alphabetically, for instance—that minimizes how long it takes to place a new book on the shelf.

Imagine, for example, that you keep your books clumped together, leaving empty space on the far right of the shelf. Then, if you add a book by Isabel Allende to your collection, you might have to move every book on the shelf to make room for it. That would be a time-consuming operation. And if you then get a book by Douglas Adams, you’ll have to do it all over again. A better arrangement would leave unoccupied spaces distributed throughout the shelf—but how, exactly, should they be distributed?
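
A toy sketch (not from the paper) makes the cost difference concrete: inserting into a fully packed sorted shelf shifts every book to the right of the insertion point, while a shelf that keeps empty slots scattered through it only needs to slide books as far as the nearest gap.

import bisect

def insert_packed(shelf, book):
    """Fully packed sorted list: every book right of the insertion point must move."""
    i = bisect.bisect_left(shelf, book)
    shelf.insert(i, book)
    return len(shelf) - 1 - i          # number of books that shifted

def insert_with_gaps(shelf, book):
    """Fixed-size shelf where None marks an empty slot and non-None items are sorted.
    Slides books only as far as the nearest gap to the right (toy code: assumes
    such a gap exists; a real algorithm rebalances when a region fills up)."""
    pos = 0                            # slot just after the last book that precedes `book`
    for i, b in enumerate(shelf):
        if b is not None:
            if b < book:
                pos = i + 1
            else:
                break
    gap = pos                          # nearest empty slot at or after pos
    while gap < len(shelf) and shelf[gap] is not None:
        gap += 1
    if gap == len(shelf):
        raise RuntimeError("no gap to the right; time to redistribute")
    shelf[pos + 1 : gap + 1] = shelf[pos:gap]   # slide the run one slot right
    shelf[pos] = book
    return gap - pos                   # number of books that shifted

packed = ["Adams", "Allende", "Borges", "Calvino"]
print(insert_packed(packed, "Atwood"))        # 2: Borges and Calvino shift

gapped = ["Adams", None, "Allende", None, "Borges", None, "Calvino", None]
print(insert_with_gaps(gapped, "Atwood"))     # 0: it drops into a nearby gap

The rest of the article is about exactly where to leave those empty slots so the slides stay short no matter which books arrive next.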

This problem was introduced in a 1981 paper, and it goes beyond simply providing librarians with organizational guidance. That’s because the problem also applies to the arrangement of files on hard drives and in databases, where the items to be arranged could number in the billions. An inefficient system means significant wait times and major computational expense. Researchers have invented some efficient methods for storing items, but they’ve long wanted to determine the best possible way.

Last year, in a study that was presented at the Foundations of Computer Science conference in Chicago, a team of seven researchers described a way to organize items that comes tantalizingly close to the theoretical ideal. The new approach combines a little knowledge of the bookshelf’s past contents with the surprising power of randomness.

“It’s a very important problem,” said Seth Pettie, a computer scientist at the University of Michigan, because many of the data structures we rely upon today store information sequentially. He called the new work “extremely inspired [and] easily one of my top three favorite papers of the year.”

Narrowing Bounds

So how does one measure a well-sorted bookshelf? A common way is to see how long it takes to insert an individual item. Naturally, that depends on how many items there are in the first place, a value typically denoted by n. In the Isabel Allende example, when all the books have to move to accommodate a new one, the time it takes is proportional to n. The bigger the n, the longer it takes. That makes this an “upper bound” to the problem: It will never take longer than a time proportional to n to add one book to the shelf.

The authors of the 1981 paper that ushered in this problem wanted to know if it was possible to design an algorithm with an average insertion time much less than n. And indeed, they proved that one could do better. They created an algorithm that was guaranteed to achieve an average insertion time proportional to (log n)^2. This algorithm had two properties: It was “deterministic,” meaning that its decisions did not depend on any randomness, and it was also “smooth,” meaning that the books must be spread evenly within subsections of the shelf where insertions (or deletions) are made. The authors left open the question of whether the upper bound could be improved even further. For over four decades, no one managed to do so.
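
One standard way to picture a smooth, deterministic strategy like that is in terms of density: when the slots around an insertion get too crowded, take the smallest surrounding window whose fill ratio is still acceptable and spread its books evenly across it. Here is a simplified sketch of that rebalancing step; the doubling windows and the 0.75 density threshold are illustrative choices, not the 1981 paper's exact parameters.

def rebalance(shelf, center, max_density=0.75):
    """Spread books evenly over the smallest window around `center` whose fill
    ratio is at most `max_density` (None marks an empty slot). A simplified
    illustration of a smooth, deterministic redistribution step."""
    n = len(shelf)
    window = 2
    while window <= n:
        lo = max(0, center - window // 2)
        hi = min(n, lo + window)
        books = [b for b in shelf[lo:hi] if b is not None]
        if len(books) / (hi - lo) <= max_density:
            shelf[lo:hi] = [None] * (hi - lo)       # clear the window...
            step = (hi - lo) / len(books) if books else 1.0
            for k, b in enumerate(books):
                shelf[lo + int(k * step)] = b       # ...and re-space its books evenly
            return
        window *= 2                                  # crowded: try a wider window
    raise RuntimeError("shelf too full; a real implementation would grow it")

Because the window choice depends only on the current contents (no randomness) and the re-spacing is even, this captures both the “deterministic” and “smooth” properties that the lower bounds discussed next apply to.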

However, the intervening years did see improvements to the lower bound. While the upper bound specifies the maximum possible time needed to insert a book, the lower bound gives the fastest possible insertion time. To find a definitive solution to a problem, researchers strive to narrow the gap between the upper and lower bounds, ideally until they coincide. When that happens, the algorithm is deemed optimal—inexorably bounded from above and below, leaving no room for further refinement.

In 2004, a team of researchers found that the best any algorithm could do for the library sorting problem—in other words, the ultimate lower bound—was log n. This result pertained to the most general version of the problem, applying to any algorithm of any type. Two of the same authors had already secured a result for a more specific version of the problem in 1990, showing that for any smooth algorithm, the lower bound is significantly higher: (log n)^2. And in 2012, another team proved the same lower bound, (log n)^2, for any deterministic algorithm that does not use randomness at all.

These results showed that for any smooth or deterministic algorithm, you could not achieve an average insertion time better than (log n)^2, which was the same as the upper bound established in the 1981 paper. In other words, to improve that upper bound, researchers would need to devise a different kind of algorithm. “If you’re going to do better, you have to be randomized and non-smooth,” said Michael Bender, a computer scientist at Stony Brook University.

Michael Bender went after the library sorting problem using an approach that didn’t necessarily make intuitive sense.

Photograph: Courtesy of Michael Bender
But getting rid of smoothness, which requires items to be spread apart more or less evenly, seemed like a mistake. (Remember the problems that arose from our initial example—the non-smooth configuration where all the books were clumped together on the left-hand side of the shelf.) And it also was not obvious how leaving things to random chance—essentially a coin toss—would help matters. “Intuitively, it wasn’t clear that was a direction that made sense,” Bender said.

Nevertheless, in 2022, Bender and five colleagues decided to try out a randomized, non-smooth algorithm anyway, just to see whether it might offer any advantages.

A Secret History

Ironically, progress came from another restriction. There are sound privacy or security reasons why you may want to use an algorithm that’s blind to the history of the bookshelf. “If I had 50 Shades of Grey on my bookshelf and took it off,” said William Kuszmaul of Carnegie Mellon University, “nobody would be able to tell.”

In a 2022 paper, Bender, Kuszmaul, and four coauthors created just such an algorithm—one that was “history independent,” non-smooth, and randomized—which finally reduced the 1981 upper bound, bringing the average insertion time down to (log n)^1.5.

Kuszmaul remembers being surprised that a tool normally used to ensure privacy could confer other benefits. “It’s as if you used cryptography to make your algorithm faster,” he said. “Which just seems kind of strange.”

Helen Xu of the Georgia Institute of Technology, who was not part of this research team, was also impressed. She said that the idea of using history independence for reasons other than security may have implications for many other types of problems.

Closing the Gap

Bender, Kuszmaul, and others made an even bigger improvement with last year’s paper. They again broke the record, lowering the upper bound to (log n) times (log log n)^3—equivalent to (log n)^(1.000…1). In other words, they came exceedingly close to the theoretical limit, the ultimate lower bound of log n.

Once again, their approach was non-smooth and randomized, but this time their algorithm relied on a limited degree of history dependence. It looked at past trends to plan for future events, but only up to a point. Suppose, for instance, you’ve been getting a lot of books by authors whose last name starts with N—Nabokov, Neruda, Ng. The algorithm extrapolates from that and assumes more are probably coming, so it’ll leave a little extra space in the N section. But reserving too much space could lead to trouble if a bunch of A-name authors start pouring in. “The way we made it a good thing was by being strategically random about how much history to look at when we make our decisions,” Bender said.
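
As a loose toy illustration of that idea (a toy, not the authors' actual algorithm), one could keep a log of recent insertions, look back over a randomly chosen slice of it, and hand out extra gap budget to the regions of the shelf that have been hit most often:

import random
from collections import Counter

def gap_budget(insert_log, regions, total_gaps, rng=random.Random(0)):
    """Toy history-guided gap allocation: look back over a randomly chosen
    suffix of the insertion log and give each region extra empty slots in
    proportion to its recent share of insertions. Illustrative only."""
    lookback = rng.randint(1, max(1, len(insert_log)))   # randomized history window
    recent = Counter(insert_log[-lookback:])
    baseline = total_gaps // (2 * len(regions))          # half the budget spread evenly
    remaining = total_gaps - baseline * len(regions)
    hits = sum(recent.get(r, 0) for r in regions) or 1
    return {r: baseline + remaining * recent.get(r, 0) // hits for r in regions}

# Usage: a recent run of N-authors earns the N section a bigger share of the gaps.
log = list("AABNNNNNNNN")
print(gap_budget(log, regions=list("ABN"), total_gaps=30))

The paper's actual scheme is considerably more subtle about how much history to trust and how to randomize it, but the randomly sized lookback is the "strategically random" ingredient Bender describes.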

The result built on and transformed their previous work. It “uses randomness in a completely different way than the 2022 paper,” Pettie said.

These papers collectively represent “a significant improvement” on the theory side, said Brian Wheatman, a computer scientist at the University of Chicago. “And on the applied side, I think they have the potential for a big improvement as well.”

Xu agrees. “In the past few years, there’s been interest in using data structures based on list labeling for storing and processing dynamic graphs,” she said. These advances would almost certainly make things faster.

Meanwhile, there’s more for theorists to contemplate. “We know that we can almost do log n,” Bender said, “[but] there’s still this tiny gap”—the diminutive log log n term that stands in the way of a complete solution. “We don’t know if the right thing to do is to lower the upper bound or raise the lower bound.”

Pettie, for one, doesn’t expect the lower bound to change. “Usually in these situations, when you see a gap this close, and one of the bounds looks quite natural and the other looks unnatural, then the natural one is the right answer,” he said. It’s much more likely that any future improvements will affect the upper bound, bringing it all the way down to log n. “But the world’s full of weird surprises.”

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
