
Technology Stocks: Artificial Intelligence, Robotics, Chat bots - ChatGPT


From: Ron 5/18/2024 8:42:26 PM
OpenAI’s Long-Term AI Risk Team Has Disbanded

The entire OpenAI team focused on the existential dangers of AI has either resigned or been absorbed into other research groups, WIRED has confirmed.

In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI’s chief scientist and one of the company’s cofounders, was named as the colead of this new team. OpenAI said the team would receive 20 percent of its computing power.

Now OpenAI’s “superalignment team” is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday’s news that Sutskever was leaving the company, and the resignation of the team’s other colead. The group’s work will be absorbed into OpenAI’s other research efforts.

Sutskever’s departure made headlines because although he’d helped CEO Sam Altman start OpenAI in 2015 and set the direction of the research that led to ChatGPT, he was also one of the four board members who fired Altman in November. Altman was restored as CEO five chaotic days later after a mass revolt by OpenAI staff and the brokering of a deal in which Sutskever and two other company directors left the board.

Hours after Sutskever’s departure was announced on Tuesday, Jan Leike, the former DeepMind researcher who was the superalignment team’s other colead, posted on X that he had resigned.

Neither Sutskever nor Leike responded to requests for comment. Sutskever did not offer an explanation for his decision to leave but offered support for OpenAI’s current path in a post on X. “The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial” under its current leadership, he wrote.

Leike posted a thread on X on Friday explaining that his decision stemmed from a disagreement over the company’s priorities and the resources his team was being allocated.

“I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point,” Leike wrote. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.”

OpenAI declined to comment on the departures of Sutskever or other members of the superalignment team, or the future of its work on long-term AI risks. Research on the risks associated with more powerful models will now be led by John Schulman, who coleads the team responsible for fine-tuning AI models after training.

The superalignment team was not the only team pondering the question of how to keep AI under control, although it was publicly positioned as the main one working on the most far-off version of that problem. The blog post announcing the superalignment team last summer stated: “Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”

OpenAI’s charter binds it to developing so-called artificial general intelligence, or technology that rivals or exceeds humans, safely and for the benefit of humanity. Sutskever and other leaders there have often spoken about the need to proceed cautiously. But OpenAI has also been early to develop experimental AI projects and release them to the public.

OpenAI was once unusual among prominent AI labs for the eagerness with which research leaders like Sutskever talked of creating superhuman AI and of the potential for such technology to turn on humanity. That kind of doomy AI talk became much more widespread last year, after ChatGPT turned OpenAI into the most prominent and closely watched technology company on the planet. As researchers and policymakers wrestled with the implications of ChatGPT and the prospect of vastly more capable AI, it became less controversial to worry about AI harming humans or humanity as a whole.

The existential angst has since cooled—and AI has yet to make another massive leap—but the need for AI regulation remains a hot topic. And this week OpenAI showcased a new version of ChatGPT that could once again change people’s relationship with the technology in powerful and perhaps problematic new ways.

The departures of Sutskever and Leike come shortly after OpenAI’s latest big reveal—a new “multimodal” AI model called GPT-4o that allows ChatGPT to see the world and converse in a more natural and humanlike way. A livestreamed demonstration showed the new version of ChatGPT mimicking human emotions and even attempting to flirt with users. OpenAI has said it will make the new interface available to paid users within a couple of weeks.

wired.com



To: Ron who wrote (4618) 5/19/2024 1:32:51 PM
From: Triffin
AI Risk Team

Some thoughts from the folks who are tasked with building LLMs:

lesswrong.com

and this

lesswrong.com

Triff ..



From: Frank Sully 5/20/2024 6:30:24 PM
WOW! Kraken Robotics +9.8%. No news I'm aware of.



From: Ron 5/20/2024 8:58:56 PM
Scarlett Johansson says lawyers got OpenAI to shut down "Her" voice

axios.com



From: Ron 5/20/2024 10:42:10 PM
Microsoft announces new PCs with AI chips from Qualcomm
  • Microsoft will bring out new Surface PCs that adhere to its Copilot+ standard for running artificial intelligence models.
  • These PCs and others use Arm-based Qualcomm chips to deliver longer battery life; PCs with AMD and Intel chips will also become available.

cnbc.com



To: Ron who wrote (4622) 5/21/2024 12:21:22 PM
From: Jeff Hayden
Has Qualcomm shown a diagram/layout for their AI chips?



To: Jeff Hayden who wrote (4623) 5/21/2024 12:33:53 PM
From: Ron
I asked Microsoft's Copilot that question. It replied with a lengthy description of Qualcomm chips, but I didn't see actual diagrams. Might be worth checking later.

copilot.microsoft.com



To: Ron who wrote (4624) 5/21/2024 12:38:37 PM
From: Jeff Hayden
I'm just wondering if Qualcomm's chip is similar to the Apple M series, with neural processors and unified memory, and how the CPU/GPU counts compare.



From: Ron 5/22/2024 9:15:22 PM
Earnings: Nvidia came in strong again
marketwatch.com



From: Ron 5/23/2024 8:32:18 AM
Colorado the First State to Move Ahead With Attempt to Regulate AI’s Role in American Life
dnyuz.com
