
Technology Stocks » Disruption Innovation


From: Frank Sully, 9/13/2022 10:15:04 AM
 
The Countdown Begins! Humans Have 30 Years to be Overrun by AI

by Jayanti

September 13, 2022



Super-powered AI

Humans risk being overrun by artificial superintelligence in 30 years

While AI experts disagree on many things, they all agree on one point: AI and ML technology is going to have enormous effects on society and business. Google CEO Sundar Pichai has said that artificial intelligence is “one of the most important things humanity is working on,” more profound than our development of electricity or fire.

AI is when we provide machines (software and hardware) with human-like abilities; it means we give machines the ability to mimic human intelligence. Machine learning (ML) means machines are trained to see, hear, speak, move, and make decisions. The difference between artificial intelligence and traditional technology is that AI can make predictions and learn on its own. Humans configure an AI system to achieve a goal; it is then trained on data so it learns how best to achieve that goal. Once it has learned well enough, we turn it loose on fresh data, which it can use to pursue that goal without direct instruction from a human. AI does all this by making predictions: it analyzes data, then uses that data to make (nearly) accurate predictions. The benefit is that AI can perform the same tasks as humans, but much faster and with fewer mistakes.

There are generally three forms of modern AI: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI). More advanced artificial intelligence and machine learning offer many potential benefits; the medical field, for example, would benefit widely from robotic doctors as proficient as humans.
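To make the configure-train-predict loop described above concrete, here is a minimal sketch in Python using scikit-learn; the synthetic dataset and the choice of model are illustrative assumptions, not anything from the article.

```python
# Minimal sketch of the configure-train-predict loop described above.
# The synthetic dataset and model choice are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Step 1: humans configure the system toward a goal (here, classification).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_new, y_train, y_new = train_test_split(X, y, test_size=0.2, random_state=0)

# Step 2: the model is trained on data so it learns how to achieve the goal.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Step 3: turned loose on fresh data, it makes predictions without direct instruction.
predictions = model.predict(X_new)
print(f"Accuracy on unseen data: {model.score(X_new, y_new):.2%}")
```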

But, as every coin has two faces, there are risks involved in AI and ML. Namely, if an AI program infinitely smarter than us became malevolent, there would be virtually no way to stop it. While it could provide societal benefits, a malevolent Artificial Super Intelligence (ASI) program has the potential to end mankind and should not be created or developed. AI researchers and technology executives like Elon Musk have openly expressed deep concern about human extinction caused by machine intelligence. Other experts believe “a machine with human-level intelligence could be generated in the coming 30 years and could represent a threat to life on Earth.”

Present and future threats

According to Dr. Lewis Liu, CEO of the AI-driven company Eigen Technologies, some artificial intelligence models have already “gone dark.” “Even the ‘dumb,’ non-conscious models we have present ethical issues around inclusion,” Dr. Liu told The US Sun. “That kind of mischief stuff is already happening today.”

Research from Johns Hopkins University suggests that artificial intelligence algorithms tend to show biases that could discriminate against people of color and women. The American Civil Liberties Union has also expressed concern that AI could “deepen racial inequality” as high-stakes selection processes like hiring and housing are automated. “General AI or AI superintelligence is just going to perform at a much broader scale, [with] larger propagation of these problems,” Dr. Liu warned.

An all-out, Terminator-style war of man versus machine does not seem an impossibility either. A poll cited in futurist Nick Bostrom’s book Superintelligence found that almost 10% of experts believe a computer with human-level intelligence would pose an existential threat to humanity. A common misconception about AI is that it is confined to a black box that can simply be unplugged if it tries to hurt us. Some experts argue that threat planning should take sentient artificial intelligence into account, because we do not know when it will come online or how it will react to humans.

Preventing Judgement Day

Dr. Liu warned that “it’s going to be a pretty s***y world” if we achieve artificial superintelligence under the existing lax style of technology regulation. He called for the development of oversight in which the data that powers AI models is scoured for bias. If the data training a model is sourced from the public, then programmers should have to obtain users’ consent to use it. Regulation in the US falls short of emphasizing “a human check on the outputs,” but recent developments in China have begun to highlight keeping artificial intelligence under human control.

analyticsinsight.net



From: Frank Sully, 9/13/2022 12:49:40 PM
 
Truck Platooning Market to Hit USD 13 Billion by 2030: Global Market Insights Inc.

Global Market Insights Inc.

Sep 13, 2022

Major truck platooning market players include AB Volvo, Bendix Commercial Vehicle Systems LLC, Continental AG, Daimler AG, Hino Motors, Ltd., IVECO S.p.A, NVIDIA Corporation, Omnitracs, Peloton Technology, ZF Friedrichshafen AG, and others.

SELBYVILLE, Del., Sept. 13, 2022 /PRNewswire/ -- The truck platooning market is expected to record a valuation of USD 13 billion by 2030, according to the latest research study by Global Market Insights Inc.

Growing demand for advanced automotive solutions that reduce operating costs and fuel consumption will boost industry trends. Platooning systems use autonomous driving support and networking technologies to operate two or more commercial vehicles together as a convoy, maintaining a predefined distance between vehicles on highways.



Truck Platooning Market

The technology has gained recognition among vehicle manufacturers and transportation service providers seeking new cost-effective solutions. With trucks following as close as 12 meters apart, platooning enables more effective airflow around the fleet, thereby reducing overall fuel consumption.

Increasing dependence on next-generation technology is a key factor that could restrain truck platooning market growth. Truck platooning heavily relies on intelligent transportation, which consists of IoT-connected apps, smart mobility, and a fully equipped infrastructure of transportation networks. Current road systems in most countries fail to support the construction of potential routes that can accommodate long-haul truck platoons.

The fully autonomous segment in the truck platooning market is poised to witness a more than 35% growth rate through 2030. This advanced platooning technology involves the integration of a wide range of sensors and complex communication systems which together form a fully autonomous convoy of trucks that can effectively communicate and control vehicles. The technology enables smooth operation of larger fleets, whilst ensuring higher fuel and operational efficiency.

The vehicle-to-vehicle (V2V) communication technology segment is anticipated to reach USD 5 billion by 2030. With the growing use of adaptive cruise control systems, which adjust truck speed autonomously, V2V communication technology is likely to gain momentum in the coming years. This form of communication allows vehicles to share messages carrying information on speed and traffic conditions.
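As a rough illustration of how a broadcast speed-and-position message could help a follower truck hold the roughly 12-meter spacing mentioned earlier, here is a toy sketch; the message fields and controller gain are invented for illustration and do not reflect any vendor's actual protocol.

```python
# Toy sketch of V2V-assisted gap keeping in a platoon. The message fields
# and controller gain are illustrative assumptions, not a real protocol.
from dataclasses import dataclass

@dataclass
class V2VStatus:          # simplified status message broadcast by the leader
    position_m: float     # distance along the road, in meters
    speed_mps: float      # current speed, in meters per second

def follower_speed(msg: V2VStatus, my_position_m: float,
                   target_gap_m: float = 12.0, kp: float = 0.5) -> float:
    """Proportional controller: match the leader's speed, nudged to close
    or open the gap toward the 12 m target."""
    gap = msg.position_m - my_position_m
    return max(0.0, msg.speed_mps + kp * (gap - target_gap_m))

# Leader at 1000 m doing 25 m/s; a follower trailing by 15 m speeds up slightly.
leader = V2VStatus(position_m=1000.0, speed_mps=25.0)
print(follower_speed(leader, my_position_m=985.0))  # -> 26.5 m/s
```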

In 2021, the truck platooning market from the blind spot warning (BSW) segment surpassed USD 125 million. There is a high demand for BSW systems in customized luxury vehicles and heavy-duty trucks. These systems monitor the areas along the full length of the truck trailer and can detect vehicles in the blind spot and show a warning in the side-view mirror. Industry players are increasingly focusing on technological development and innovation of blind spot detection systems. The technology can be used to capture the rear view of large commercial vehicles to mitigate collisions while reversing.

The Latin America truck platooning market is projected to reach USD 490 million by 2030. The region has a solid footprint of automotive component manufacturers that are signing long-term partnerships with original equipment manufacturers (OEMs). Stringent regulations and frameworks have been imposed on the automotive and freight industries in recent years, and the growing focus on enhancing consumer safety will benefit truck platooning solution providers in the region.

Key companies operating in the truck platooning market are Daimler AG, Hino Motors, Ltd., IVECO S.p.A, NVIDIA Corporation, Omnitracs, AB Volvo, Bendix Commercial Vehicle Systems LLC, Continental AG, Peloton Technology, Robert Bosch GmbH, Scania, TomTom International BV., TuSimple, and ZF Friedrichshafen AG. These leaders are seeking to add advanced features to enhance product portfolio and drive innovation.

prnewswire.com



From: Frank Sully, 9/13/2022 1:11:31 PM
 
Amazon Is Using ‘Acquisition’ Of Robotic Companies for Autonomous Growth

by Market Trends

September 13, 2022



Amazon is attempting to scale up with initiatives such as salary hikes, mergers and acquisitions, and partnerships with third parties.

Amazon, the eCommerce giant and warehouse company, is no stranger to acquiring robotics companies. Starting with the acquisition of Kiva Systems in 2012 for $775 million, it has come a long way, most recently acquiring iRobot for a whopping $1.7 billion. It has also acquired Cloostermans, a Belgian robotics company whose services it had been using since 2019 for eCommerce operations and for scaling up its R&D and deployment operations. “We’re thrilled to be joining the Amazon family and extending the impact we can have at a global scale. Amazon has raised the bar for how supply chain technologies can benefit employees and customers, and we’re looking forward to being part of the next chapter of this innovation,” said Frederik Berckmoes-Joos, CEO of Cloostermans, in a statement for a blog post published by Amazon. Around 200 Cloostermans employees will join Amazon’s workforce. Without disclosing the financial terms of the deal, Amazon revealed that the acquired Belgian company’s operations will continue from its base in Hamme after the deal concludes.

Prima facie, this might seem like a normal merger-and-acquisition move, probably for very valid reasons, but reports point out that Amazon has been ramping up robotics acquisitions to meet the demands of its ever-expanding business operations. A leaked internal memo, reported by Recode, warns of a severe staff shortage that may put Amazon’s service quality, reputation, and growth plans at risk. Over the last decade, Amazon has been expanding its operations, including in Europe, while ramping up hiring all over the world. Experts are of the opinion that Amazon’s workplace culture focused on “customer obsession,” which made Amazon a model of convenience the world had not seen before, is responsible for the surge in expansion and hence the acquisitions.

Cloostermans, going by its antecedents, strikes one as a well-established company. Set up in 1884, it was largely a privately run business held by the same family for six generations. Given that Amazon is one of Cloostermans’ biggest customers, the deal likely went through seamlessly. Amazon appears to be attempting to scale up its operations using a mixed bag of initiatives, including salary hikes, mergers and acquisitions, and partnerships with third parties; Cloostermans was one such partner, providing robots for packaging and moving operations. “We have more than 520,000 robotic drive units, and have added over a million jobs, worldwide. We have more than a dozen other types of robotic systems in our facilities around the world, including sort centers and air hubs. From the early days of the Kiva acquisition, our vision was never tied to a binary decision of people or technology. Instead, it was about people and technology working safely and harmoniously together to deliver for our customers. That vision remains today,” Amazon asserts in one of its blog posts. Clearly, Amazon is turning adversity into an opportunity to not only expand its operations but become self-reliant, as reflected in its statement to TechCrunch when it acquired Canvas Technology in 2019: “We have a vision for a future where people work alongside robotics to further improve safety and the workplace experience.”

analyticsinsight.net



From: Frank Sully, 9/13/2022 1:26:20 PM (1 Recommendation)
 
Google Gave Its Helper Robots AI Language Skills to Better Work With Humans

Edd Gent



People have been dreaming of robot butlers for decades, but one of the biggest barriers has been getting machines to understand our instructions. Google has started to close the gap by marrying the latest language AI with state-of-the-art robots.

Human language is often ambiguous. How we talk about things is highly context-dependent, and it typically requires an innate understanding of how the world works to decipher what we’re talking about. So while robots can be trained to carry out actions on our behalf, conveying our intentions to them can be tricky.

If they have any ability to understand language at all, robots are typically designed to respond to short, specific instructions. More opaque directions like “I need something to wash these chips down” are likely to go over their heads, as are complicated multi-step requests like “Can you put this apple back in the fridge and fetch the chocolate?”

In contrast, a new breed of massive language models inspired by OpenAI’s groundbreaking GPT-3 is capable of some impressive linguistic feats. By training on enormous amounts of written material scraped from the web, these AI systems are able to generate high-quality prose, power convincing chatbots, and answer complicated questions about text.

Google has attempted to combine the two in a new project aimed at boosting robots’ ability to understand us. By combining its PaLM large language model with robots made by Everyday Robots, a spinoff from Alphabet’s “moonshot factory,” X, they’ve built prototype mechanized butlers that can do a human’s bidding around the house.

The robots, which roll around on wheels and feature a single robotic arm and a sensor-packed head, were first trained to carry out a variety of basic actions by human operators who remotely controlled them through a series of tasks.

Engineers then created new control software that taps into PaLM’s language skills to translate spoken or written commands from a human into the actions required to achieve them. The software takes advantage of an approach called “chain of thought prompting” that Google unveiled earlier this year, which enables models to break down problems into a series of intermediate steps.

It uses this to divide requests into smaller sub-problems that it can solve with its pre-trained suite of actions. For instance, “get me a Coke” might be converted into “go to the kitchen, open the fridge, pick up a Coke, and return to the living room.”
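Google has not published this pipeline as a simple API, but the decomposition pattern can be sketched as follows. The skill list and the lookup-table "planner" below are hypothetical stand-ins: in the real system, the PaLM language model itself scores which pre-trained skill should run next, conditioned on the request and the steps taken so far.

```python
# Hypothetical sketch of decomposing a request into pre-trained robot skills.
# The skill names and the lookup-table planner are illustrative; the real
# system scores candidate skills with the PaLM language model instead.
SKILLS = ["go to the kitchen", "open the fridge", "pick up a coke",
          "return to the living room", "done"]

# Stand-in for the LLM: given the steps so far, pick the next skill.
# In Google's setup this choice would come from the model, conditioned
# on the user's request as well.
PLAN = {
    (): "go to the kitchen",
    ("go to the kitchen",): "open the fridge",
    ("go to the kitchen", "open the fridge"): "pick up a coke",
    ("go to the kitchen", "open the fridge", "pick up a coke"): "return to the living room",
}

def decompose(request: str, max_steps: int = 10) -> list[str]:
    steps: list[str] = []
    for _ in range(max_steps):
        next_skill = PLAN.get(tuple(steps), "done")  # LLM call in the real system
        if next_skill == "done":
            break
        steps.append(next_skill)
    return steps

print(decompose("get me a coke"))
# ['go to the kitchen', 'open the fridge', 'pick up a coke', 'return to the living room']
```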

The robots were given 101 instructions by human users and were able to come up with a sensible response 84 percent of the time, and actually pull them off seamlessly 74 percent of the time.

That represented a 14 percent and 13 percent improvement, respectively, when compared to robots using a less powerful language model than PaLM, Google’s head of robotics Vincent Vanhoucke said in a blog post. The robots powered by PaLM also saw a 26 percent boost in their ability to carry out complicated multi-step requests.

This is still very much a work in progress, though, and the robots can still be thrown off by things as simple as a change in lighting or moving objects out of their familiar positions, according to Wired. It’s not clear whether the language comprehension problem is really more pressing than actually getting robots to successfully carry out tasks in the ever-changing real world.

But the researchers hope the benefits could run in the other direction too, by giving large language models a way to interact with the physical world. While it isn’t yet clear how this project could be used to actually retrain these models, it could be one way to start grounding AI’s language skills in the real world.

So whether or not this line of research ever leads to robotic butlers becoming a reality, it seems likely to push the fields of both robotics and AI towards new and powerful capabilities.

Image Credit: Everyday Robots

singularityhub.com



From: Frank Sully, 9/14/2022 1:00:16 AM
 
Concept Designer Ben Mauro Delivers Epic 3D Trailer ‘Huxley’ This Week ‘In the NVIDIA Studio’

Learn how the sci-fi comic was transformed into a gripping trailer, enabled by close artist collaboration and NVIDIA GeForce RTX GPUs.

September 13, 2022 by GERARDO DELGADO

3D artist, concept designer and storyteller Ben Mauro has contributed to some of the world’s biggest entertainment franchises. He’s worked on movies like Elysium, Valerian and Metal Gear Solid, as well as video games such as Halo Infinite and Call of Duty: Black Ops III.

Mauro has met many inspirational artists throughout his storied career, and he collaborated with a few of them to bring Huxley to life. He called the 3D trailer a year’s worth of work, worth every minute spent — following his decade-long process of creating the comic itself.



“Huxley” introduces a vibrant, futuristic world.

In Mauro’s fantastical, fictional world, two post-apocalyptic scavengers stumble upon a forgotten treasure map in the form of an ancient sentient robot, finding themselves amidst a mystery of galactic scale.

In designing the Huxley comic, Mauro worked old-school magic with a pad and pencil, sketching characters and environments before importing visuals into Adobe Photoshop. His NVIDIA GeForce RTX 3090 GPU provided fast performance and AI features to speed up his creative workflow.



Early concept art of “Huxley.”

“What has become of me?” it thought.

The artist used Photoshop’s “Artboards” to quickly view reference artwork for inspiration, as well as “Image Size” to preserve critical details — both features accelerated by his GPU. To finish up the comic, Mauro turned to Blender software to create mockups and block out scenes with the intention of later converting back to 3D from 2D.



Camera shots were matched in Blender.

With 3D trailer production in progress, matte painter and environment artist Steve Cormann used Mauro’s Blender models as a convenient starting point, virtually a one-to-one match to the desired 3D outcome.



Advanced modeling in ZBrush.

Cormann, who specializes in Autodesk 3ds Max software, applied advanced modeling techniques in building the scene. 3ds Max has a GPU-accelerated viewport that guarantees fast and interactive 3D modeling. It also lets artists choose their preferred 3D renderer — which in Cormann’s case is Maxon’s Redshift, where combining GPU acceleration and AI-powered OptiX denoising resulted in lightning-fast final-frame rendering.



Applying textures in Adobe Substance 3D Painter.

This proved useful as Cormann exported scenes into Adobe Substance 3D Painter to apply various textures and colors. RTX-accelerated light- and ambient-occlusion features baked and optimized assets within the scenes in mere seconds, giving Cormann the option to experiment with different visual aesthetics quickly and easily.



All of the hero characters were textured from scratch by artist Antonio Esparza and team.

Enter more of Mauro’s collaborators: lead character artist Antonio Esparza and his team, who spent significant time in 3ds Max refining individual scenes and generating the staggering number of hero characters. This included uniquely texturing each of the characters and props. Esparza said his GeForce RTX 2080 SUPER GPU allowed him to modify characters and export renders dramatically faster than his previous hardware. Esparza joked that before his hardware upgrade, “most of the last hours of the day, it was me here, you know, like, waiting.” Director Sava Živkovic would say to Esparza, “Turn the lights off Antonio, we don’t want to see that progress bar.”

Meanwhile, Živkovic turned his focus to lighting in 3ds Max. His trusty GeForce RTX 2080 Ti GPU enabled RTX-accelerated AI denoising with Maxon’s Redshift, resulting in photorealistic visuals while remaining highly interactive. This let the director tweak and modify scenes freely and easily.



City scenes were brought to life using Anima, a simple crowd-simulation software with off-the-shelf character assets.

With renders and textures in a good place, rigging and modeling artist Lucas Salmon began building meshes and rigging in 3ds Max to prepare for animation. Motion capture work was then outsourced to the well-regarded Belgrade-based studio, Take One. With 54 Vicon cameras and one of the biggest capture stages in Europe, it’s no surprise the animation quality in Huxley is world class.



Visual effects were added in Adobe After Effects.

Živkovic then deployed Adobe After Effects to composite the piece. Roughly 90% of the visual effects (VFX) were accomplished with built-in tools, stock footage and various plugins. Key 3D VFX such as ship smoke trails were simulated in Blender and then added in comp. The ability to move between multiple apps quickly is a testament to the power of the RTX GPU, Živkovic said.

“I love the RTX 3090 GPU for the extra VRAM, especially for increasingly bigger scenes where I want everything to look really nice and have quality texture sizes,” he said.



Photorealistic details create an immersive experience for the trailer’s viewers.

Satisfied with the trailer, Mauro reflected on artistry. “As creatives, if we don’t see the film, game, or universe we want to experience in our entertainment, we’re in the position to create it with our hard-earned skills. I feel this is our duty as artists and creators to leave behind more imagined worlds than existed before we got there, to inspire the world and the next generation of artists and creators to push things even further than we did,” he said.



Concept designer and storyteller Ben Mauro. Access Mauro’s impressive portfolio on his website.



“Huxley” the movie is in development.

Huxley is an entire world rich in history and intrigue, currently being developed into a feature film and TV series.

Onwards and Upwards

Many of the techniques Mauro deployed can be learned by viewing free Studio Session tutorials on the NVIDIA Studio YouTube channel.



Learn core foundational warm-up exercises to inspire and ignite creative thinking, discover how to design sci-fi objects such as props, and transform 2D sketches into 3D models.

Cheers,
Frank Sully




From: Frank Sully, 9/14/2022 7:10:50 PM
 
10 years later, deep learning ‘revolution’ rages on, say AI pioneers Hinton, LeCun and Li

Sharon Goldman @sharongoldman

September 14, 2022



Artificial intelligence (AI) pioneer Geoffrey Hinton, one of the trailblazers of the deep learning “revolution” that began a decade ago, says that the rapid progress in AI will continue to accelerate.
In an interview before the 10-year anniversary of key neural network research that led to a major AI breakthrough in 2012, Hinton and other leading AI luminaries fired back at some critics who say deep learning has “hit a wall.”

“We’re going to see big advances in robotics — dexterous, agile, more compliant robots that do things more efficiently and gently like we do,” Hinton said.

Other AI pathbreakers, including Yann LeCun, head of AI and chief scientist at Meta, and Stanford University professor Fei-Fei Li, agree with Hinton that the results from the groundbreaking 2012 research on the ImageNet database, which built on previous work to unlock significant advancements in computer vision specifically and deep learning overall, pushed deep learning into the mainstream and sparked massive momentum that will be hard to stop.

In an interview with VentureBeat, LeCun said that obstacles are being cleared at an incredible and accelerating speed. “The progress over just the last four or five years has been astonishing,” he added.

And Li, who in 2006 invented ImageNet, a large-scale dataset of human-annotated photos for developing computer vision algorithms, told VentureBeat that the evolution of deep learning since 2012 has been “a phenomenal revolution that I could not have dreamed of.”

Success tends to draw critics, however. There are strong voices who call out the limitations of deep learning and say its success is extremely narrow in scope. They also maintain that the hype neural nets have created is just that: hype. In their view, deep learning is not close to being the fundamental breakthrough that some supporters claim, the groundwork that will eventually get us to the anticipated “artificial general intelligence” (AGI), where AI is truly human-like in its reasoning power.

Looking back on a booming AI decade

Gary Marcus, professor emeritus at NYU and the founder and CEO of Robust.AI, wrote this past March about deep learning “hitting a wall” and says that while there has certainly been progress, “we are fairly stuck on common sense knowledge and reasoning about the physical world.”

And Emily Bender, professor of computational linguistics at the University of Washington and a regular critic of what she calls the “deep learning bubble,” said she doesn’t think that today’s natural language processing (NLP) and computer vision models add up to “substantial steps” toward “what other people mean by AI and AGI.”

Regardless, what the critics can’t take away is that huge progress has already been made in some key applications like computer vision and language that have set thousands of companies off on a scramble to harness the power of deep learning, power that has already yielded impressive results in recommendation engines, translation software, chatbots and much more.

However, there are also serious deep learning debates that can’t be ignored. There are essential issues to be addressed around AI ethics and bias, for example, as well as questions about how AI regulation can protect the public from being discriminated against in areas such as employment, medical care and surveillance.

In 2022, as we look back on a booming AI decade, VentureBeat wanted to know the following: What lessons can we learn from the past decade of deep learning progress? And what does the future hold for this revolutionary technology that’s changing the world, for better or worse?



Geoffrey Hinton

AI pioneers knew a revolution was coming

Hinton says he always knew the deep learning “revolution” was coming.

“A bunch of us were convinced this had to be the future [of artificial intelligence],” said Hinton, whose 1986 paper popularized the backpropagation algorithm for training multilayer neural networks. “We managed to show that what we had believed all along was correct.”
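For readers unfamiliar with the algorithm, here is a minimal NumPy demonstration of backpropagation training a two-layer network on the XOR problem; the architecture, initialization and learning rate are arbitrary choices for the demo, not details from Hinton's paper.

```python
# Minimal backpropagation demo: a 2-layer network learning XOR with NumPy.
# Architecture, initialization and learning rate are chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent updates
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3).ravel())  # approaches [0, 1, 1, 0]
```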

LeCun, who pioneered the use of backpropagation and convolutional neural networks in 1989, agrees. “I had very little doubt that eventually, techniques similar to the ones we had developed in the 80s and 90s” would be adopted, he said.

What Hinton and LeCun, among others, believed was a contrarian view that deep learning architectures such as multilayered neural networks could be applied to fields such as computer vision, speech recognition, NLP and machine translation to produce results as good or better than those of human experts. Pushing back against critics who often refused to even consider their research, they maintained that algorithmic techniques such as backpropagation and convolutional neural networks were key to jumpstarting AI progress, which had stalled since a series of setbacks in the 1980s and 1990s.

Meanwhile, Li, who is also codirector of the Stanford Institute for Human-Centered AI and former chief scientist of AI and machine learning at Google, had also been confident that her hypothesis — that with the right algorithms, the ImageNet database held the key to advancing computer vision and deep learning research — was correct.

“It was a very out-of-the-box way of thinking about machine learning and a high-risk move,” she said, but “we believed scientifically that our hypothesis was right.”

However, all of these theories, developed over several decades of AI research, didn’t fully prove themselves until the autumn of 2012. That was when a breakthrough occurred that many say sparked a new deep learning revolution.

In October 2012, Alex Krizhevsky and Ilya Sutskever, along with Hinton as their Ph.D. advisor, entered the ImageNet competition, which was founded by Li to evaluate algorithms designed for large-scale object detection and image classification. The trio won with their paper ImageNet Classification with Deep Convolutional Neural Networks, which used the ImageNet database to create a pioneering neural network known as AlexNet. It proved to be far more accurate at classifying different images than anything that had come before.

The paper, which wowed the AI research community, built on earlier breakthroughs and, thanks to the ImageNet dataset and more powerful GPU hardware, directly led to the next decade’s major AI success stories — everything from Google Photos, Google Translate and Uber to Alexa, DALL-E and AlphaFold.
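AlexNet has since become a stock architecture in mainstream deep learning libraries. As a small illustrative sketch (the random tensor below is only a placeholder for a real, normalized 224×224 RGB image), the pretrained network can be loaded from torchvision and run in a few lines:

```python
# Load the pretrained AlexNet from torchvision and run one classification.
# The random tensor stands in for a real, normalized 224x224 RGB image.
import torch
from torchvision.models import alexnet, AlexNet_Weights

weights = AlexNet_Weights.DEFAULT
model = alexnet(weights=weights).eval()

image = torch.rand(1, 3, 224, 224)           # placeholder input batch
with torch.no_grad():
    logits = model(image)
top = logits.softmax(dim=1).argmax(dim=1).item()
print(weights.meta["categories"][top])       # predicted ImageNet class label
```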

Since then, investment in AI has grown exponentially: global startup funding for AI grew from $670 million in 2011 to $36 billion in 2020, then more than doubled to $77 billion in 2021.

The year neural nets went mainstream

After the 2012 ImageNet competition, media outlets quickly picked up on the deep learning trend. A New York Times article the following month, “Scientists See Promise in Deep-Learning Programs” [subscription required], said: “Using an artificial intelligence technique inspired by theories about how the brain recognizes patterns, technology companies are reporting startling gains in fields as diverse as computer vision, speech recognition and the identification of promising new molecules for designing drugs.” What is new, the article continued, “is the growing speed and accuracy of deep-learning programs, often called artificial neural networks or just ‘neural nets’ for their resemblance to the neural connections in the brain.”

AlexNet was not alone in making big deep learning news that year: In June 2012, researchers at Google’s X lab built a neural network made up of 16,000 computer processors with one billion connections that, over time, began to identify “cat-like” features until it could recognize cat videos on YouTube with a high degree of accuracy. At the same time, Jeffrey Dean and Andrew Ng were doing breakthrough work on large-scale image recognition at Google Brain. And at 2012’s IEEE Conference on Computer Vision and Pattern Recognition, researchers Dan Ciresan et al. significantly improved upon the best performance for convolutional neural networks on multiple image databases.

All told, by 2013, “pretty much all the computer vision research had switched to neural nets,” said Hinton, who since then has divided his time between Google Research and the University of Toronto. It was a nearly total AI change of heart from as recently as 2007, he added, when “it wasn’t appropriate to have two papers on deep learning at a conference.”



Fei-Fei Li

A decade of deep learning progress

Li said her intimate involvement in the deep learning breakthroughs – she personally announced the ImageNet competition winner at the 2012 conference in Florence, Italy – means it comes as no surprise that people recognize the importance of that moment.

“[ImageNet] was a vision started back in 2006 that hardly anybody supported,” said Li. But, she added, it “really paid off in such a historical, momentous way.”

Since 2012, the progress in deep learning has been both strikingly fast and impressively deep.

“There are obstacles that are being cleared at an incredible speed,” said LeCun, citing progress in natural language understanding, translation, text generation and image synthesis.

Some areas have even progressed more quickly than expected. For Hinton, that includes using neural networks in machine translation, which saw great strides in 2014. “I thought that would be many more years,” he said. And Li admitted that advances in computer vision — such as DALL-E — “have moved faster than I thought.”

Dismissing deep learning critics

However, not everyone agrees that deep learning progress has been jaw-dropping. In November 2012, Marcus wrote an article for the New Yorker [subscription required] in which he said, “To paraphrase an old parable, Hinton has built a better ladder; but a better ladder doesn’t necessarily get you to the moon.”

Today, Marcus says he doesn’t think deep learning has brought AI any closer to the “moon” — the moon being artificial general intelligence, or human-level AI — than it was a decade ago.

“Of course there’s been progress, but in order to get to the moon, you would have to solve causal understanding and natural language understanding and reasoning,” he said. “There’s not been a lot of progress on those things.”

Marcus said he believes that hybrid models that combine neural networks with symbolic artificial intelligence, the branch of AI that dominated the field before the rise of deep learning, are the way forward to combat the limits of neural networks.

For their part, both Hinton and LeCun dismiss Marcus’ criticisms.

“[Deep learning] hasn’t hit a wall – if you look at the progress recently, it’s been amazing,” said Hinton, though he has acknowledged in the past that deep learning is limited in the scope of problems it can solve.

There are “no walls being hit,” added LeCun. “I think there are obstacles to clear and solutions to those obstacles that are not entirely known,” he said. “But I don’t see progress slowing down at all … progress is accelerating, if anything.”

Still, Bender isn’t convinced. “To the extent that they’re talking about simply progress towards classifying images according to labels provided in benchmarks like ImageNet, it seems like 2012 had some qualitative breakthroughs,” she told VentureBeat by email. “If they are talking about anything grander than that, it’s all hype.”

Issues of AI bias and ethics loom large

In other ways, Bender also maintains that the field of AI and deep learning has gone too far. “I do think that the ability (compute power + effective algorithms) to process very large datasets into systems that can generate synthetic text and images has led to us getting way out over our skis in several ways,” she said. For example, “we seem to be stuck in a cycle of people ‘discovering’ that models are biased and proposing trying to debias them, despite well-established results that there is no such thing as a fully debiased dataset or model.”

In addition, she said that she would “like to see the field be held to real standards of accountability, both for empirical claims made actually being tested and for product safety – for that to happen, we will need the public at large to understand what is at stake as well as how to see through AI hype claims and we will need effective regulation.”

However, LeCun pointed out that “these are complicated, important questions that people tend to simplify,” and a lot of people “have assumptions of ill intent.” Most companies, he maintained, “actually want to do the right thing.”

In addition, he complained about those not involved in the science, technology, and research of AI.

“You have a whole ecosystem of people kind of shooting from the bleachers,” he said, “and basically are just attracting attention.”

Deep learning debates will certainly continue

As fierce as these debates can seem, Li emphasizes that they are what science is all about. “Science is not the truth, science is a journey to seek the truth,” she said. “It’s the journey to discover and to improve — so the debates, the criticisms, the celebration is all part of it.”

Yet, some of the debates and criticism strike her as “a bit contrived,” with extremes on either side, whether it’s saying AI is all wrong or that AGI is around the corner. “I think it’s a relatively popularized version of a deeper, much more subtle, more nuanced, more multidimensional scientific debate,” she said.

Certainly, Li pointed out, there have been disappointments in AI progress over the past decade – and not always about technology. “I think the most disappointing thing is back in 2014 when, together with my former student, I cofounded AI4ALL and started to bring young women, students of color and students from underserved communities into the world of AI,” she said. “We wanted to see a future that is much more diverse in the AI world.”

While it has only been eight years, she insisted the change is still too slow. “I would love to see faster, deeper changes and I don’t see enough effort in helping the pipeline, especially in the middle and high school age group,” she said. “We have already lost so many talented students.”

“I would say that other people underestimated the complexity of it,” LeCun said, adding that he doesn’t put himself in that category. “I knew it was hard and would take a long time,” he claimed. “I disagree with some people who say that we basically have it all figured out … [that] it’s just a matter of making those models bigger.”

In fact, LeCun recently published a blueprint for creating “autonomous machine intelligence” that also shows how he thinks current approaches to AI will not get us to human-level AI.

But he also still sees vast potential for the future of deep learning: What he is most personally excited about and actively working on, he says, is getting machines to learn more efficiently — more like animals and humans.

“The big question for me is what is the underlying principle on which animal learning is based — that’s one reason I’ve been advocating for things like self-supervised learning,” he said. “That progress would allow us to build things that are currently completely out of reach, like intelligent systems that can help us in our daily lives as if they were human assistants, which is something that we’re going to need because we’re all going to wear augmented reality glasses and we’re going to have to interact with them.”

Hinton agrees that there is much more deep learning progress on the way. In addition to advances in robotics, he also believes there will be another breakthrough in the basic computational infrastructure for neural nets, because “currently it’s just digital computing done with accelerators that are very good at doing matrix multipliers.” For backpropagation, he said, analog signals need to be converted to digital.

“I think we will find alternatives to backpropagation that work in analog hardware,” he said. “I’m pretty convinced that in the longer run we’ll have almost all the computation done in analog.”

Li says that what is most important for the future of deep learning is communication and education. “[At Stanford HAI], we actually spend an excessive amount of effort to educate business leaders, government, policymakers, media and reporters and journalists and just society at large, and create symposiums, conferences, workshops, issuing policy briefs, industry briefs,” she said.

With technology that is so new, she added, “I’m personally very concerned that the lack of background knowledge doesn’t help in transmitting a more nuanced and more thoughtful description of what this time is about.”

How 10 years of deep learning will be remembered

For Hinton, the past decade has offered deep learning success “beyond my wildest dreams.”

But, he emphasizes that while deep learning has made huge gains, it should be also remembered as an era of computer hardware advances. “It’s all on the back of the progress in computer hardware,” he said.

Critics like Marcus say that while some progress has been made with deep learning, it may be judged unkindly later. “I think it might be seen in hindsight as a bit of a misadventure,” he said. “I think people in 2050 will look at the systems from 2022 and be like, yeah, they were brave, but they didn’t really work.”

But Li hopes that the last decade will be remembered as the beginning of a “great digital revolution that is making all humans, not just a few humans, or segments of humans, live and work better.”

As a scientist, she added, “I will never want to think that today’s deep learning is the end of AI exploration.” And societally, she said she wants to see AI as “an incredible technological tool that’s being developed and used in the most human-centered way – it’s imperative that we recognize the profound impact of this tool and we embrace the human-centered framework of thinking and designing and deploying AI.”

After all, she pointed out: “How we’re going to be remembered depends on what we’re doing now.”

venturebeat.com



From: Julius Wong, 9/17/2022 9:04:38 PM
 
Structure-inflating construction tech could give 3D printing a run for its money




(Image gallery, 3 photos: Automatic Construction CEO Alex Bell stands atop a prototype building which was constructed using his company's Flexible Factory Formwork system. Credit: Automatic Construction)

We've heard how 3D-printed concrete buildings can be constructed quickly and easily, but could there be an even faster and simpler method? According to American inventor Alex Bell, there most certainly is – and it involves inflating buildings, then pumping concrete into them.

When we last heard from Bell, he had created a quirky front-wheel-drive bike with under-the-seat steering, known as the Bellcycle.

His new construction technique, called Inflatable Flexible Factory Formwork (IFFF), has been commercialized via his New York City-based startup, Automatic Construction. Here's a quick explanation of how it works …

The process begins with a truck delivering a rolled-up PVC (polyvinyl chloride) fabric "form" to the construction site. That flexible form is not unlike a giant version of a rolled up, deflated camping mattress.

After the form has been laid in place, air pumps are used to inflate its walls and roof. This causes it to pop up, taking on the three-dimensional shape of the finished structure. Next, locally sourced wet concrete is pumped into the walls and roof of the form, displacing the air inside.

Once that concrete has set, the result is a solid concrete building shell. The form is not removed from that shell, since it now serves as a waterproof, airtight, and thus energy-saving barrier. Features such as doors, windows, interior drywall and exterior siding are then added.




The PVC form stays in place on the finished structure, forming a waterproof and airtight barrier (Image: Automatic Construction)

In the prototype structures created so far, rebar reinforcements have also been added onsite. However, Bell tells us that he ultimately hopes to have the rebar, tension cables and other reinforcing elements preinstalled within the forms.

But just how quickly do the buildings go up?

"For our 100 square foot [9.3 sq m] and 200 square foot [18.6 sq m] prototypes, the inflation took seven to 10 minutes with air," he said. "Then the concrete pump filled them in 1.5 hours. Including labor, our prototypes only cost $20 per square foot. This is significantly cheaper than anything else."
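For scale, here is trivial arithmetic on the prototype figures Bell quotes; the only assumption added is the standard square-foot-to-square-meter conversion.

```python
# Sanity check of the quoted prototype figures; unit conversion only.
SQFT_PER_SQM = 10.7639

for area_sqft in (100, 200):
    cost = area_sqft * 20                  # $20 per square foot, per Bell
    area_sqm = area_sqft / SQFT_PER_SQM
    print(f"{area_sqft} sq ft (~{area_sqm:.1f} sq m): ~${cost:,}")
# 100 sq ft (~9.3 sq m): ~$2,000
# 200 sq ft (~18.6 sq m): ~$4,000
```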

Bell's team is now selling homes direct to customers in New York's Hudson Valley, with one project currently underway and another two signed. He tells us that his company has also signed one contract with a "large commercial contractor" to deliver a structure, and signed another contract to deliver a box culvert to an infrastructure contractor.

Along with homes, commercial buildings and infrastructure-related projects, other envisioned applications of the IFFF technology include swimming pool foundations, rapid-deploy military structures and perhaps even one day skyscrapers, or structures on Mars for use by astronauts.

Source: Automatic Construction

newatlas.com



From: Julius Wong, 9/24/2022 9:13:26 PM
 
The Wild Plan to Export Sun From the Sahara to the UK




By the time Scotland’s Hunterston B nuclear power station closed in January of this year, its dual reactors had produced enough energy to power 1.8 million British homes for 46 years. It also provided over 500 jobs to people living in one of the country’s most deprived areas. Now, a project borne on the tide of a new era of energy production will take its place.

The new XLCC factory, to be built at Hunterston in 2023, will not generate electricity. Instead, the site’s 900 workers plan to create four high-voltage, direct current (HVDC) electricity cables that will stretch 3,800 km from Britain’s south coast, beneath the sea, to a patch of desert at Guelmim Oued Noun in central Morocco. From there, they’ll deliver 10.5 gigawatts of Saharan sun and wind by 2030, enough to power 7 million British homes and meet 8 percent of the UK’s total electricity requirement.

Richard Hardy, project director at Xlinks, which developed the proposal, says people were “taken aback” by its scale. “But when you really step back, it almost becomes obvious that so long as you can get the power back, the project makes sense,” he says.

HVDC technology has existed since 1954, when Sweden connected the Island of Gotland to its mainland grid. HVDC cables experience low energy losses of around 2 percent, making them suitable for transporting electricity over long distances, compared to the 30 percent lost by alternating-current (AC) systems, which most energy grids operate on.
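Taking the article's loss figures at face value, a back-of-envelope comparison for a link the size of the Xlinks proposal looks like the sketch below; applying a single end-to-end loss percentage is a simplification for illustration only.

```python
# Back-of-envelope delivered power using the article's own loss figures.
# Applying one end-to-end loss percentage is a simplification.
generated_gw = 10.5           # Xlinks proposal, per the article
hvdc_loss, ac_loss = 0.02, 0.30

print(f"HVDC delivers ~{generated_gw * (1 - hvdc_loss):.1f} GW")   # ~10.3 GW
print(f"AC, at the article's figure, ~{generated_gw * (1 - ac_loss):.1f} GW")  # ~7.3 GW
```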

Until a few decades ago, HVDC only worked well when supported by strong, consistent energy-generating sources, like nuclear power plants. They also require converter stations the size of football fields to change the electricity back to AC at a cable’s terminus. The cables and current converter stations meant HVDC cost hundreds of millions of pounds. Installation can take decades. Then, in the 90s, a new system that used insulated gate bipolar transistors (IGBTs), or electronic switches, emerged. These allowed operators to mimic the voltage waveform of a strong energy source with that of weak sources, like solar and wind farms. HVDC projects still require enormous budgets, but the IGBTs allow them to use renewable energy sources. Operators were able to connect national grids with remote solar farms, and their popularity boomed.

HVDC systems can solve one of renewable energy’s biggest challenges: consistent supply. Wind farms generate too much energy when the wind blows and too little when it is still. Countries can access energy around the clock by connecting their grids to distant lands with different weather patterns.

The concept of connecting different countries’ grids also presents an economic opportunity. HVDC connectors give people access to the lowest prices. That provides an enormous benefit when regional events, like Russia’s invasion of Ukraine, prompt a rise in energy prices.

That’s one of the reasons the UK, where residential energy prices are now the second highest in Europe, has been among the fastest to adopt HVDC technology. Existing cables connect its grid with Ireland, France, Belgium, the Netherlands, and Norway. A new project to connect with Germany reached its funding target in July. And the Energy Security Bill now passing through parliament will accelerate the creation of HVDC projects by providing them with official licenses.

wired.com



To: Frank Sully who wrote (1086), 9/27/2022 4:03:33 AM
From: caleean
 
That is really interesting



From: Julius Wong, 9/27/2022 7:57:24 PM
 
Prototype electric airplane takes first flight

apnews.com
