
   Technology Stocks : The Singularity, A.I.: Machine & Deep Learning, and GFA


To: John P who wrote (73) 9/27/2017 11:10:22 AM
From: richardred
   of 165
 
Morgan Stanley Calls For 150% Upside In Ambarella

Shanthi Rexaline , Benzinga Staff Writer

September 27, 2017 9:35am

Following its time with Ambarella Inc (AMBA) management on a road show, Morgan Stanley raised its bull case for the company to $115, suggesting 152-percent upside. The optimistic scenario is based on the firm getting increasingly excited about computer vision.

As such, the firm maintains an Overweight rating on the shares of Ambarella, with a price target of $60.

In pre-market trading, shares of Ambarella were up 3.68 percent at $47.28.
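A quick back-of-the-envelope check of those figures (my own arithmetic, not part of the Benzinga piece): if the 3.68 percent pre-market move is measured against the prior close, that close works out to roughly $45.60, and the $115 bull case is indeed about 152 percent above it.

```python
# Rough sanity check of the quoted upside figure; assumes the 3.68% pre-market
# move is measured against the prior close (an inference, not stated in the article).
premarket_price = 47.28
premarket_move = 0.0368
bull_case_target = 115.00

prior_close = premarket_price / (1 + premarket_move)   # ~= $45.60
implied_upside = bull_case_target / prior_close - 1     # ~= 1.52 -> ~152%

print(f"implied prior close: ${prior_close:.2f}")
print(f"upside to $115 bull case: {implied_upside:.0%}")
```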

Analysts Joseph Moore and Craig Hettenbach noted that the company is upbeat about a significant advancement in the state of the art for dedicated CV chips. If the actual product lives up to this potential, the analysts feel the narrative completely changes.

The analysts think this should pave the way for Ambarella to move from a socket-driven niche-market company to a leader in a market that has recently emerged as a strategic priority for every company in semiconductors.

Computer Vision: Emerging Opportunity In Semiconductors

CV is the processing of video images to extract information, and such video analytics are key to several of the most important potential growth drivers in semiconductors, especially driver assistance and, eventually, autonomous driving.
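For readers unfamiliar with the term, here is a minimal sketch of that kind of video analytics, using OpenCV's stock pedestrian detector over a video clip. The file name is a placeholder, and the example only illustrates the workload category; it says nothing about Ambarella's own silicon, which runs such tasks in dedicated hardware rather than on a CPU.

```python
# Minimal illustration of "processing video images to extract information":
# OpenCV's built-in HOG pedestrian detector applied frame by frame.
# "dashcam.mp4" is a hypothetical input; this is not Ambarella's technology.
import cv2

cap = cv2.VideoCapture("dashcam.mp4")
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Each detection is an (x, y, w, h) bounding box around a person.
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    print(f"frame {frame_idx}: {len(boxes)} pedestrian(s) detected")
    frame_idx += 1

cap.release()
```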

Morgan Stanley noted that such capabilities are also important in several aspects of consumer and industrial IoT, where camera inputs will be used to automate and remotely inform a wide variety of tasks. With software development moving from hand-coded heuristics to machine learning, the firm said recent breakthroughs in artificial intelligence ("deep learning") will accelerate these technologies.

See also: From ADI To AMD, Morgan Stanley Breaks Down Semiconductors

Current State Of Computer Vision

Morgan Stanley noted that most CV-type functions today are performed with programmable chips, typically some combination of microprocessors from Intel Corporation (INTC), Advanced Micro Devices, Inc. (AMD) and ARM licensees, graphics processors from AMD and NVIDIA Corporation (NVDA), digital signal processing chips from Texas Instruments Incorporated (TXN) and Analog Devices, Inc. (ADI), and field-programmable chips from Xilinx, Inc. (XLNX) and Altera, a subsidiary of Intel.

The firm pointed out that there are only a few dedicated solutions for CV, notably Movidius, which was acquired by Intel in 2016, and Mobileye, acquired by Intel earlier this year. The firm also noted that Nvidia has introduced tailored driving solutions, although those are based on its more general-purpose Tegra X2 chips.

Though the firm said it is too early to pick winners, it sees several years of growth for all of these product categories in the vision area. The firm expects dedicated designs to provide the best performance per watt, but to lack the design flexibility of more general purpose approaches. That said, the firm believes dedicated solutions are likely to create significant opportunity.

Challenges Remain

That said, Morgan Stanley referred to the possibility of a long path to CV revenue for Ambarella amid manageable challenges in the core business. Additionally, the firm thinks the stock has quickly discounted the machine vision opportunity.

"Automotive driver assistance/autonomy won't drive revenue for multiple years, but development relationships with Tier one suppliers could be announced in 2018 — it's not our base case, but if that happens, we see substantial upside," the firm said.

At last check, shares of Ambarella were up 4.74 percent at $47.76.

Related Link: Ambarella's Hyper-Seasonality Continues To Be A Concern


Latest Ratings for AMBA

Date        Firm                 Action       From    To
Sep 2017    Canaccord Genuity    Maintains            Buy
Sep 2017    Deutsche Bank        Maintains            Hold
Sep 2017    Craig-Hallum         Downgrades   Buy     Hold

benzinga.com




From: koan 9/27/2017 5:23:20 PM
   of 165
 


GOD IS A BOT, AND ANTHONY LEVANDOWSKI IS HIS MESSENGER


https://www.wired.com/story/god-is-a-bot-and-anthony-levandowski-is-his-messenger?mbid=synd_digg


Many people in Silicon Valley believe in the Singularity—the day in our near future when computers will surpass humans in intelligence and kick off a feedback loop of unfathomable change.

When that day comes, Anthony Levandowski will be firmly on the side of the machines. In September 2015, the multi-millionaire engineer at the heart of the patent and trade secrets lawsuit between Uber and Waymo, Google’s self-driving car company, founded a religious organization called Way of the Future. Its purpose, according to previously unreported state filings, is nothing less than to “develop and promote the realization of a Godhead based on Artificial Intelligence.”...(more @ link)



To: koan who wrote (77) 9/27/2017 8:48:12 PM
From: dvdw©
   of 165
 
That POV is right up your alley... cognitive dissonance and all that goes with it boils down to the biology of inadequate belief. Remember peak oil, that wasteland of obfuscation?

Momentum comes and goes as fashions dictate (politics in the cesspool, as they are). You must have new instructions; new terrain is always bought and paid for.

Counting lines of output and input are governed under the terms of counter-programming Popper. RO/RS = CF. This equation replaces E = mc², but you will never get it.



To: dvdw© who wrote (78) 9/27/2017 9:03:48 PM
From: koan
   of 165
 

You are right, I have no idea what that post says. Neither does anyone else, I think :).

Especially that first paragraph.

AI, the singularity, will be about complex thought, higher order thinking.

What that will turn out to be, who knows; only that we had better learn to think logically and in a complex manner.

Ordinary thinking will not cut it any more.

Brave new world.



<<

That POV is right up your alley... cognitive dissonance and all that goes with it boils down to the biology of inadequate belief.

Momentum comes and goes as fashions dictate.

Counting lines of output and input are governed under the terms of counter-programming Popper. RO/RS = CF. This equation replaces E = mc², but you will never get it.



To: John P who wrote (74) 9/27/2017 9:33:10 PM
From: Glenn Petersen
   of 165
 
The formation of the Vision Fund by Masayoshi Son is either a sign of a market top or an act of genius. Mr. Son has been down this road before.

As the dot-com bubble burst, he reportedly lost $70 billion in one day. He admits that 99% of his net worth was wiped out in 2000.

Message 31282818



To: koan who wrote (59) 9/29/2017 7:59:15 PM
From: Doren
   of 165
 
Thanks for the Kudos...

- I've put Homo Deus on my booklist, which is really for tomorrow, when I tire of the internet and start reading again and when movies and music are no longer cheap. I'm currently collecting books, mostly by classic great writers and philosophers, but only on the cheap.

I'm currently watching a 24-hour lecture series on Great Britain from the Tudors through the Stuarts. There is plenty of free video on the web now; you can watch lecture courses that would cost thousands at Harvard et al. for free. European history has been an obsession of mine for some time, since our world view formed primarily there, after the Reformation and the Enlightenment.

- Hippies

I didn't blame them necessarily, especially not the real hippies (as opposed to posers). I was one of the real ones, growing up two hours from Haight-Ashbury, in Sacramento, during the '60s. I just think it's ironic that it turned out like it did.

What I meant was we were all obsessed with the future, Star Trek... the naive idea that our ideas would change the world into a near Star Trekian utopia. We failed to a large degree to change anything...

What it was was a modern enlightenment... it changed the world view of many... but not enough.

We didn't see AI and genetic engineering coming so soon and merging. So it's ironic that many of the San Francisco people/hippies who created the personal computer industry, because they were utopian futurists, created a business that enabled and, I think, inevitably will lead to the end of the natural human race. People like Jobs, Wozniak, the Knoll brothers and Robert Abel, who was a friend of mine:

Robert Abel WIKI

It would have been hard for anyone to see that, so I don't blame them. Computers have brought us many good things. I write music on my computer; very good music, I think.

You can listen for free here; I recommend the Unsound Educational CD.

But nonetheless I think we are toast... whether that is the natural evolution scenario after all is anyone's guess at this time.

For myself, I'm 64, just about to semi-retire and hopefully live in the woods doing what I want, as unaffected by coming technology as is possible. I don't think I'll manage to live until immortality comes... which is right after the bionic/AI merge.



From: The Ox 10/2/2017 2:37:08 PM
   of 165
 
fortune.com

What do you think about the current debate about artificial intelligence? Elon Musk has said it poses an existential threat to humanity.

Technology has always been a double-edged sword, since fire kept us warm but burned down our houses. It’s very clear that overall human life has gotten better, although technology amplifies both our creative and destructive impulses. A lot of people think things are getting worse, partly because that’s actually an evolutionary adaptation: It’s very important for your survival to be sensitive to bad news. A little rustling in the leaves may be a predator, and you better pay attention to that. All of these technologies are a risk. And the powerful ones—biotechnology, nanotechnology, and A.I.—are potentially existential risks. I think if you look at history, though, we’re being helped more than we’re being hurt.

How will artificial intelligence and other technologies impact jobs?


We have already eliminated all jobs several times in human history. How many jobs circa 1900 exist today? If I were a prescient futurist in 1900, I would say, “Okay, 38% of you work on farms; 25% of you work in factories. That’s two-thirds of the population. I predict that by the year 2015, that will be 2% on farms and 9% in factories.” And everybody would go, “Oh, my God, we’re going to be out of work.” I would say, “Well, don’t worry, for every job we eliminate, we’re going to create more jobs at the top of the skill ladder.” And people would say, “What new jobs?” And I’d say, “Well, I don’t know. We haven’t invented them yet.”

That continues to be the case, and it creates a difficult political issue because you can look at people driving cars and trucks, and you can be pretty confident those jobs will go away. And you can’t describe the new jobs, because they’re in industries and concepts that don’t exist yet.




To: The Ox who wrote (82) 10/2/2017 2:58:45 PM
From: The Ox
   of 165
 

journals.plos.org

Predicting couple therapy outcomes based on speech acoustic features
Introduction

Behavioral Signal Processing (BSP) [1, 2] refers to computational methods that support the measurement, analysis, and modeling of human behavior and interactions. The main goal is to support decision making by domain experts, such as mental health researchers and clinicians. BSP maps real-world signals to behavioral constructs, often abstract and complex, and has been applied in a variety of clinical domains including couples therapy [1, 3, 4], Autism Spectrum Disorder [5], and addiction counseling [6, 7]. Parallel work focused on social context rather than the health domains can be found in [8, 9]. Notably, couple therapy has been one of the key application domains of Behavioral Signal Processing. There have been significant efforts in characterizing the behavior of individuals engaged in conversation with their spouses during problem-solving interaction sessions. Researchers have explored information gathered from various modalities such as vocal patterns of speech [3, 4, 10, 11], spoken language use [1, 12] and visual body gestures [13]. These studies are promising steps towards automated support systems that give psychotherapists objective measures for diagnostics, intervention assessment and planning. This entails not only characterizing and understanding a range of clinically meaningful behavior traits and patterns but, critically, also measuring behavior change in response to treatment. A systematic and objective study and monitoring of the outcome relevant to the respective condition can facilitate positive and personalized interventions. In particular, in clinical psychology, predicting (or measuring from couple interactions, without couple- or therapist-provided metrics) the outcome of the relationship of a couple undergoing counseling has been a subject of long-standing interest [14-16].

Many previous studies have manually investigated what a couple's behavioral traits and patterns can tell us about their relationship outcome, for example, whether a couple could successfully recover from their marital conflict or not. Often the monitoring of outcomes involves a prolonged period post treatment (up to 5 years), highly subjective self-reporting, and manual observational coding [17]. Such an approach suffers from the inherent limitations of qualitative observational assessment, the subjective biases of the experts, and great variability in the couples' self-reporting of behavior. A computational framework for outcome prediction can be beneficial for assessing the employed therapy strategies and the quality of treatment, and can also help provide feedback to the experts.

In this article, we analyze the vocal speech patterns of couples engaged in problem-solving interactions to infer the eventual outcome of their relationship (whether it improves or not) over the course of therapy. The proposed data-driven approach focuses primarily on the acoustics of the interaction, which are unobtrusively obtainable and known to offer rich behavioral information. We adopt well-established speech signal processing techniques, in conjunction with novel data representations inspired by psychological theories, to design the computational scheme for the therapy outcome prediction considered here. We formulate the outcome prediction as binary (improvement vs. no improvement) and multiclass (different levels of improvement) classification problems and use machine learning techniques to automatically discern the underlying patterns of these classes from the speech signal.
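As a rough picture of the recipe that paragraph describes, here is a minimal sketch: frame-level acoustic features collapsed into session-level functionals and fed to a binary classifier. The librosa/scikit-learn calls, file names, and labels are illustrative assumptions, not the authors' actual pipeline or data.

```python
# Minimal sketch of binary outcome prediction from speech acoustics.
# File names and labels are hypothetical; this is not the paper's pipeline.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def session_features(wav_path):
    """Collapse frame-level acoustics into a fixed-length session feature vector."""
    y, sr = librosa.load(wav_path, sr=16000)
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)         # frame-level pitch
    rms = librosa.feature.rms(y=y)[0]                      # frame-level energy
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # spectral shape
    streams = [f0, rms] + list(mfcc)
    # Simple functionals (mean/std per stream) summarize the whole session.
    return np.array([stat(s) for s in streams for stat in (np.mean, np.std)])

# Hypothetical session recordings and binary outcomes (1 = improved, 0 = not).
sessions = ["couple01.wav", "couple02.wav", "couple03.wav", "couple04.wav"]
outcomes = np.array([1, 0, 1, 0])

X = np.vstack([session_features(p) for p in sessions])
scores = cross_val_score(SVC(kernel="linear"), X, outcomes, cv=2)
print("cross-validated accuracy:", scores.mean())
```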

We compare prediction using features directly derived from speech with prediction using clinically relevant behavioral ratings (e.g., relationship satisfaction, blame patterns, negativity) manually coded by experts after observing the interactions. It should be noted that the human behavioral codes are based on watching videos of the interactions, which provide access to additional information beyond the vocal patterns (on which the proposed prediction scheme solely relies), including language use and visual nonverbal cues.

In addition to evaluating how well directly signal-derived acoustic features compare with manually derived behavioral codes as features for prediction, we also evaluate the prediction of the outcome when both feature streams are used together.
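A minimal sketch of that comparison might look like the following: the same classifier evaluated on acoustic features alone, behavioral codes alone, and the concatenation of the two (early fusion). The arrays are random placeholders standing in for the paper's data, just to show the shape of the experiment.

```python
# Compare acoustic features, behavioral codes, and their concatenation (early fusion).
# All data here are random placeholders, not the Couple Therapy Corpus.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_couples = 100
X_acoustic = rng.normal(size=(n_couples, 30))   # e.g. session-level acoustic functionals
X_codes = rng.normal(size=(n_couples, 10))      # e.g. negativity, blame, satisfaction ratings
y = rng.integers(0, 2, size=n_couples)          # 1 = relationship improved, 0 = not

for name, X in [("acoustic", X_acoustic),
                ("behavioral codes", X_codes),
                ("fused", np.hstack([X_acoustic, X_codes]))]:
    acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
    print(f"{name:>16s}: {acc:.2f}")
```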

We also investigate the benefit of explicitly accounting for the dynamics and mutual influence of the dyadic behavior during the interaction for the prediction task. The experimental results show that dynamic functionals that measure relative vocal changes within and across interlocutors contribute to improved outcome prediction.
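To give a concrete flavor of "dynamic functionals within and across interlocutors", the sketch below computes turn-to-turn changes for each partner and the turn-by-turn gap between them from hypothetical per-turn pitch values; the paper's actual functionals differ.

```python
# Illustrative dynamic functionals over a dyad; the paper's exact features differ.
import numpy as np

def dynamic_functionals(feat_a, feat_b):
    """feat_a, feat_b: per-turn feature values (e.g. median pitch) for each partner."""
    within_a = np.diff(feat_a)            # speaker A's change between turns
    within_b = np.diff(feat_b)            # speaker B's change between turns
    across = feat_a - feat_b              # turn-by-turn gap between partners
    return np.array([
        within_a.mean(), within_a.std(),  # how much A shifts, and how steadily
        within_b.mean(), within_b.std(),
        across.mean(), across.std(),      # how far apart the partners stay
    ])

# Hypothetical per-turn median pitch (Hz) for two partners over six turns.
pitch_a = np.array([210.0, 205.0, 220.0, 230.0, 225.0, 215.0])
pitch_b = np.array([120.0, 125.0, 118.0, 130.0, 128.0, 122.0])
print(dynamic_functionals(pitch_a, pitch_b))
```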

The outline of the paper is as follows. We first discuss relevant literature and describe the Couple Therapy Corpus used in the study (illustrated in Fig 1). We then give an overview of the methodologies for speech acoustic feature extraction and describe the use of behavioral codes as features. We provide an analysis of the proposed acoustic features and the results of the classification experiments. Finally, we conclude the paper with a discussion of our findings as well as possible directions for future research.




To: The Ox who wrote (82) 10/3/2017 8:00:51 AM
From: w0z
   of 165
 
“Well, don’t worry, for every job we eliminate, we’re going to create more jobs at the top of the skill ladder.”


What happens if a large part of today's workforce is at an ~8th grade level of education?



To: The Ox who wrote (83) 10/3/2017 11:24:11 AM
From: The Ox
   of 165
 
Walmart spending on AI:

Message 31288137

