Technology Stocks | The Singularity, A.I.: Machine & Deep Learning, and GFA

To: Doren who wrote (58) 9/18/2017 1:28:38 PM
From: koan
   of 229
Hi Doren, that is one of the most interesting posts I have ever read on SI. What makes it so interesting is that, on the one hand, you put forth ideas about the explosion of knowledge and juxtapose them with probable alien sentient beings, which I agree with; but on the other hand, you see a future that I think may be different than you suppose, not that I have any great insights into it.

I am reading Homo Deus right now and I think that is a book you should read. It is about the future and the author seems really smart.

But I would like to discuss all this with you further as I can see you are a good thinker.

But I am puzzled by your comment below. I am one of the last of the Haight-Ashbury hippies left alive. I think you may be misunderstanding what the counterculture revolution was about. I think most people never figured that one out.

Tom Brokaw wrote the book Boom! and said he asked Hillary Clinton if she ever cracked the code. I found it funny that he did not understand what the movement was about but wrote a book about it anyway.

Here is my take on the '60s. It was one of the most important milestones in the history of humankind: the first major existential movement, and it is now being repeated around the world.

So it was inevitable, because the thesis of that revolution was nothing more than getting rid of old, primitive ideas and replacing them with modern thinking.

So I wouldn't blame the hippies. They were simply ahead of their time, right on, and implementing our manifest destiny. They were simply the intellectual vehicles.



It's ironic that the hippies, who wanted to create a utopia, are so much responsible for the end of humanity.


From: Glenn Petersen 9/19/2017 7:17:46 AM
1 Recommendation   of 229
Chips Off the Old Block: Computers Are Taking Design Cues From Human Brains

New technologies are testing the limits of computer semiconductors. To deal with that, researchers have gone looking for ideas from nature.

New York Times
SEPT. 16, 2017

After years of stagnation, the computer is evolving again, prompting some of the world’s largest tech companies to turn to biology for insights. Credit Minh Uong/The New York Times

SAN FRANCISCO — We expect a lot from our computers these days. They should talk to us, recognize everything from faces to flowers, and maybe soon do the driving. All this artificial intelligence requires an enormous amount of computing power, stretching the limits of even the most modern machines.

Now, some of the world’s largest tech companies are taking a cue from biology as they respond to these growing demands. They are rethinking the very nature of computers and are building machines that look more like the human brain, where a central brain stem oversees the nervous system and offloads particular tasks — like hearing and seeing — to the surrounding cortex.

After years of stagnation, the computer is evolving again, and this behind-the-scenes migration to a new kind of machine will have broad and lasting implications. It will allow work on artificially intelligent systems to accelerate, so the dream of machines that can navigate the physical world by themselves can one day come true.

This migration could also diminish the power of Intel, the longtime giant of chip design and manufacturing, and fundamentally remake the $335 billion a year semiconductor industry that sits at the heart of all things tech, from the data centers that drive the internet to your iPhone to the virtual reality headsets and flying drones of tomorrow.

“This is an enormous change,” said John Hennessy, the former Stanford University president who wrote an authoritative book on computer design in the mid-1990s and is now a member of the board at Alphabet, Google’s parent company. “The existing approach is out of steam, and people are trying to re-architect the system.”

Xuedong Huang, left, and Doug Burger of Microsoft are among the employees leading the company’s efforts to develop specialized chips. Credit Ian C. Bates for The New York Times

The existing approach has had a pretty nice run. For about half a century, computer makers have built systems around a single, do-it-all chip — the central processing unit — from a company like Intel, one of the world’s biggest semiconductor makers. That’s what you’ll find in the middle of your own laptop computer or smartphone.

Now, computer engineers are fashioning more complex systems. Rather than funneling all tasks through one beefy chip made by Intel, newer machines are dividing work into tiny pieces and spreading them among vast farms of simpler, specialized chips that consume less power.

Changes inside Google’s giant data centers are a harbinger of what is to come for the rest of the industry. Inside most of Google’s servers, there is still a central processor. But enormous banks of custom-built chips work alongside them, running the computer algorithms that drive speech recognition and other forms of artificial intelligence.

Google reached this point out of necessity. For years, the company had operated the world’s largest computer network — an empire of data centers and cables that stretched from California to Finland to Singapore. But for one Google researcher, it was much too small.

In 2011, Jeff Dean, one of the company’s most celebrated engineers, led a research team that explored the idea of neural networks — essentially computer algorithms that can learn tasks on their own. They could be useful for a number of things, like recognizing the words spoken into smartphones or the faces in a photograph.

In a matter of months, Mr. Dean and his team built a service that could recognize spoken words far more accurately than Google’s existing service. But there was a catch: If the world’s more than one billion phones that operated on Google’s Android software used the new service just three minutes a day, Mr. Dean realized, Google would have to double its data center capacity in order to support it.
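Mr. Dean's warning can be sanity-checked with back-of-envelope arithmetic. The sketch below uses the article's figures plus an assumed, purely hypothetical processing rate; it is illustrative only, not Google's actual accounting:

```python
# Back-of-envelope sketch of the scaling problem Mr. Dean described.
# The per-phone usage comes from the article; the processing rate is
# an invented assumption for illustration.

PHONES = 1_000_000_000          # ~1 billion Android phones
MINUTES_PER_DAY = 3             # per-phone usage, per the article
SECONDS_PER_MINUTE = 60

# Total seconds of speech recognition demanded per day:
demand_seconds = PHONES * MINUTES_PER_DAY * SECONDS_PER_MINUTE

# Suppose one server CPU processes one second of audio per second of
# wall-clock time (a deliberately rough assumption). Then the fleet
# needs this many dedicated CPUs running around the clock:
SECONDS_PER_DAY = 24 * 60 * 60
cpus_needed = demand_seconds / SECONDS_PER_DAY

print(f"{demand_seconds:,} CPU-seconds/day")    # 180,000,000,000
print(f"{cpus_needed:,.0f} CPUs running 24/7")  # 2,083,333
```

Even under these generous assumptions, the demand is on the order of millions of always-on processors, which is why a dedicated chip looked cheaper than "another Google."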

“We need another Google,” Mr. Dean told Urs Hölzle, the Swiss-born computer scientist who oversaw the company’s data center empire, according to someone who attended the meeting. So Mr. Dean proposed an alternative: Google could build its own computer chip just for running this kind of artificial intelligence.

But what began inside data centers is starting to shift other parts of the tech landscape. Over the next few years, companies like Google, Apple and Samsung will build phones with specialized A.I. chips. Microsoft is designing such a chip specifically for an augmented-reality headset. And everyone from Google to Toyota is building autonomous cars that will need similar chips.

This trend toward specialty chips and a new computer architecture could lead to a “Cambrian explosion” of artificial intelligence, said Gill Pratt, who was a program manager at Darpa, a research arm of the United States Department of Defense, and now works on driverless cars at Toyota. As he sees it, machines that spread computations across vast numbers of tiny, low-power chips can operate more like the human brain, which efficiently uses the energy at its disposal.

“In the brain, energy efficiency is the key,” he said during a recent interview at Toyota’s new research center in Silicon Valley.

Change on the Horizon

There are many kinds of silicon chips. There are chips that store information. There are chips that perform basic tasks in toys and televisions. And there are chips that run various processes for computers, from the supercomputers used to create models for global warming to personal computers, internet servers and smartphones.

An older board and chip combination at Microsoft’s offices. Chips now being developed by the company can be reprogrammed for new tasks on the fly. Credit Ian C. Bates for The New York Times

For years, the central processing units, or C.P.U.s, that ran PCs and similar devices were where the money was. And there had not been much need for change.

In accordance with Moore’s Law, the oft-quoted maxim from Intel co-founder Gordon Moore, the number of transistors on a computer chip had doubled every two years or so, and that provided steadily improved performance for decades. As performance improved, chips consumed about the same amount of power, according to another, lesser-known law of chip design called Dennard scaling, named for the longtime IBM researcher Robert Dennard.

By 2010, however, doubling the number of transistors was taking much longer than Moore’s Law predicted. Dennard’s scaling maxim had also been upended as chip designers ran into the limits of the physical materials they used to build processors. The result: If a company wanted more computing power, it could not just upgrade its processors. It needed more computers, more space and more electricity.
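The doubling described above is easy to state as a formula. A toy projection, with an assumed starting transistor count:

```python
# Moore's Law as stated above: transistor counts double roughly every
# two years. The starting figure below is an assumption for illustration.

def moores_law(start_count, start_year, year, doubling_period=2):
    """Projected transistor count if the doubling cadence held perfectly."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# From an assumed 1 million transistors in 1990:
projected_2010 = moores_law(1_000_000, 1990, 2010)
print(f"{projected_2010:,.0f}")  # 1,024,000,000 — a thousandfold in 20 years
```

The article's point is that after 2010 the real curve fell below this exponential, while Dennard scaling's constant-power assumption broke down at the same time.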

Researchers in industry and academia were working to extend Moore’s Law, exploring entirely new chip materials and design techniques. But Doug Burger, a researcher at Microsoft, had another idea: Rather than rely on the steady evolution of the central processor, as the industry had been doing since the 1960s, why not move some of the load onto specialized chips?

During his Christmas vacation in 2010, Mr. Burger, working with a few other chip researchers inside Microsoft, began exploring new hardware that could accelerate the performance of Bing, the company’s internet search engine.

At the time, Microsoft was just beginning to improve Bing using machine-learning algorithms (neural networks are a type of machine learning) that could improve search results by analyzing the way people used the service. Though these algorithms were less demanding than the neural networks that would later remake the internet, existing chips had trouble keeping up.

Mr. Burger and his team explored several options but eventually settled on something called Field Programmable Gate Arrays, or F.P.G.A.s: chips that could be reprogrammed for new jobs on the fly. Microsoft builds software, like Windows, that runs on an Intel C.P.U. But such software cannot reprogram the chip, since it is hard-wired to perform only certain tasks.

With an F.P.G.A., Microsoft could change the way the chip works. It could program the chip to be really good at executing particular machine learning algorithms. Then, it could reprogram the chip to be really good at running logic that sends the millions and millions of data packets across its computer network. It was the same chip but it behaved in a different way.
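The "same chip, different behavior" idea can be loosely mimicked in software. This is only an analogy, not real FPGA tooling; the `ToyFPGA` class and both workloads are invented for illustration:

```python
# A software analogy for reprogrammable hardware: the same object can be
# loaded with different "configurations" (here, plain functions), changing
# what it computes without replacing it.

class ToyFPGA:
    def __init__(self):
        self._logic = None

    def reprogram(self, logic):
        """Load a new configuration describing the chip's behavior."""
        self._logic = logic

    def run(self, data):
        return self._logic(data)

fpga = ToyFPGA()

# First configuration: score inputs with a fixed set of weights,
# as in a machine-learning workload (weights are invented).
fpga.reprogram(lambda xs: sum(w * x for w, x in zip([0.2, 0.5, 0.3], xs)))
print(fpga.run([1.0, 2.0, 3.0]))  # ≈ 2.1

# Later, reprogram the same object to order network packets instead.
fpga.reprogram(lambda packets: sorted(packets, key=len))
print(fpga.run(["longer packet", "hi", "mid-size"]))
```

On a real F.P.G.A. the "configuration" is a hardware description loaded into the chip's logic fabric, but the economic point is the same: one physical part, many specialized behaviors.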

Microsoft started to install the chips en masse in 2015. Now, just about every new server loaded into a Microsoft data center includes one of these programmable chips. They help choose the results when you search Bing, and they help Azure, Microsoft’s cloud-computing service, shuttle information across its network of underlying machines.

Teaching Computers to Listen

In fall 2016, another team of Microsoft researchers — mirroring the work done by Jeff Dean at Google — built a neural network that could, by one measure at least, recognize spoken words more accurately than the average human could.

Xuedong Huang, a speech-recognition specialist who was born in China, led the effort, and shortly after the team published a paper describing its work, he had dinner in the hills above Palo Alto, Calif., with his old friend Jen-Hsun Huang (no relation), the chief executive of the chipmaker Nvidia. The men had reason to celebrate, and they toasted with a bottle of champagne.

Jeff Dean, one of Google’s most celebrated engineers, said the company should develop a chip for running a type of artificial intelligence; right, Google’s Tensor Processing Unit, or T.P.U. Credit Ryan Young for The New York Times

Xuedong Huang and his fellow Microsoft researchers had trained their speech-recognition service using large numbers of specialty chips supplied by Nvidia, rather than relying heavily on ordinary Intel chips. Their breakthrough would not have been possible had they not made that change.

“We closed the gap with humans in about a year,” Microsoft’s Mr. Huang said. “If we didn’t have the weapon — the infrastructure — it would have taken at least five years.”

Because systems that rely on neural networks can learn largely on their own, they can evolve more quickly than traditional services. They are not as reliant on engineers writing endless lines of code that explain how they should behave.

But there is a wrinkle: Training neural networks this way requires extensive trial and error. To create one that is able to recognize words as well as a human can, researchers must train it repeatedly, tweaking the algorithms and improving the training data over and over. At any given time, this process unfolds over hundreds of algorithms. That requires enormous computing power, and if companies like Microsoft use standard-issue chips to do it, the process takes far too long because the chips cannot handle the load and too much electrical power is consumed.
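The trial-and-error loop described here can be sketched in miniature: repeatedly adjust a parameter to shrink the error on training data. This toy example fits a single weight by gradient descent; all numbers are invented:

```python
# A minimal sketch of the repeated training passes described above:
# fit y = w * x to toy data by nudging w downhill on the error.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # toy examples; true w is 2
w = 0.0
lr = 0.02  # learning rate

for step in range(500):  # many repeated passes over the data
    # Gradient of mean squared error with respect to w:
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # ≈ 2.0
```

Real networks repeat this loop over millions of parameters and examples, and researchers rerun the whole process each time they tweak the architecture or data, which is where the enormous compute bill comes from.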

So, the leading internet companies are now training their neural networks with help from another type of chip called a graphics processing unit, or G.P.U. These low-power chips — usually made by Nvidia — were originally designed to render images for games and other software, and they worked hand-in-hand with the chip — usually made by Intel — at the center of a computer. G.P.U.s can process the math required by neural networks far more efficiently than C.P.U.s.
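The math in question is mostly matrix multiplication: a neural network layer's output is its weight matrix times its input vector. A minimal pure-Python version (a GPU's advantage is running the many independent multiply-adds below in parallel; the weights here are invented):

```python
# The core of neural-network math: multiplying a weight matrix by inputs.
# Every entry of the result is an independent dot product, which is why
# the work parallelizes so well onto a G.P.U.'s many small cores.

def matmul(A, B):
    """Multiply matrix A (m x n) by matrix B (n x p), both lists of rows."""
    n = len(B)
    p = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(A))]

# A toy 2-neuron layer applied to one 3-feature input:
weights = [[0.1, 0.2, 0.3],
           [0.4, 0.5, 0.6]]
inputs = [[1.0], [2.0], [3.0]]
print(matmul(weights, inputs))  # roughly [[1.4], [3.2]]
```
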

Nvidia is thriving as a result, and it is now selling large numbers of G.P.U.s to the internet giants of the United States and the biggest online companies around the world, in China most notably. The company’s quarterly revenue from data center sales tripled to $409 million over the past year.

“This is a little like being right there at the beginning of the internet,” Jen-Hsun Huang said in a recent interview. In other words, the tech landscape is changing rapidly, and Nvidia is at the heart of that change.

Creating Specialized Chips

G.P.U.s are the primary vehicles that companies use to teach their neural networks a particular task, but that is only part of the process. Once a neural network is trained for a task, it must perform it, and that requires a different kind of computing power.

After training a speech-recognition algorithm, for example, Microsoft offers it up as an online service, and it actually starts identifying commands that people speak into their smartphones. G.P.U.s are not quite as efficient during this stage of the process. So, many companies are now building chips specifically to do what the other chips have learned.

Google built its own specialty chip, a Tensor Processing Unit, or T.P.U. Nvidia is building a similar chip. And Microsoft has reprogrammed specialized chips from Altera, which was acquired by Intel, so that it too can run neural networks more easily.

Other companies are following suit. Qualcomm, which specializes in chips for smartphones, and a number of start-ups are also working on A.I. chips, hoping to grab their piece of the rapidly expanding market. The tech research firm IDC predicts that revenue from servers equipped with alternative chips will reach $6.8 billion by 2021, about 10 percent of the overall server market.

Bart Sano, the vice president of engineering who leads hardware and software development for Google’s network, acknowledged that specialty chips were still a relatively modest part of the company’s operation. Credit Ryan Young for The New York Times

Across Microsoft’s global network of machines, Mr. Burger pointed out, alternative chips are still a relatively modest part of the operation. And Bart Sano, the vice president of engineering who leads hardware and software development for Google’s network, said much the same about the chips deployed at its data centers.

Mike Mayberry, who leads Intel Labs, played down the shift toward alternative processors, perhaps because Intel controls more than 90 percent of the data-center market, making it by far the largest seller of traditional chips. He said that if central processors were modified the right way, they could handle new tasks without added help.

But this new breed of silicon is spreading rapidly, and Intel is increasingly a company in conflict with itself. It is in some ways denying that the market is changing, but nonetheless shifting its business to keep up with the change.

Two years ago, Intel spent $16.7 billion to acquire Altera, which builds the programmable chips that Microsoft uses. It was Intel’s largest acquisition ever. Last year, the company paid a reported $408 million buying Nervana, a company that was exploring a chip just for executing neural networks. Now, led by the Nervana team, Intel is developing a dedicated chip for training and executing neural networks.

“They have the traditional big-company problem,” said Bill Coughran, a partner at the Silicon Valley venture capital firm Sequoia Capital who spent nearly a decade helping to oversee Google’s online infrastructure, referring to Intel. “They need to figure out how to move into the new and growing areas without damaging their traditional business.”

Intel’s internal conflict is most apparent when company officials discuss the decline of Moore’s Law. During a recent interview with The New York Times, Naveen Rao, the Nervana founder and now an Intel executive, said Intel could squeeze “a few more years” out of Moore’s Law. Officially, the company’s position is that improvements in traditional chips will continue well into the next decade.

Mr. Mayberry of Intel also argued that the use of additional chips was not new. In the past, he said, computer makers used separate chips for tasks like processing audio.

But now the scope of the trend is significantly larger. And it is changing the market in new ways. Intel is competing not only with chipmakers like Nvidia and Qualcomm, but also with companies like Google and Microsoft.

Google is designing the second generation of its T.P.U. chips. Later this year, the company said, any business or developer that is a customer of its cloud-computing service will be able to use the new chips to run its software.

While this shift is happening mostly inside the massive data centers that underpin the internet, it is probably a matter of time before it permeates the broader industry.

The hope is that this new breed of mobile chip can help devices handle more, and more complex, tasks on their own, without calling back to distant data centers: phones recognizing spoken commands without accessing the internet; driverless cars recognizing the world around them with a speed and accuracy that is not possible now.

In other words, a driverless car needs cameras and radar and lasers. But it also needs a brain.

Follow Cade Metz on Twitter: @CadeMetz

A version of this article appears in print on September 17, 2017, on Page BU1 of the New York edition with the headline: Chip Off the Old Block


To: koan who wrote (59) 9/20/2017 6:08:27 AM
From: Glenn Petersen
   of 229
Can Futurists Predict the Year of the Singularity?

By Peter Rejcek
Singularity Hub
Mar 31, 2017

The end of the world as we know it is near. And that’s a good thing, according to many of the futurists who are predicting the imminent arrival of what’s been called the technological singularity.

The technological singularity is the idea that technological progress, particularly in artificial intelligence, will reach a tipping point to where machines are exponentially smarter than humans. It has been a hot topic of late.

Well-known futurist and Google engineer Ray Kurzweil (co-founder and chancellor of Singularity University) reiterated his bold prediction at Austin’s South by Southwest (SXSW) festival this month that machines will match human intelligence by 2029 (and has said previously the Singularity itself will occur by 2045). That’s two years before SoftBank CEO Masayoshi Son’s prediction of 2047, made at the Mobile World Congress (MWC) earlier this year.

Author of the seminal book on the topic, The Singularity Is Near, Kurzweil said during the SXSW festival that “what’s actually happening is [machines] are powering all of us. …They’re making us smarter. They may not yet be inside our bodies, but by the 2030s, we will connect our neocortex, the part of our brain where we do our thinking, to the cloud.”

That merger of man and machine—sometimes referred to as transhumanism—is the same concept that Tesla and SpaceX CEO Elon Musk talks about when discussing development of a neural lace. For Musk, however, an interface between the human brain and computers is vital to keep our species from becoming obsolete when the singularity hits.

Musk is also the driving force behind OpenAI, a billion-dollar nonprofit dedicated to ensuring the development of artificial general intelligence (AGI) is beneficial to humanity. AGI is another term for human-level intelligence. What most people refer to as AI today is weak or narrow artificial intelligence—a machine capable of “thinking” within a very narrow range of concepts or tasks.

Futurist Ben Goertzel, who among his many roles is chief scientist at financial prediction firm Aidyia Holdings and robotics company Hanson Robotics (and advisor to Singularity University), believes AGI is possible well within Kurzweil’s timeframe. The singularity is harder to predict, he says on his personal website, estimating the date anywhere between 2020 and 2100.

“Note that we might achieve human-level AGI, radical health-span extension and other cool stuff well before a singularity—especially if we choose to throttle AGI development rate for a while in order to increase the odds of a beneficial singularity,” he writes.

Meanwhile, billionaire Son of SoftBank, a multinational telecommunications and Internet firm based in Japan, predicts superintelligent robots will surpass humans in both number and brain power by 2047.

He is putting a lot of money toward making it happen. The investment arm of SoftBank, for instance, recently bankrolled $100 million in a startup called CloudMinds for cloud-connected robots, transplanting the “brain” from the machine to the cloud. Son is also creating the world’s biggest tech venture capitalist fund to the tune of $100 billion.

“I truly believe it’s coming, that’s why I’m in a hurry—to aggregate the cash, to invest,” he was quoted as saying at the MWC.

History of prediction

Kurzweil, Son, Goertzel and others are just the latest generation of futurists who have observed that humanity is accelerating toward a new paradigm of existence, largely due to technological innovation.

There are hints that philosophers as early as the 19th century, during the upheavals of the Industrial Revolution, recognized that the human race was a species fast-tracked for a different sort of reality. It wasn't until the 1950s, however, that the modern-day understanding of the singularity first took form.

Mathematician John von Neumann had noted that “the ever-accelerating progress of technology … gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”

In the 1960s, following his work with Alan Turing to decrypt Nazi communications, British mathematician I.J. Good invoked the singularity without naming it as such.

He wrote, “Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.”

Science fiction writer and retired mathematics and computer science professor Vernor Vinge is usually credited with coining the term “technological singularity.” His 1993 essay, “The Coming Technological Singularity: How to Survive in the Post-Human Era,” predicted the moment of technological transcendence would come within 30 years.

Vinge explains in his essay why he thinks the term “singularity”—in cosmology, the event where space-time collapses and a black hole forms—is apt: “It is a point where our models must be discarded and a new reality rules. As we move closer and closer to this point, it will loom vaster and vaster over human affairs till the notion becomes commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown.”

Prediction: An Inexact Science

But is predicting the singularity even possible?

A paper by Stuart Armstrong et al. suggests such predictions are a best guess at most. A database compiled by the Machine Intelligence Research Institute (MIRI), a nonprofit dedicated to social issues related to AGI, found 257 AI predictions from the period 1950-2012 in the scientific literature. Of these, 95 contained predictions giving timelines for AI development.

“The AI predictions in the database seem little better than random guesses,” the authors write. For example, the researchers found that “there is no evidence that expert predictions differ from those of non-experts.” They also observed a strong pattern that showed most AI prognostications fell within a certain “sweet spot”—15 to 25 years from the moment of prediction.
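The "sweet spot" pattern is simple to check on any list of predictions. The pairs below are invented examples for illustration, not entries from the MIRI database:

```python
# Checking the 15-to-25-year "sweet spot" on hypothetical predictions.
# Each pair is (year the prediction was made, predicted arrival year);
# all pairs are made up for this sketch.

predictions = [(1960, 1980), (1975, 1995), (1993, 2023),
               (2005, 2029), (2017, 2045)]

offsets = [arrival - made for made, arrival in predictions]
in_sweet_spot = [o for o in offsets if 15 <= o <= 25]

print(offsets)  # [20, 20, 30, 24, 28]
print(f"{len(in_sweet_spot)} of {len(offsets)} fall 15-25 years out")
```

The MIRI finding, in other words, is that the forecast horizon tracks the date of the forecast, which is what one would expect from guessing rather than extrapolation.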

Others have cast doubt that the singularity is achievable in the time frames put forth by Kurzweil and Son.

Paul Allen, co-founder of Microsoft and founder of the Allen Institute for Artificial Intelligence, among other ventures, has written that such a technological leap forward is still far in the future.

“[I]f the singularity is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs, and not because the Law of Accelerating Returns made it the inevitable result of a specific exponential rate of progress,” he writes, referring to the concept that past rates of progress can predict future rates as well.

Extinction or transcendence?

Futurist Nikola Danaylov, who manages the Singularity Weblog, says he believes a better question to ask is whether achieving the singularity is a good thing or a bad thing.

“Is that going to help us grow extinct like the dinosaurs or is it going to help us spread through the universe like Carl Sagan dreamed of?” he tells Singularity Hub. “Right now, it’s very unclear to me personally.”

Danaylov argues that the singularity orthodoxy of today largely ignores the societal upheavals already under way. The idea that “technology will save us” will not lift people out of poverty or extend human life if technological breakthroughs only benefit those with money, he says.

“I’m not convinced [the singularity is] going to happen in the way we think it’s going to happen,” he says. “I’m sure we’re missing the major implications, the major considerations.

“We have tremendous potential to make it a good thing,” he adds.


To: Glenn Petersen who wrote (61) 9/20/2017 11:58:28 AM
From: koan
   of 229
Thank you Glenn, that is the most important article I have read this year. I will repost it to my family and friends. A couple of comments about the coming singularity (all IMO):

1) It is pointless to argue whether the singularity is good or bad; it is coming and we cannot stop it, so we must prepare for it. And the only way to prepare is to better educate ourselves to understand increasingly complex concepts. IMO we are greatly undereducated, i.e. the average person is capable of much more intellectual sophistication. The primary reason we have not pursued it is that our academic institutions have told us intelligence is something we are born with, and so have put up primitive artificial barriers like SATs and language and math requirements to get into universities. While there are individual differences, by and large, intelligence is a matter of learning, and the longer we learn the more intelligent we will become.

In California, the many community colleges with no entrance requirements give kids who got a rough start, for myriad reasons, a second chance. If they succeed, which they usually do, they can transfer to higher institutions of learning, where they usually do just fine; before, they would have been deemed unfit for entrance. Education makes the average person smarter, all people.

Therefore, we have to learn about intelligence what we have learned about good health: just as health takes a lifetime of good nutrition and exercise, intelligence takes a lifetime of learning. Right now our government should provide free public education starting with preschool, with a hot lunch, and continuing through the Ph.D. and further if people want. It is in the best interest of society to do so, as it will pay back in increased productivity, a more sophisticated democracy, and reduced crime.

2) When I was young I made the great discovery that we humans do not deduce much; we learn. We have a mind evolved for the survival of a nomadic animal, and it does not function very well in the technological world we find ourselves in, e.g. we are hardwired for pattern recognition but not for probability, which we find hard to understand. So the meaning of life, IMO, is to find our way out of the illusory cave of dogma we are born into, as in Plato's allegory, and into the bright sunlight of a well-functioning, humanitarian, existential being, which can only be achieved by learning.

This starts with recognizing that education is not elitist, but rather essential for a healthy life and for surviving the singularity.



To: koan who wrote (62) 9/20/2017 5:24:35 PM
From: The Ox
   of 229
While you're asking about how people will deal with AI and/or the Singularity, I have similar questions about AI and how "machines" will deal with people. Let's start with EMPATHY. There are plenty of people in this world who seem to have little or no empathy... but it seems to me that it will be very important for machines to understand this concept.

Many of the issues you raise are partially explained by the fact that so many people have to struggle through their daily lives that they may not be able to spend the appropriate time learning, or on education, versus having to earn a living or needing to support a family or loved one.

so the meaning of life, IMO, then, is to find our way out of Plato's illusory cave of dogma we are born into and into the bright sunlight of a well functioning humanitarian existential being which can only be achieved by learning.
How does the Singularity encompass what you wrote above and do we end up with a "well functioning humanitarian existential AI being" coming out of this process?


To: Glenn Petersen who wrote (34) 9/20/2017 5:29:42 PM
From: The Ox
   of 229
Message 31208383

Meanwhile, across campus, Williams, who is the design lead for Cortana, is building out an ethical design guide for AI to be used inside Microsoft. Williams is, to an absurd degree, a techno-optimist, and she believes that AI’s true magic is that it will make us more human. She talks a lot about how to design empathy into the tools Microsoft builds. “We think about making the human feel more powerful and protected, and supported, and assisted, and loved, and the center of their world,” she says. “AI's job is to amplify the best of society and the best of human behavior, not the worst.”

I ask Williams if she believes AI can really make humans feel more emotionally supported. She’s certain it can. Take a child who has had a bad day at school. She comes home and shares the whole story with a family pet, and feels better. “That gives you this cathartic sense of I've shared something, and I've had a warm, fuzzy hug back from the dog or cat,” says Williams. “But, you know, with AI you can have the same feeling of amplification back... And we see it when Cortana manages to remind you, ‘Hey, you promised you'd send something to your mother today for Mother's Day,’ and you suddenly feel human again.”


To: The Ox who wrote (63)9/20/2017 8:36:21 PM
From: koan
   of 229
Good questions; everything below is IMO.

With regard to public preschool with hot lunch: the parent can drop off the kid and go to school or work, and everyone wins. The kid gets an early start on education (we may find this is crucial, as it may mitigate the brain's pruning of important capabilities that takes place in the young) and on socialization, e.g. hanging up their coat and mixing with others, which is good for mental health (we are a pack animal). The parent wins because they can go to school or work, and society wins because I believe it is impossible for a society to lose money on education. When a society educates a citizen, it gets that investment back many times over in increased productivity and reduced crime. Most crime is the result of ignorance.

With regard to AI's behavior, my theory is that AI will know all the history of humankind and will be able to see the bad, the good, and the ugly in our history and stories, and so it will know that being good is a good thing to do :)>.

How it relates to Plato (the world's first existentialist, IMO) is that AI is going to be all about knowledge and will also be existential in nature. Plato's Cave is about transforming from a reality of myth to a reality of awareness and knowledge, so it is important we know what we are talking about when we start interacting with AI.

Otherwise we will not recognize what it is doing.

<<Message #63 from The Ox at 9/20/2017 5:24:35 PM

While you're asking about how people will deal with AI and/or the Singularity, I have similar questions about AI and how "machines" will deal with people. Let's start with EMPATHY. There are enough people in this world who seem to have little or no empathy... but it seems to me that it will be very important for machines to understand this concept.

Many of the issues you raise are partially explained by the fact that so many people have to struggle through their daily lives that they may not be able to spend the appropriate time "learning" or on education, versus having to earn a living or support a family or loved one.

So the meaning of life, IMO, then, is to find our way out of Plato's illusory cave of dogma we are born into and into the bright sunlight of a well-functioning humanitarian existential being, which can only be achieved by learning.
How does the Singularity encompass what you wrote above, and do we end up with a "well-functioning humanitarian existential AI being" coming out of this process?


From: koan9/24/2017 12:39:37 PM
   of 229
Fixation on negative and positive things: a great and important concept to think about, and one people very seldom do. See bottom of page.

I am reading Homo Deus and, as I expected, the entire book is written at a level of intellectual sophistication a full level above the one most of us live in, i.e., he applies logic to mundane things in a way that makes one realize basic concepts are much larger in scope than we usually consider, e.g. what is happiness, how should we pursue it, what does it represent in the grand scheme of things; or, lol, that we are an animal trying to negotiate a technological environment (created haphazardly) with a mind evolved for a nomadic lifestyle, not for the world it now has to survive in.

So we need to create an existential mind that can adapt to the modern world.

AI is going to be operating in the existential reality and so we need to be able to see it.

I recommend Homo Deus for one to see what reality looks like when understood a full level above the reality people live in, as seen in common conversation, perception, and lifestyle choices.


An interesting consequence of the human mind is our fixation on the negative. After watching this TED talk I realized how true this was for me personally. Experiments show that negativity is twice as 'sticky' as positive thought. Norman Vincent Peale talked about the power of positive thinking. The skeptics and cynics (like me) find great humor in Life of Brian, where they are all on the cross singing "Always Look on the Bright Side of Life" - a personal favorite. But in that is a powerful message: try not to get sucked down by the surplus of negativity, and remember that bad stuff impacts people twice as much as good and is twice as hard to reverse, leading to a 4x multiplier.
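The 4x figure is just the two 2x asymmetries compounded. A minimal sketch of that arithmetic (the numbers are the rough 2x ratios cited above, not measured values):

```python
# Illustrative only: treat a negative event as having ~2x the felt impact
# of a positive one, and taking ~2x the effort to reverse.
POSITIVE_IMPACT = 1.0
NEGATIVITY_STICKINESS = 2.0   # negative thoughts are ~2x as 'sticky'
REVERSAL_DIFFICULTY = 2.0     # and ~2x as hard to undo

# Total relative cost of a negative event vs. an equivalent positive one:
negative_cost = POSITIVE_IMPACT * NEGATIVITY_STICKINESS * REVERSAL_DIFFICULTY
print(negative_cost / POSITIVE_IMPACT)  # -> 4.0
```

So one bad experience takes roughly four good ones of equal size to cancel out, on this back-of-the-envelope model.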


To: koan who wrote (59)9/24/2017 1:08:15 PM
From: zzpat
   of 229
I'd be curious to see what part of the 60s historians think changed the world more, birth control (the pill) or all the other issues.


To: zzpat who wrote (67)9/25/2017 11:22:21 PM
From: koan
   of 229
They both changed the world greatly. I see no reason to ask that question.

The 60's was transformational because it was a period when the kids and liberals had a sort of collective epiphany, which was: "we need to throw out primitive, destructive ideas, like racism and misogyny and tribal dogma, which prevented so many from self-actualization and manifest destiny, and replace them with modern existential humanitarian thinking."

Continuing to read Homo Deus. He explains how the rich are starting to "buy" designer babies, and there will be no stopping it. Kids with three parents: two contributing nuclear DNA and one contributing mitochondrial DNA; or fertilize several eggs and pick the one with the fewest defects.

This is where we are headed, with no going back, so buckle your seat belt. Those who choose denial over embracing this reality will have a tough life, and if they teach that to their kids, the kids will be left behind.

And if we do not address it here, other countries will, so--------?


I'd be curious to see what part of the 60s historians think changed the world more, birth control (the pill) or all the other issues.


Copyright © 1995-2018 Knight Sac Media. All rights reserved. Stock quotes are delayed at least 15 minutes - See Terms of Use.