
   Technology Stocks : Cloud computing (On-demand software and services)


From: Glenn Petersen  8/2/2017 9:24:50 PM
   of 1405
 
Inside Salesforce’s Quest to Bring Artificial Intelligence to Everyone

Author: Scott Rosenberg
backchannel
08.02.17 07:00 am




Shubha Nabar, director of data science, at the Salesforce office in San Francisco.
Photography: Jason Henry. Photo direction: Michelle Le.



Optimus Prime—the software engine, not the Autobot overlord—was born in a basement under a West Elm furniture store on University Avenue in Palo Alto. Starting two years ago, a band of artificial-intelligence acolytes within Salesforce escaped the towering headquarters with the goal of crazily multiplying the impact of the machine learning models that increasingly shape our digital world—by automating the creation of those models. As shoppers checked out sofas above their heads, they built a system to do just that.

They named it after the Transformers leader because, as one participant recalls, “machine learning is all about transforming data.” Whether the marketing department thought better of it, or the rights weren’t available, the Transformers tie-in didn't make it far out of that basement. Instead, Salesforce licensed the name of a different world-transforming hero—and dubbed its AI program Einstein.

The pop culture myths the company has invoked for its AI effort—the robot leader; the iconic genius—represent the kind of protean powers the technology is predicted to attain by both its most ardent hypesters and its gloomiest critics. Salesforce stands firmly on the hype side of this divide—no one cheers louder, in AI promotion as in everything else. But the company’s actual AI program is more pragmatic than messianic or apocalyptic.

This past March, Salesforce flipped a switch and made a big chunk of Einstein available to all of its users. Of course it did. Salesforce has always specialized in putting advanced software into everyday businesses' hands by moving it from in-house servers to the cloud. The company’s original mantra was “no software.” Its customers wouldn’t have to purchase and install complex programs and then pay to maintain and upgrade them—Salesforce would take care of all that at its data centers in the cloud. That seems obvious now, but when Salesforce launched in 1999 it sounded as revolutionary as AI does to us today.




Talkin’ revolution has been good for Salesforce. The firm now has 26,000 employees worldwide, and it has pasted its name on the city’s new tallest skyscraper. Its founder, Marc Benioff, is a philanthropist who has put his own name on hospitals and foundations. Despite all this, in its own world of B2B (business-to-business) software, Salesforce still holds onto its scrappy upstart self-image.

So naturally, when the AI trend took off, the people inside the company and the experts they recruited coalesced around an idealistic mission. The team set out to create “AI for everyone”—to make machine learning affordable for companies who’ve been priced out of the market for experts. They promised to “democratize” AI.

That sounds a bit risky! Can we trust the people with such awesome powers? (Cut to chorus of Elon Musk, Stephen Hawking, and Nick Bostrom singing a funeral mass for humanity.) But what Salesforce has in mind isn’t all that subversive. Its Einstein isn’t the guy who overthrew centuries of orthodox physics and enabled the H-bomb; he’s just a cute brainiac who can answer all your questions. Salesforce’s populist slogan is simply about making a new generation of technology accessible to mere mortals. Other, bigger companies—Microsoft, Google, Amazon—may outgun Salesforce in sheer research muscle, but Salesforce promises to put a market advantage into its customers’ hands right now. That begins with the mundane business of ranking lists of sales leads.



“What do I work on next?”

Most of us ask that question many times every day. (And too many of us end up answering, “Check Facebook” or “See if Trump tweeted again!”) To-do apps and personal productivity systems offer some help, but often turn into extra work themselves. What if artificial intelligence answered the “next task” question for you?

That’s what the Salesforce AI team decided to offer as Einstein’s first broadly available, readymade tool. Today Salesforce offers all kinds of cloud-based services for customer service, ecommerce, marketing and more. But at its root, it’s a workaday CRM (customer relationship management) product that salespeople use to manage their leads. Prioritizing these opportunities can get complicated fast and takes up precious time. So the Einstein Intelligence module—a little add-on column at the far right of the basic Salesforce screen—will do it for you, ranking them based on, say, “most likely to close.” For marketers, who also make up a big chunk of Salesforce customers, it can take a big mailing list and sort individual recipients by the likelihood that they’ll open an email.

But wait, what qualifies this as artificial intelligence? Anyone can tell a spreadsheet to sort a list based on different factors. The machine learning difference is simple but profound: The program studies the history of the data and figures out for itself which factors best predict the future—and then it keeps adjusting its model based on new information over time. The more data, the subtler and more powerful the answers, which is why Einstein can work not only from columns of basic Salesforce data but also from information like sales email threads that it parses and images that it reads.
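
Salesforce hasn’t published Einstein’s internals, but the basic pattern the article describes (fit a model to historical lead outcomes, then rank open leads by their predicted probability of closing) can be sketched in a few lines of scikit-learn. Everything below is hypothetical: the file names, the column names, and the choice of a logistic regression are illustrative, not Einstein’s actual design.

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Historical leads with known outcomes (1 = closed, 0 = lost); columns are made up.
history = pd.read_csv("closed_leads.csv")          # hypothetical export
features = ["industry", "lead_source", "employee_count", "emails_opened"]

preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["industry", "lead_source"]),
    ("num", StandardScaler(), ["employee_count", "emails_opened"]),
])
model = Pipeline([("prep", preprocess), ("clf", LogisticRegression(max_iter=1000))])
model.fit(history[features], history["closed"])

# Score currently open leads and rank them, most likely to close first.
open_leads = pd.read_csv("open_leads.csv")         # hypothetical export
open_leads["close_score"] = model.predict_proba(open_leads[features])[:, 1]
print(open_leads.sort_values("close_score", ascending=False).head(10))

The "more data, subtler answers" point amounts to refitting this same recipe as new closed leads accumulate, so the ranking keeps adapting without anyone rewriting the rules by hand.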



An Einstein character at the Salesforce office in San Francisco.
Jason Henry

Salesforce director of product marketing Ally Witherspoon uses the example of a solar-panel sales outfit using the machine learning tool to discover that a key factor in predicting a customer’s chances of saying “yes” is whether the house’s roof is pitched in a solar-friendly way. Further down the road, a different deep learning-style program could check satellite photos of different properties and automatically tag homes by roof geometry.

This roof info might start out as a major ingredient in how the machine learning program sorts its list—and, in one of Einstein’s nifty design flourishes, users can click to reveal which factors shaped each priority scoring. If users are going to trust the tool, that kind of transparency helps. But what happens when all the sales reps have learned to ignore the folks whose roofs are flat?

As Salesforce President of Technology Srini Tallapragada explains, “At a certain point, a column of data can become useless—it becomes a best practice, so it loses predictive value. The model has to keep changing.”
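
The click-to-reveal factor display mentioned above isn’t publicly documented either, but for a linear model like the earlier sketch the idea reduces to arithmetic: each encoded feature’s contribution to a lead’s score is its value times its coefficient, and the largest contributions are what you would surface to the rep. A hypothetical illustration, reusing the pipeline from the previous sketch and assuming a scikit-learn version new enough to provide get_feature_names_out:

import numpy as np

def top_factors(pipeline, lead_row, k=3):
    # Return the k encoded features that pushed this lead's score up the most.
    prep = pipeline.named_steps["prep"]
    clf = pipeline.named_steps["clf"]
    x = prep.transform(lead_row)                      # 1 x n encoded feature vector
    x = x.toarray().ravel() if hasattr(x, "toarray") else np.ravel(x)
    contributions = x * clf.coef_.ravel()             # per-feature contribution to the logit
    names = prep.get_feature_names_out()
    top = np.argsort(contributions)[::-1][:k]
    return [(names[i], round(float(contributions[i]), 3)) for i in top]

# e.g. [('cat__industry_solar', 1.92), ('num__emails_opened', 0.84), ...]  (made-up output)
print(top_factors(model, open_leads[features].iloc[[0]]))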



That is cool. It’s also pretty standard-issue machine learning tech for 2017. But to get it up and running at your company, you’d need to spend a ton of time and effort building a model that understands what’s important in your business, and then cleaning up your data to get good results. That’s the reason your bank, your insurance company, and your doctor aren’t all using AI already, explains Vitaly Gordon, who left LinkedIn in 2014 to become one of Salesforce's machine learning pioneers. Ironically, for a field predicated on the idea of automating human work, “It’s an access-to-people problem,” Gordon says. These companies probably know more about you than Facebook or Google, but they can’t compete for the data scientists who know how to mine the mountains of information.



Vitaly Gordon, VP of data science and one of the earliest Salesforce AI engineers.
Jason Henry

Right now, the demand for these experts is like the run on internet routing gurus in the ’90s or SEO experts in the 2000s—even crazier than the Bay Area housing market. If you’re the likes of Facebook, Google, or Amazon, you can hire the field’s leading lights and put them to work optimizing algorithms and inventing new ways of serving billions of customers with more artificial intelligence. If you’re anyone else, you’re pretty much screwed. You’ll either pay a fortune to a giant consultancy to custom-build a machine learning system, or you’ll watch from the sidelines. What Salesforce is selling is the idea that if your business is in its hands, you’re going to get the benefit of AI without fighting for that talent to customize it for you. It all comes in the box—or would, if there were a box. (Our metaphors need to keep changing, too.)

Salesforce has 150,000 customers, most of whom have customized the system for their own needs and kinds of data. The Salesforce “multi-tenant” approach means that each company’s data is kept separately, and when a customer adds a custom data field, Salesforce doesn’t even know the nature of the information.

To bolt Einstein onto each of these businesses’ unique software configurations, Salesforce’s AI braintrust realized that it needed a new approach. “There aren’t enough data scientists in the world to build all the predictive models we need,” says John Ball, Salesforce Einstein’s general manager. Just as AT&T realized a century ago that if it stuck with manual operators, everyone in the US would end up sitting at a switchboard, Salesforce saw that automation was inevitable.

This is where Optimus Prime comes in. (Inside Salesforce, developers still use that name.) It’s the system that automates the creation of machine learning models for each Salesforce customer so that data scientists don’t have to spend weeks babysitting each new model as it is born and trained to deliver good answers. Optimus Prime is, in a sense, an AI that builds AIs—and a tool whose recursive nature is both beautiful and unsettling.



John Ball, general manager for Salesforce Einstein.
Jason Henry

“Normally a data scientist studying one problem might take several weeks to a month to come up with a good model for a problem,” explains Shubha Nabar, Salesforce’s director of data science. “With this automated layer, it takes just a couple of hours.”
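
Salesforce hasn’t said how Optimus Prime works internally, but the general shape of such an automated layer (try a menu of candidate models against a customer’s data, cross-validate, keep the winner) is easy to sketch. This is the generic AutoML pattern, not a description of the real system, and it reuses the hypothetical lead data and preprocessing from the earlier sketches.

from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

def auto_build_model(X, y, preprocess):
    # Fit several candidate models with cross-validation and return the best pipeline.
    candidates = {
        "logreg": LogisticRegression(max_iter=1000),
        "forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "gbm": GradientBoostingClassifier(random_state=0),
    }
    best_name, best_score, best_pipe = None, -1.0, None
    for name, clf in candidates.items():
        pipe = Pipeline([("prep", preprocess), ("clf", clf)])
        score = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
        if score > best_score:
            best_name, best_score, best_pipe = name, score, pipe
    print(f"selected {best_name} (cross-validated AUC {best_score:.3f})")
    return best_pipe.fit(X, y)

best_model = auto_build_model(history[features], history["closed"], preprocess)

The real system presumably also automates feature handling, hyperparameter search, and retraining as data drifts; the loop above only shows why no data scientist has to babysit each customer's model by hand.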

Today, the fruits of Optimus Prime are chiefly available in neatly packaged features of the Salesforce cloud applications that customers can turn on by checking a box. Next, Salesforce plans to open up the technology by steps. First, users will be able to extend Einstein’s capabilities more widely to more of their customized data. Then, a point-and-click interface will let non-programmers build custom apps for users. “We want to allow an admin—not a data scientist, not even really a developer—to predict any field in any object,” Ball says. Even further down the line, Salesforce intends to expose more of the guts of its machine learning system for external developers to play with. At that point, it will be competing directly with all the AI heavyweights, like Google and Microsoft, to dominate the business market.



Salesforce recently released research that claims AI’s impact through CRM software alone will add over $1 trillion to GDPs around the globe and create 800,000 new jobs. The company has gone all-in on AI since it first announced Einstein in 2016. Benioff said then, “AI is the next platform—all future applications, all future capabilities for all companies will be built on AI.”

Benioff even told analysts on a quarterly earnings call that he uses Einstein at weekly executive meetings to forecast results and settle arguments: “I will literally turn to Einstein in the meeting and say, ‘OK, Einstein, you’ve heard all of this, now what do you think?’ And Einstein will give me the over and under on the quarter and show me where we’re strong and where we’re weak, and sometimes it will point out a specific executive, which it has done in the last three quarters, and say that this executive is somebody who needs specific attention.”

That may sound a little Big Brother-ish, but everyone I spoke with at Salesforce is careful to keep the AI talk friendly. Einstein isn’t after your job—it just wants to help you work smarter. Nonetheless, the AI universe is mined with vague fears about the future of work and questions about bias, privacy, and data integrity. As Salesforce expands its AI projects, it will inevitably tangle with them.

One of Salesforce’s advantages in attracting talent in the field is that, under Benioff’s command, the company has built a strong reputation for having a social conscience. It’s the anti-Uber. That was one of the factors that mattered to Richard Socher, an AI hotshot whose company, MetaMind, was acquired by Salesforce a year ago.

Socher, who now leads Salesforce’s research efforts, specializes in deep learning techniques that help software understand natural language and images. He teaches a wildly popular AI course at Stanford, and co-publishes papers with titles like “Pointer Sentinel Mixture Models” and “Your TL;DR by an AI: A Deep Reinforced Model for Abstractive Summarization.”





Richard Socher, who heads Salesforce AI research.
Jason Henry

With his unruly straw mop of hair, Socher still looks like the grad student he was not that long ago—and he has a youthful enthusiasm for testing the limits of what we think AI can handle.

“I want to be able to have more and more of a real conversation in the future with a system that clearly has tackled a diverse range of intelligent capabilities,” he says. For now, that means building learning routines that can “read” arbitrary paragraphs and then correctly answer questions about them, and exploring new methods of building AI systems that can do more than one thing at a time.

As the technology grows more powerful, Socher says, we can’t put off the conversations about its ethics. “AI is only as good as the data it gets,” he says. “If your data has certain suboptimal human biases in it, your AI will pick it up. And then you automate it, and it makes that same mistake hundreds of millions of times. You need to be very careful.”
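
Socher’s point can be checked mechanically before a model ships. One crude, common screen is to compare a model’s positive-prediction rate across groups defined by a sensitive attribute; the data and column names below are hypothetical, and this is only one of many possible fairness checks.

import pandas as pd

def selection_rates(df, prediction_col, group_col):
    # Positive-prediction rate per group, plus the ratio of the lowest to the highest.
    rates = df.groupby(group_col)[prediction_col].mean().sort_values()
    ratio = rates.iloc[0] / rates.iloc[-1] if rates.iloc[-1] > 0 else float("nan")
    return rates, ratio

scored = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 0, 0, 0],                  # the model's binary decision
    "region":   ["a", "a", "a", "b", "b", "b", "b", "a"],  # stand-in sensitive attribute
})
rates, ratio = selection_rates(scored, "approved", "region")
print(rates)
print(f"lowest/highest selection-rate ratio: {ratio:.2f} (values far below 1.0 deserve a closer look)")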



Sales work can be painfully hard, and salespeople have to stay positive even when their work gets ugly and desperate. Salesforce has thrived for two decades by embracing that optimism. Where Google’s AI efforts are all about perfecting information access and Facebook’s aim to connect people more intelligently, Salesforce wants to make the world a better place by helping customers smarten up their work days.

At times, Salesforce’s portrait of a future powered by its AI sounds too good to be true. For a down-to-earth assessment of the company’s plans from an outsider, I turned to Pedro Domingos, an AI expert at the University of Washington and author of The Master Algorithm.

Domingos says Salesforce is “a bit of a latecomer” to the field and may find it harder than it expects to integrate AI fully at deeper levels of its products. But he thinks the company is on the right track: At this stage in AI’s evolution, there’s more to be gained from putting basic tools in more people’s hands than from squeezing an extra few percentages of efficiency from an algorithm.

Domingos also says that Salesforce’s relatively tardy entrance to AI—compared with, say, IBM or Google—shouldn’t necessarily hold it back. “They’re still a small player in this space. But other companies came from behind and got pretty far pretty quickly—look at Facebook. Just because you’re a late starter doesn’t mean that in a few years you can’t become a leader.”

Salesforce faces a crowded field in the fight to put AI tools to work on behalf of the warm-handshake crowd. Competitors include giants like Microsoft (with its LinkedIn Sales Navigator) and Oracle, as well as smaller rivals like SugarCRM and startups like Conversica (the latter of which uses AI to automate conversations with incoming sales leads). If Salesforce does succeed in moving to the front rank of today’s crazy corporate AI race, company insiders point to one advantage as its not-so-secret weapon: its well-tended warehouses of consistently labeled and organized customer data.

Those much-competed-for, highly paid data scientists everyone is trying to hire? They spend enormous amounts of time today “preparing data,” which means figuring out how to prep piles of information so that it can be digested by machine learning programs and produce good results. There is a whole lot of grooming and massaging of information that has to take place before most AI systems can even begin to start making predictions.
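
That grooming is unglamorous but concrete. A generic sketch of the kind of preparation that precedes any model training, with hypothetical file and column names:

import pandas as pd

def prepare(df: pd.DataFrame) -> pd.DataFrame:
    # Typical pre-modeling cleanup: dedupe, normalize types, handle missing values.
    df = df.drop_duplicates(subset=["account_id"])                  # hypothetical key column
    df["created_at"] = pd.to_datetime(df["created_at"], errors="coerce")
    df["employee_count"] = pd.to_numeric(df["employee_count"], errors="coerce")
    df["industry"] = df["industry"].str.strip().str.lower().fillna("unknown")
    df["employee_count"] = df["employee_count"].fillna(df["employee_count"].median())
    return df.dropna(subset=["created_at"])                         # drop rows with unusable dates

clean = prepare(pd.read_csv("raw_accounts.csv"))                    # hypothetical file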

This represents an ironic breakdown in the ethos of automation that underlies AI. Too often today, Domingos points out, the IBMs and Accentures of the world are just throwing armies of experts at their customers’ problems. “What they do at the end of the day is, they actually have human labor do this stuff,” he says. “That makes money but is not scalable.”

But Salesforce customers have all already entered their data into a single software platform, even if many of them have added their own custom flourishes. “People put everything in there,” says Salesforce technology president Tallapragada. Salesforce doesn’t look at the content of its customers’ data, but it does know how a lot of it is organized. “The Salesforce advantage is the metadata. That lets us automate stuff,” says data science director Nabar.

For all the utopian dreams and Skynet nightmares that today’s advances in artificial intelligence provoke, the winners and losers in this transition will probably be determined by what computer scientists call “data hygiene.” In other words: No matter how smart our programs get in the AI future, tidiness still counts. Clean up after your work, and remember to wash your files before you leave.

Let others conquer Go and solve knotty theorems. Salesforce could achieve victory through neat power.



wired.com



From: Sam  8/22/2017 2:25:24 PM
1 Recommendation   of 1405
 
Microsoft No Longer a PC Company with Deals Like Halliburton, Says Credit Suisse

Microsoft's "no longer your father's Microsoft," writes Michael Nemeroff of Credit Suisse, citing deals such as today's win for Microsoft to be the cloud provider for Halliburton's oil and gas exploration and production efforts.

By Tiernan Ray
Aug. 22, 2017 12:10 p.m. ET

barrons.com

Shares of Microsoft (MSFT) are up 88 cents, or 1.2%, at $73.03, after the company this morning announced a deal with oil and gas giant Halliburton (HAL), in which Microsoft’s “Azure” cloud computing service will host the latter’s “iEnergy” service for exploration and production.

In response, Credit Suisse’s Michael Nemeroff reiterates an Outperform rating on Microsoft shares, writing that the company is moving away from its “legacy tools and cyclical PC business.”

Among details of the collaboration, Microsoft said it will "allow the companies to apply voice and image recognition, video processing and AR/Virtual Reality to create a digital representation of a physical asset using Microsoft’s HoloLens and Surface devices,” after gathering data from sensors placed on infrastructure.

continues at the link



From: Glenn Petersen  8/26/2017 8:58:33 PM
   of 1405
 
AI Is Taking Over the Cloud

Cloud storage company Box is using Google’s vision technology to make its service considerably smarter.

by Will Knight
MIT Technology Review
August 17, 2017



Box CEO Aaron Levie.

The cloud is getting smarter by the minute. In fact, it will soon know more about the photos you’ve uploaded than you do.

Cloud storage company Box announced today that it is adding computer-vision technology from Google to its platform. Users will be able to search through photos, images, and other documents using their visual components, instead of by file name or tag. “As more and more data goes into the cloud, we’re seeing they need more powerful ways to organize and understand their content,” says CEO Aaron Levie.

Computer-vision technology has improved remarkably over the past few years thanks to a machine-learning approach known as deep learning (see “10 Breakthrough Technologies 2013: Deep Learning”). A deep neural network—loosely inspired by the way neurons process and store information—can learn to recognize categories of objects, such as a “red sweater” or a “pickup truck.” Ongoing research, including work from Google’s researchers, is improving the ability of algorithms to describe what’s happening in images.

Box’s computer-vision feature could be a good way for companies to dip their toes into AI and machine learning. It removes the need to manually annotate thousands of images, and it will make it possible to search through older files in ways that might not have occurred to anyone during tagging. Levie says one company testing the technology is using it to search images for particular people.
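
The article doesn’t describe the integration’s plumbing, but Google’s Cloud Vision API is publicly documented, and the label detection it performs looks roughly like this from the Python client. Treat the exact client surface as an assumption (it has changed across library versions), and the file name is hypothetical.

# Minimal label-detection sketch against the Google Cloud Vision API.
# Assumes the google-cloud-vision client library and application-default credentials.
from google.cloud import vision

def label_image(path, min_score=0.7):
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    # Each label carries a description ("pickup truck") and a confidence score.
    return [(label.description, label.score)
            for label in response.label_annotations if label.score >= min_score]

print(label_image("warehouse_photo.jpg"))   # hypothetical file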

The announcement is the latest sign that cloud computing is being reinvented through machine learning and artificial intelligence. AI is already the weapon of choice in the battle to dominate cloud computing, with companies that offer on-demand computing—Google, Amazon, and Microsoft among them—all increasingly touting added machine-learning features.

Fei-Fei Li, chief scientist of Google Cloud and a professor at Stanford University who specializes in computer vision and machine learning, said in a statement that the announcement shows how broadly available AI technology is becoming. “Ultimately it will democratize AI for more people and businesses,” Li said.

Levie says his company is looking at adding machine learning for other types of content. This could include audio and video, but also text, for which an algorithm could add semantic analysis, making it possible to search by the meaning of a document rather than specific keywords.

It’s also significant that Box is relying on computer vision from Google, rather than technology developed in-house. This reflects the fact that a few big players have come to dominate the more fundamental aspects of AI like computer vision, voice recognition, and natural-language processing. “If you think about the strength that Google has in image recognition, it would just be strategically unwise for us to try to compete with them,” Levie says. He says his company’s researchers are exploring ways of applying machine learning to the behavior of its customers. This process might reveal ways to optimize the Box service, or help identify tasks that could be ripe for automation, Levie says.

Google’s Cloud Vision API can recognize many thousands of everyday objects in images. However, some customers might need the ability to recognize and search through specific types of images, for example medical or architectural images. So Box’s researchers are exploring ways for customers to train their own vision systems if necessary.

technologyreview.com



From: Glenn Petersen  8/29/2017 10:02:57 AM
2 Recommendations   of 1405
 
More of an evolution than a complete disruption:

It's Time to Think Beyond Cloud Computing

Jeremy Hsu
backchannel
08.23.17 06:50 am



Fasten your harnesses, because the era of cloud computing’s giant data centers is about to be rear-ended by the age of self-driving cars. Here’s the problem: When a self-driving car has to make snap decisions, it needs answers fast. Even slight delays in updating road and weather conditions could mean longer travel times or dangerous errors. But those smart vehicles of the near-future don’t quite have the huge computing power to process the data necessary to avoid collisions, chat with nearby vehicles about optimizing traffic flow, and find the best routes that avoid gridlocked or washed-out roads. The logical source of that power lies in the massive server farms where hundreds of thousands of processors can churn out solutions. But that won’t work if the vehicles have to wait the 100 milliseconds or so it usually takes for information to travel each way to and from distant data centers. Cars, after all, move fast.

That problem from the frontier of technology is why many tech leaders foresee the need for a new “edge computing” network—one that turns the logic of today’s cloud inside out. Today the $247 billion cloud computing industry funnels everything through massive centralized data centers operated by giants like Amazon, Microsoft, and Google. That’s been a smart model for scaling up web search and social networks, as well as streaming media to billions of users. But it’s not so smart for latency-intolerant applications like autonomous cars or mobile mixed reality.

“It’s a foregone conclusion that giant, centralized server farms that take up 19 city blocks of power are just not going to work everywhere,” says Zachary Smith, a double-bass player and Juilliard School graduate who is the CEO and cofounder of a New York City startup called Packet. Smith is among those who believe that the solution lies in seeding the landscape with smaller server outposts—those edge networks—that would widely distribute processing power in order to speed its results to client devices, like those cars, that can’t tolerate delay.



Packet’s scattered micro datacenters are nothing like the sprawling facilities operated by Amazon and Google, which can contain tens of thousands of servers and squat outside major cities in suburbs, small towns, or rural areas, thanks to their huge physical footprints and energy appetites. Packet’s centers often contain just a few server racks—but the company promises customers in major cities speedy access to raw computing power, with average delays of just 10 to 15 milliseconds (an improvement of roughly a factor of ten). That kind of speed is on the “must have” lists of companies and developers hoping to stream virtual reality and augmented reality experiences to smartphones, for example. Such experiences rely upon a neurological process—the vestibulo-ocular reflex—that coordinates eye and head movements. It occurs within seven milliseconds, and if your device takes 10 times that long to hear back from a server, forget about suspension of disbelief.
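
The arithmetic behind that claim is simple enough to write down: compare each round trip against the roughly 7-millisecond reflex window the article cites. The figures below are just the ones quoted in the piece.

# Round-trip latencies quoted in the article, checked against the ~7 ms reflex window.
VOR_BUDGET_MS = 7
round_trips_ms = {"centralized cloud data center": 100, "Packet-style metro edge": 12.5}

for name, rtt in round_trips_ms.items():
    print(f"{name}: {rtt} ms round trip, {rtt / VOR_BUDGET_MS:.1f}x the {VOR_BUDGET_MS} ms budget")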

Immersive experiences are just the start of this new kind of need for speed. Everywhere you look, our autonomously driving, drone-clogged, robot-operated future needs to shave more milliseconds off its network-roundtrip clock. For smart vehicles alone, Toyota noted that the amount of data flowing between vehicles and cloud computing services is estimated to reach 10 exabytes per month by 2025.

Cloud computing giants haven’t ignored the lag problem. In May, Microsoft announced the testing of its new Azure IoT Edge service, intended to push some cloud computing functions onto developers’ own devices. Barely a month later, Amazon Web Services opened up general access to AWS Greengrass software that similarly extends some cloud-style services to devices running on local networks. Still, these services require customers to operate hardware on their own. Customers who are used to handing that whole business off to a cloud provider may view that as a backwards step.

US telecom companies are also seeing their build-out of new 5G networks—which should eventually support faster mobile data speeds—as a chance to cut down on lag time. As the service providers expand their networks of cell towers and base stations, they could seize the opportunity to add server power to the new locations. In July, AT&T announced plans to build a mobile edge computing network based on 5G, with the goal of reaching “single-digit millisecond latency.” Theoretically, data would only need to travel a few miles between customers and the nearest cell tower or central office, instead of hundreds of miles to reach a cloud data center.

“Our network consists of over 5,000 central offices, over 65,000 cell towers, and even several hundred thousand distribution points beyond that, reaching into all the neighborhoods we serve,” says Andre Fuetsch, CTO at AT&T. “All of a sudden, all those physical locations become candidates for compute.”

AT&T claims it has a head start on rival telecoms because of its “network virtualization initiative,” which includes the software capability to automatically juggle workloads and make good use of idle resources in the mobile network, according to Fuetsch. It’s similar to how big data centers use virtualization to spread out a customer’s data processing workload across multiple computer servers.

Meanwhile, companies such as Packet might be able to piggyback their own machines onto the new facilities, too. ”I think we’re at this time where a huge amount of investment is going into mobile networks over the next two to three years,” Packet’s Smith says. “So it’s a good time to say ‘Why not tack on some compute?’” (Packet’s own funding comes in part from the giant Japanese telecom and internet conglomerate Softbank, which invested $9.4 million in 2016.) In July 2017, Packet announced its expansion to Ashburn, Atlanta, Chicago, Dallas, Los Angeles, and Seattle, along with new international locations in Frankfurt, Toronto, Hong Kong, Singapore, and Sydney.

Packet is far from the only startup making claims on the edge. Austin-based Vapor IO has already begun building its own micro data centers alongside existing cell towers. In June, the startup announced its “Project Volutus” initiative, which includes a partnership with Crown Castle, the largest US provider of shared wireless infrastructure (and a Vapor IO investor). That enables Vapor IO to take advantage of Crown Castle’s existing network of 40,000 cell towers and 60,000 miles of fiber optic lines in metropolitan areas. The startup has been developing automated software to remotely operate and monitor micro data centers to ensure that customers don’t experience interruptions in service if some computer servers go down, says Cole Crawford, Vapor IO’s founder and CEO.



Don’t look for the edge to shut down all those data centers in Oregon, North Carolina, and other rural outposts: Our era’s digital cathedrals are not vanishing anytime soon. Edge computing’s vision of having “thousands of small, regional and micro-regional data centers that are integrated into the last mile networks” is actually a “natural extension of today’s centralized cloud,” Crawford says. In fact, the cloud computing industry has extended its tentacles toward the edge with content delivery networks such as Akamai, Cloudflare, and Amazon CloudFront that already use “edge locations” to speed up delivery of music and video streaming.

Nonetheless, the remote computing industry stands on the cusp of a “back to the future” moment, according to Peter Levine, general partner at the venture capital firm Andreessen Horowitz. In a 2016 video presentation, Levine highlighted how the pre-2000 internet once relied upon a decentralized network of PCs and client servers. Next, the centralized network of the modern cloud computing industry really took off, starting around 2005. Now, demand for edge computing is pushing development of decentralized networks once again (even as the public cloud computing industry’s growth is expected to peak at 18 percent this year, before starting to taper off).

That kind of abstract shift is already showing up, unlocking experiences that could only exist with help from the edge. Hatch, a spinoff company from Angry Birds developer Rovio, has begun rolling out a subscription game streaming service that allows smartphone customers to instantly begin playing without waiting on downloads. The service offers low-latency multiplayer and social gaming features such as sharing gameplay via Twitch-style live-streaming. Hatch has been cagey about the technology it developed to slash the number of data-processing steps in streaming games, other than saying it eliminates the need for video compression and can do mobile game streaming at 60 frames per second. But when it came to figuring out how to transmit and receive all that data without latency wrecking the experience, Hatch teamed up with—guess who—Packet.

“We are one of the first consumer-facing use cases for edge computing,” says Juhani Honkala, founder and CEO of Hatch. “But I believe there will be other use cases that can benefit from low latency, such as AR/VR, self-driving cars, and robotics.”

Of course, most Hatch customers will not know or care about how those micro datacenters allow them to instantly play games with friends. The same blissful ignorance will likely surround most people who stream augmented-reality experiences on their smartphones while riding in self-driving cars 10 years from now. All of us will gradually come to expect new computer-driven experiences to be made available anywhere instantly—as if by magic. But in this case, magic is just another name for putting the right computer in the right place at the right time.

“There is so much more that people can do,” says Packet’s Smith, “than stare at their smartphones and wait for downloads to happen.” We want our computation now. And the edge is the way we’ll get it.

wired.com




To: Glenn Petersen who wrote (1397)  8/29/2017 11:41:43 PM
From: Sam
   of 1405
 
Some fallout from Amazon's WFM takeover--

Target is plotting a big move away from AWS as Amazon takes over retail
  • Target is moving quickly to pull away from AWS through the rest of this year.
  • Microsoft, Google and Oracle are all pursuing the cloud business of big retailers.
  • AWS still has dominant market share.
Christina Farr | Ari Levy
Published 8 Hours Ago | Updated 5 Hours Ago

cnbc.com



To: Sam who wrote (1398)  8/30/2017 6:42:29 AM
From: Glenn Petersen
1 Recommendation   of 1405
 
Amazon has a target on its back.

Another anti-Amazon alliance announced yesterday:

Google and VMware are teaming up with a $2.8 billion startup to get an edge in the cloud wars with Amazon

businessinsider.com






From: FUBHO  9/18/2017 9:27:21 PM
   of 1405
 
infoq.com

Using the new Web App for Containers capability, developers are able to pull container images from GitHub, Docker Hub, or a private Azure Container Registry, and Web App for Containers will deploy the containerized app with your preferred dependencies to production in seconds. The platform automatically takes care of OS patching, capacity provisioning, and load balancing.




From: FUBHO  9/19/2017 4:39:13 PM
1 Recommendation   of 1405
 
IBM Launches Its Own Shippable Cloud Data Migration Device





Hardened storage device offers 120 TB and uses AES 256-bit encryption


One of the barriers for enterprises storing data in the cloud is data migration, a process that has traditionally been slow and costly, hindered by network limitations. IBM wants to remove this barrier for its customers with a new cloud migration solution designed for moving massive amounts of data to the cloud.

IBM Cloud Mass Data Migration is a shippable storage device that offers 120 TB of capacity and uses AES 256-bit encryption. The device also uses RAID-6 to ensure data integrity and is shock-proof. It is priced at a flat rate that includes overnight round-trip shipping.

The device is about the size of a suitcase and has wheels so it can be easily moved around a data center, said Michael Fork, distinguished engineer and director of cloud infrastructure for IBM Watson and Cloud Platform. Fork said the solution allows customers to migrate 120 TB in seven days.

“When you actually look at the networking aspects of this, for example if you were to transfer 120TB over a 100 Mbps internet connection, that would take 100 or more days,” he said.
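
Fork’s comparison checks out with back-of-the-envelope math: 120 TB pushed through a 100 Mbps link takes on the order of 111 days of continuous transfer before any protocol overhead, versus the seven days he quotes for the shipped device.

# Ideal (zero-overhead) transfer time for 120 TB over a 100 Mbps link.
terabytes = 120
link_mbps = 100

bits = terabytes * 1e12 * 8                 # treating 1 TB as 10^12 bytes
seconds = bits / (link_mbps * 1e6)
print(f"{seconds / 86400:.0f} days")        # about 111 days, consistent with "100 or more days"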


Similar options on the market include the AWS Snowball Edge, which was launched last year and offers 100 TB of usable storage capacity. In June, Google introduced Transfer Appliance, which offers up to 480 TB in 4U or 100 TB in 2U of raw data capacity. Google has broken down how long data transfer can take over different connection speeds.

“Previously we supported two main transfer methods. One was an IBM solution called IBM Data Transfer service, and this allows you to ship us a USB hard drive or CD/DVD, and so you could migrate in up to 10 TBs of data pretty easily using that service,” Fork said. “The other solution IBM supports is through IBM Aspera, a network-based transfer.”

IBM Cloud Mass Data Migration is designed for any customer that has large amounts of data to migrate to IBM Cloud, Fork said, pointing to customers who move large SAP datasets or datasets for use with IBM Watson or other cognitive services.

“VMware customers are bringing to IBM Cloud large amounts of data, VMDKs, machine images, they need a fast and efficient way to move large amounts of those,” he said.

Beyond Lights-Out: Future Data Centers Will Be Human-Free
A new generation of data centers will be optimized for extreme efficiency, not for human access or comfort.
Critical Thinking, a weekly column on innovation in data center infrastructure.

The idea of a “lights-out” data center is not new, but it is evolving. Operators such as Hewlett Packard Enterprise and AOL have been long-term proponents of remote monitoring and management to reduce, or entirely replace, the need for dedicated on-site staff. The most well-known current advocate is probably colocation provider EdgeConneX, which has integrated a lights-out approach into the fabric of its business.

However, despite the efficiency benefits, lights-out, or “dark,” sites are still viewed with skepticism in some quarters; not having staff readily on hand to deal with outages is deemed just too high-risk. Data center certification body Uptime Institute, for example, recommends that one to two qualified staff be on-site at all times to support the safe operation of a Tier III or IV facility.

But while lights-out may be a niche option now, developments in remote monitoring, analytics, AI, and robotics could eventually see it taken much further.


These technologies combined with the elimination of all concessions to human comfort will enable ever more efficient and available data centers, some experts argue. Technology analyst firm 451 Research recently coined the phrase “Datacenter as a Machine” (subscription required) to define unstaffed facilities that are primarily designed, built, and operated as units of IT rather than buildings. “As data centers become more complex, with tighter software-controlled integration between components, they will increasingly be viewed as complex machines rather than real estate,” the analyst group argues.

A facility designed and optimized exclusively for IT, rather than human operators, could enjoy a range of advantages over more conventional sites:

Improved cooling efficiency: There is good evidence that facilities could be operated at higher temperatures and humidity without impacting the reliability and performance of IT equipment. Progressive operators have made efforts to move into the upper reaches of ASHRAE’s recommended, or even allowable, temperature ranges. But the approach isn’t more pervasive due in part to its impact on human comfort. IT equipment may be functional at 80F and up, but it’s not a pleasant working environment for staff. Other highly efficient forms of cooling could make things even more uncomfortable. For example, close-coupled cooling technologies, such as direct liquid immersion, capture more than 90 percent of the IT heat load in a dielectric fluid but make no concession for the human operator. For the technology to become widely deployed in conventional sites additional, inefficient, perimeter cooling would be required in some locations just to keep the operators cool.

Better capacity management: Everything from rack height to access-aisle width is designed to make it easier for staff to install and maintain equipment rather than to optimize for efficiency. But if this space requirement was eliminated, equipment (power and cooling permitting) could be fitted into a much smaller footprint with, for example, potentially much higher, robot-accessible racks.

Reduced downtime and improved safety: According to a 2016 study by the Ponemon Institute, human error was the second-highest cause (behind power chain failures) of data center downtime. Electrocution – via arc-flash or other causes – also remains a real and present threat without the correct safety precautions. Use of hypoxic fire suppression – lowering oxygen levels – also has benefits for fire safety but again makes for a difficult working environment. A facility that was essentially off-limits to all but periodic or emergency access by qualified specialists could reduce the potential for human error and minimize the risk of injury to inexperienced staff.

But if on-site staff were effectively designed out of facilities, who or what would replace them? The kind of pervasive remote monitoring platforms already used at lights-out sites -- such as EdgeConneX’s edgeOS -- would likely play an instrumental role. Emerging tools, such as data center management as a service (DMaaS), which is effectively cloud-based data center infrastructure management, or DCIM, software – could also enable suppliers to take remote control (including predictive maintenance) of specific equipment or even an entire site. Eventual integration with AI/machine learning could also lead to more IT and facilities tasks being automated and self-regulated. Robotics is also likely to play a greater role in future data center management. Indeed, if facilities are designed to optimize space, then so-called dexterous robots may be the only way to access some parts of the site.

But despite the potential, a number of impediments will need to be overcome before unstaffed data centers become widely adopted. The biggest of these is obviously the perception that such designs would introduce additional risk. As such, early adopters would probably be limited to companies that are already comfortable with some form of lights-out approach. Facilitating technologies, such as DMaaS, AI-driven DCIM, and advanced robotics, are also still very nascent.

But there are still good reasons to think that, in specific use cases, unstaffed sites will eventually become the norm. For example, new micro-data center form factors to support edge computing are expected to proliferate in the next five to ten years and are likely to be monitored remotely and only require periodic visits from specialist maintenance staff.

The prognosis doesn’t necessarily have to be all bad for facilities staff. To be sure, there will be fewer in-house positions in the future, but specialist third-party facilities management services providers – capable of emergency or periodic visits -- could expand headcount to meet the expected growth in new colocation and cloud capacity.

Ironic as it may sound, the future looks rather bright for the next generation of lights-out data centers.



From: FUBHO  10/3/2017 3:38:39 PM
   of 1405
 

1 Million Container Instances in 2 Minutes Draws Rare Applause

sdxcentral.com



October 2, 2017
11:09 am PT
ORLANDO, Florida – Big numbers drew applause at what are typically rather staid affairs during the Microsoft Ignite event last week.

During a panel session entitled: “Orchestrating 1 million containers with Azure Service Fabric,” Mani Ramaswamy, principal program manager at Microsoft, did indeed show the creation and orchestration of one million containers. Even more impressive was that the demonstration took less than two minutes to complete.


Though this drew audience applause during what are typically sleepy afternoon sessions on the last real day of the conference, Ramaswamy seemed to want a bit more.

“I expected dancing in the aisles,” Ramaswamy joked (or at least it seemed like he was joking). He added that the more impressive part of the platform was that it was able to hold the reliability and availability of the instances at hyperscale.

“You never again have to worry about whether the platform can meet scale demands,” he said. “It’s the application that you have to worry about, not the platform.”

A container instance is a single container that is designed to start within seconds and can be billed by the provider in second increments. That billing typically includes the cost of turning up an instance, and charges for the processing and memory needed to run the instance.
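
Per-second billing is easiest to see with a worked example. The rates below are hypothetical placeholders, not any provider’s actual prices; the point is only that the bill scales with instance lifetime, CPU, and memory.

# Hypothetical per-second container-instance cost; the rates are made-up placeholders.
lifetime_seconds = 90                 # includes the seconds spent starting the instance
vcpus = 1
memory_gb = 1.5
rate_per_vcpu_second = 0.0000125      # hypothetical $/vCPU-second
rate_per_gb_second = 0.0000013        # hypothetical $/GB-second

cost = lifetime_seconds * (vcpus * rate_per_vcpu_second + memory_gb * rate_per_gb_second)
print(f"${cost:.6f} for a {lifetime_seconds}-second run")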

Containers can run with a public or private IP address, with the former able to support consumer services accessed via the Internet, and the latter typically used for internal processes.

Ramaswamy said some of Microsoft’s competitors have been able to show public demonstrations of “a few hundred thousand” container instances created. Those rivals would seem to include Amazon, which has its ECS container instance.

The demonstration was the crescendo to Ramaswamy’s presentation on the flexibility and capabilities of Microsoft’s Azure Service Fabric.

Microsoft, during the show, launched general availability of its Azure Service Fabric on Linux. The product is a platform-as-a-service (PaaS) that supports running containerized applications on Service Fabric for Windows Server and Linux.

Developers can manage container images, allocate resources, run service discovery, and tap insight from Operations Management Suite (OMS) integration. This work can then be ported between Windows Server and Linux without needing to alter code.


While the product can support both Windows and Linux, it can’t support both at the same time. Ramaswamy said Microsoft was looking to add that form of support in the coming months.

Microsoft announced initial general availability of Azure Service Fabric last year.



From: The Ox  10/5/2017 8:07:11 AM
   of 1405
 

m.eet.com

