
Technology Stocks: Cloud, edge and decentralized computing


From: Glenn Petersen 12/10/2020 11:55:17 PM
 
Amazon Wants to Train 29 Million People to Work in the Cloud

New programs seek to help people from Montana to Nigeria attain roles ranging from tech support to machine learning

By Chip Cutter
Wall Street Journal
Updated Dec. 10, 2020 3:41 pm ET

Amazon.com Inc. announced an effort Thursday aimed at helping 29 million people world-wide retrain by 2025, giving them new skills for cloud-computing roles as the pandemic upends many careers.

The online giant committed $700 million last year to reskilling 100,000 of its own workers in the U.S. The new effort will build on existing programs and include new ones in partnership with nonprofits, schools and others.

Amazon’s latest initiative is geared toward those who aren’t already employed at the company. The idea, it says, is to equip people with the education needed to work in cloud computing at a number of employers seeking to fill high-tech positions. While some participants might find jobs at Amazon, it is more likely they would get hired at other companies, including many that use Amazon Web Services, the online retailer’s cloud division.

The free training could help people prepare for entry-level support positions or help existing engineers broaden their expertise in areas like machine learning or cybersecurity, the company says.

Teresa Carlson, a vice president at Amazon Web Services, says the company hears almost daily from its clients that they can’t find the right people to fill technical jobs as many organizations move their applications and processes to the cloud.

“We need our customers to have the right skills if they’re going to go through a digital transformation,” she said.

Amazon declined to disclose the cost of its new training programs, but improved industry education benefits the company in other ways. It has hired 275,000 full- and part-time employees in the U.S. since the start of the year. Ms. Carlson says that Amazon finds it must retrain some new hires after discovering their technical capabilities are lacking once they are in the door.

“When you spend as much time as we do hiring people and getting the right people on board, it’s kind of frustrating when you bring them on and you’re having to spend another year or more getting their skills up to speed,” she said. “We see it ourselves, so we put these programs into place. We hear it from our customers and our partners, and it’s the right thing to do.”

More sophisticated cloud skills might be crucial to Amazon’s business. The company’s cloud division has become one of its most important profit drivers. It posted $11.6 billion in sales in the quarter that ended Sept. 30, up 29% from a year earlier.

Cloud computing, which was hot before the pandemic, has become even more central to many companies as they speed up their adoption of such digital tools. Amazon’s cloud rivals, including Microsoft Corp. and Google-parent Alphabet Inc., also have seen strong growth in the sector as users embrace their services, as have companies such as Zoom Video Communications Inc. that provide cloud-based products to facilitate remote working and teaching.

Most of Amazon’s courses can be taken remotely, through Amazon itself or via partners that focus on helping people find new careers. Those organizations are located in places ranging from Newark, N.J., to Missoula, Mont., and internationally from Nigeria to Australia.

The content varies widely. One two-day program prepares students to work as entry-level fiber-optic fusion-splicing technicians, an in-demand field that involves testing and installing the delicate cables made up of minuscule glass tubes that power cloud data centers. Another course, called Cloud Practitioner Essentials, covers the basics of the AWS cloud, while other training focuses on more advanced skills, such as machine learning.

The push could help millions of workers navigate career changes without incurring steep debt at a time when many find themselves out of work and burdened by student loans.

A report commissioned by Amazon and conducted by researchers at consulting giant Accenture PLC found that 33 million Americans could double their income, earning a median salary of $35 an hour, by gaining new training in what the authors describe as “opportunity jobs,” or those in industries deemed at low risk of automation that are expected to grow. Many of those positions are in digital fields.

The obstacles to retraining or learning a new skill can be great. Many people in low-paying jobs or out of work might struggle with child care issues or a lack of time to undertake a new program, to say nothing of the “change fatigue” some feel in pursuing new career paths, said Kelly Monahan, a global talent lead researcher at Accenture. Still, digital skills are likely to be a career differentiator.

“The technical side of work is becoming so paramount,” she said.

Jarred Gaines started 2020 working as a fitness director and personal trainer at a Boston area gym. The 35-year-old planned to launch his own fitness facility later in the year, but the pandemic squashed those ambitions. In May, he started a 12-week course through a free, Amazon-supported training program.

Mr. Gaines—who said he still is passionate about fitness but doesn’t miss the 5 a.m. client appointments—says a technical career hadn’t been on his radar. “My experience with tech was mostly upgrading my cellphone for the newest one Apple told me to,” he said.

He hopes to take on increasingly sophisticated cloud and engineering roles, and continues to take community college courses, but he acknowledges a career shift takes additional effort. “Be ready to grind,” he said.

Amazon says it is on track in its own efforts to retrain its workforce. The company trained 15,000 workers in the first year of its upskilling pledge announced in 2019 and says it will meet its goal of reaching 100,000 staffers by 2025. The most popular retraining program internally so far has been Amazon Career Choice, in which the company subsidizes community college courses, setting up employees to eventually leave Amazon. Most courses so far have focused on health care, transportation or technical jobs but can vary regionally, a spokeswoman said.

Write to Chip Cutter at chip.cutter@wsj.com

Copyright ©2020 Dow Jones & Company, Inc. All Rights Reserved.

Appeared in the December 11, 2020, print edition as 'Amazon Offers Plan to Train Millions to Work in the Cloud.'

Story Link



From: Glenn Petersen 2/1/2021 9:34:42 PM
 
Amazon, Alphabet and Salesforce are all investing in a $28 billion company that crunches big data

PUBLISHED MON, FEB 1 2021 1:06 PM EST
UPDATED MON, FEB 1 2021 5:51 PM EST
Jordan Novet @JORDANNOVET
CNBC.com

KEY POINTS

-- Databricks’ software helps companies prepare data for analysis and artificial intelligence models.

-- Amazon has not historically done many late-stage start-up investments.

Databricks, a start-up whose software helps companies quickly process large sets of data and get it ready for analysis, said Monday it has raised $1 billion in fresh cash, including from a few prominent corporate investors.

Amazon Web Services, Alphabet’s CapitalG venture arm and Salesforce Ventures all joined in, according to a statement. Microsoft, which invested in Databricks earlier, is also participating in the new round, the statement said.

The transaction, which values Databricks at $28 billion, shows the top three U.S. cloud providers recognize that the company represents an opportunity similar to Snowflake, another firm with cloud software that helps companies manage data.

Databricks rose to prominence because it helped companies implement a version of Apache Spark, an alternative to the Hadoop technology for storing lots of different kinds of data in massive quantities. It can help clean up data for exploration in data visualization software such as Salesforce-owned Tableau. The Databricks software gives companies a simple way to run this sort of software, without having to worry about configuring and updating it. Databricks is also increasingly helping organizations deploy artificial intelligence models.
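
For readers who haven't touched Spark, here is a minimal PySpark sketch of the kind of data preparation described above -- load raw data, drop bad rows, and write out an analysis-ready copy. The file names are hypothetical, and a managed platform like Databricks would provision the cluster and session this assumes:

    from pyspark.sql import SparkSession

    # Start a Spark session; a managed platform provides a configured one.
    spark = SparkSession.builder.appName("prep-demo").getOrCreate()

    # Load raw events (hypothetical file), drop incomplete and duplicate rows,
    # and write a columnar copy ready for BI tools such as Tableau or for ML.
    raw = spark.read.csv("events.csv", header=True, inferSchema=True)
    clean = raw.dropna().dropDuplicates()
    clean.write.mode("overwrite").parquet("events_clean")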

“We’re 100 percent cloud-native,” Databricks CEO Ali Ghodsi told CNBC in a 2019 interview. That same principle applies to Snowflake, which Salesforce also invested in and which has demonstrated strong revenue growth since its initial public offering last year.

Amazon, the largest cloud provider, did not put money into Snowflake before it went public. Now it’s investing in Databricks at a later stage than it has historically done.

Story Link



From: Glenn Petersen 2/2/2021 8:00:35 PM
 
Google Cloud lost $5.61 billion on $13.06 billion revenue last year

PUBLISHED TUE, FEB 2 2021 4:05 PM EST
UPDATED TUE, FEB 2 2021 5:21 PM EST
Jennifer Elias @JENN_ELIAS
CNBC.com

KEY POINTS

-- It is the first time Google has revealed operating income for its cloud unit.

-- It comes one year after the company began breaking out revenue for the cloud business, which trails Microsoft and Amazon in market position.

-- Wall Street has sought additional financial details surrounding the company’s cloud investments, which have included hiring and acquisition sprees.

Google’s cloud business reported an operating loss of $5.61 billion in 2020 on $13.06 billion in revenue for the year. It’s the first time the company has revealed the operating income metric for its cloud business.

The unit’s losses appear to be growing as the company invests heavily in sales staff. The company said the cloud unit lost $4.65 billion on $8.92 billion in revenue in 2019, and lost $4.35 billion on $5.84 billion in revenue in 2018.

It lost $1.24 billion on revenue of $3.83 billion in Q4 2020.
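
Put side by side, the figures above show the losses widening in dollars even as the operating margin improves with scale. A quick back-of-the-envelope check in Python, using only the numbers quoted in this story:

    # Google Cloud revenue and operating income by year ($B), from the article.
    figures = {2018: (5.84, -4.35), 2019: (8.92, -4.65), 2020: (13.06, -5.61)}
    for year, (revenue, operating_income) in sorted(figures.items()):
        print(year, f"{operating_income / revenue:.0%}")
    # 2018 -74%, 2019 -52%, 2020 -43%: losses grow in dollars, margins improve.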

Alphabet’s latest push to show it’s serious about its cloud unit comes as it tries to diversify revenue, which primarily comes from advertising, a business that showed vulnerability in 2020 — particularly in the second quarter. Google Cloud includes infrastructure and data analytics platforms, collaboration tools such as Google Docs and Sheets, and “other services for enterprise customers.”

Wall Street has been seeking additional financial details surrounding the company’s cloud business, into which Google has pumped resources while remaining a distant third to heavyweights Microsoft and Amazon. Google began breaking out cloud revenue for the first time one year ago.

The company’s past attempts to bolster its cloud unit under CEO Diane Greene, who left in 2018, failed to capture much market share. But, since former Oracle executive Thomas Kurian came to Google to lead its cloud efforts in 2019, the company has gone on hiring and acquisition sprees.

Story Link



From: Glenn Petersen 2/8/2021 7:35:12 AM
 
Follow the CAPEX: Cloud Table Stakes 2020 Retrospective

FEBRUARY 5, 2021 ~ CHARLES FITZGERALD

It is time for our annual bulletin to CAPEX obsessives (previous updates from 2016, 2017, 2018, 2019 as well as earlier installments). The hashtag for CAPEX 2020 is “#bonkers”.

The three hypercloud companies – Amazon, Google, and Microsoft – collectively spent almost $97 billion on CAPEX in 2020, up 32% from 2019’s $73.5 billion. Amazon and Microsoft’s spending again hit new highs (bonkers new highs in Amazon’s case) while Google’s spending declined for the second year in a row. Amazon spent more on CAPEX than any company (not just cloud companies) in 2020 [*].

Standard caveat: The reported numbers are the companies’ total CAPEX spend, not just cloud infrastructure, so they include land, office buildings, campus redevelopments, warehouses, panopticonvenience stores, manufacturing tooling, self-driving cars, delivery trucks, flying machines, satellite constellations, hardware that both is and is not required for quantum computing, and – the dream is admittedly on life support – a Google space elevator. The numbers include finance leases for both Amazon and Microsoft, as well as build-to-suit leases for Amazon (these are debt instruments used to finance specific CAPEX expenditures, namely servers and buildings).




In 2020, Amazon’s CAPEX was up a mere 69% to just shy of an absolutely bonkers $54 billion, Google’s spend declined over 5% to $22.3 billion and Microsoft’s increased 14% to $20.6 billion. Amazon (company slogan: “a penny of free cash flow is a terrible thing to waste on abstract accounting constructs like profits”) is accelerating its spending and may soon achieve orbital escape velocity on the back of its CAPEX trajectory, rendering Jeff Bezos’ personal space program unnecessary.
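
The growth figures above are internally consistent; a quick Python check that backs out the implied 2019 spend from the 2020 totals and growth rates quoted in this post:

    # 2020 CAPEX ($B) and year-over-year growth, from the figures above.
    spend_2020 = {"Amazon": 54.0, "Google": 22.3, "Microsoft": 20.6}
    growth = {"Amazon": 0.69, "Google": -0.05, "Microsoft": 0.14}
    spend_2019 = {c: round(spend_2020[c] / (1 + growth[c]), 1) for c in spend_2020}
    print(spend_2019)                          # {'Amazon': 32.0, 'Google': 23.5, 'Microsoft': 18.1}
    print(round(sum(spend_2020.values()), 1))  # 96.9 -> "almost $97 billion"
    print(round(sum(spend_2020.values()) / 73.5 - 1, 2))  # 0.32 -> up 32% from $73.5B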

It is beyond bonkers that “a company that sells books” spends more on CAPEX than any other company [*]. And the idea that our other two “software companies” have a CAPEX spend in the same league or beyond the biggest automakers, energy companies, semiconductor manufacturers, and telcos in the world is likewise bonkers.

[*] Where “any other company” is defined as the random set I looked up and charted above. Are there other big spenders I’ve missed? I didn’t look at Chinese companies, mostly because I’m lazy, but also because often their investments are non-economic, accounting dodgy, and audits optional, even for companies listed in the US (if only domestic companies had that luxury, but Wall Street still dreams of selling a collateralized debt obligation to each and every person in China).

The three hyperclouds’ cumulative CAPEX spend since 2000 is almost $444 billion, with over $170 billion of that occurring in the last two years. Efforts to find an amusing comparable failed.

As a percentage of revenue, hypercloud CAPEX is clustered between 12 and 14% of revenue, with Amazon edging out Microsoft for the top spot this year.

Azure and Google Cloud (which includes both Google Cloud Platform and whatever they are calling Google Docs this week) have both seen growth accelerate slightly since “a bat out of Wuhan” became the new “butterfly effect”. The AWS growth rate is lower, but AWS grew revenue by over $10 billion in 2020, so its dollar growth alone is bigger than GCP’s.

Amazon #bonkers

By most standards, Amazon had an absolutely magnificent pandemic year (some might say “bonkers”). Revenue growth nearly doubled to 38% from 20% growth the previous year, bringing in over $386 billion. Profits almost doubled. And they spent the aforementioned bonkers CAPEX to support that growth while providing an essential lifeline to hundreds of millions of people in isolation amidst a global pandemic. Not bad.

Amazon’s cash out of pocket for CAPEX exploded: 2020 spending was 238% of the previous year’s, with over $23 billion in incremental spending on hard assets. Debt-funded CAPEX declined 8% (with finance leases — probably AWS servers — down 15%, while the smaller sum of build-to-suit real estate leases increased as the company grew its warehouse space by 50%). Over the past couple of years, Amazon’s CAPEX outlay was split roughly half cash and half debt. In 2020, the cash spend ballooned to 75%. There clearly was a push to spend as much as possible on hard assets (“Quick, buy as many wide-body airplanes as you can before the end of the year!”).

Amazon has always managed for free cash flow, as opposed to profits. Despite the numbers above, the pandemic year was an all-time low when measured against their financial priorities. Even though they stuffed an additional $23 billion into CAPEX, the company was unable to reinvest most of its cash flow into the business and (embarrassingly) reported profits of over $21 billion. One could almost speculate that Bezos’ departure as CEO is not so much his choice, but reflects the board’s disappointment in his dismal 2020 cash flow performance.

At this point, AWS accounts for a small fraction of Amazon’s CAPEX spending, and examining overall Amazon CAPEX – except as an excuse to use the word bonkers – is ever less relevant to cloud infrastructure discussions. The finance leases historically have been the best tell on AWS cloud infrastructure investments, as they reflected server purchases.

If we look at AWS revenue vs. finance leases, we can see they have decoupled, particularly in the last two years. The company probably paid cash in 2020 to convert some of its pandemic windfall into servers, and has less need to use debt to realize its CAPEX ambitions.

Google

Following Amazon last year, Google is now moving to depreciating its servers over four years:

As noted in our earnings press release, we have adjusted the estimated useful lives of servers and certain network equipment starting in 2021. We expect these changes will favorably impact our 2021 operating results by approximately $2.1 billion for assets as of year-end 2020.

That bold bean counter move has a multi-billion-dollar impact on their accounting earnings, but as their CAPEX/revenue number shows, Google has been on a four-year server cycle for a long time. Yet more evidence that accounting can be distracting. CAPEX obsessives care about the gross spend on silicon, glass, steel and concrete, and don’t waste time netting out accounting abstractions like depreciation.

Based on that four-year cycle, we expect Google’s next CAPEX up-cycle in 2022. The company has highlighted that it is investing heavily in Google Cloud, but we don’t see it in the CAPEX numbers. Even as they expand their footprint into more geographies, the infrastructure for search and YouTube is vastly larger. Google’s commentary on Google Cloud emphasizes that most of that investment is in people.

Google (why does the Alphabet conceit still exist?) also stopped breaking out CAPEX for Other Bets this year, as Loon joined Makani and Titan in the aerial wing of the Killed By Google cemetery. Google continues to shed its Googley-ness and will fail to deliver a space elevator, calling into question their entire existence and social contract compliance. I await Congressional hearings on what went wrong with the space elevator.

Microsoft

Microsoft’s CAPEX remains boring, monotonic and inscrutable, as I have not been able to tease apart their spending on Azure, Office 365 and Bing (which may be a distant number two to Google, but search still has huge infrastructure requirements to store all those copies of the entire Web). There was a brief moment of excitement right after the pandemic broke when Microsoft was short on servers due to supply chain disruptions, but that was remedied the next quarter and boringness restored. Hopefully they can do more CAPEX-wise this year to stand out in this annual commentary.

Story Link



From: Glenn Petersen 3/9/2021 8:40:37 PM
 
Coursera registration statement: S-1 (sec.gov)

‘MOOCs Failed, Short Courses Won’

Education-technology company Coursera launched a bid to become a publicly traded company last week, giving industry experts a glimpse at its financial inner workings. The company is losing money, but it might be finding a way to monetize MOOCs.

By Lindsay McKenzie
March 9, 2021



Andrew Ng, left, co-founder and chairman of the board of Coursera, leads the board as the company navigates its initial public offering. Photo: Steve Jennings/Stringer/Getty Images
--------------------------------

Online learning platform Coursera filed an application last week to become a publicly traded company and sell shares on the New York Stock Exchange under the ticker symbol COUR.

The initial public offering was long anticipated by industry analysts but is notable because few education-technology companies have taken the plunge. Most fail to reach the scale of companies such as Chegg and 2U, two publicly traded companies that announced their IPOs eight and seven years ago, respectively.

"The Coursera IPO has been the most anticipated capital event of the last few years among ed-tech prognosticators," said Daniel Pianko, managing director of University Ventures, an investment firm focused on global higher education. "There is no logical buyer of Coursera, so an IPO is the natural way for investors to achieve a return on their investment."

Due to healthy valuations and a network of around 77 million learners worldwide, leaders at Coursera are taking the company through this next step, although the number of shares to be offered -- and their price point -- is yet to be determined.

Coursera was founded almost a decade ago by Stanford University professors Andrew Ng and Daphne Koller with the proclaimed mission of bringing quality online education to the masses. In this moment of economic uncertainty, online providers such as Coursera are more important than ever before, said Ng, who chairs the company's board, in a letter filed with the SEC along with Coursera's IPO disclosures.

"We've seen billions struggle during the pandemic. At school, many learners and instructors were ill-prepared to move learning online. At work, digital acceleration is threatening many jobs as skills rapidly become obsolete," Ng said. "The staggering scale of disruption has underscored the need to modernize the global education system. Leaders tasked with creating a level playing field now recognize that learning online will be a powerful means of providing individuals with the skills they need and promoting social equity."

Although it is still possible to audit many Coursera courses for free, the company has evolved significantly since its early days as a provider of massive open online courses, or MOOCs. The platform’s combination of paid nondegree certificates, stackable degrees and professional credentials has forged a company with an estimated value of between $2.4 billion and $5 billion.

"Wall Street is desperately seeking high-growth, consumer-based businesses like what Coursera has become," Pianko said. "Massive eyeballs with a repeatable, freemium model drives the types of lofty valuations that the Coursera IPO achieves."

Documents filed with the SEC show Coursera posted revenue of $293.5 million in 2020 -- a growth rate of 59 percent over 2019. But the company did not turn a profit, reporting a net loss of $67 million in 2020. Coursera lost more money in 2020 than it did in 2019, when it lost $46.7 million. The company's accumulated deficit since inception stood at $343.6 million at the end of 2020. Its IPO filing indicated it expects to incur losses for the foreseeable future.
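
A quick check shows the filing figures quoted above are internally consistent:

    # Coursera S-1 figures ($M), as quoted in this article.
    rev_2020, growth_rate = 293.5, 0.59
    rev_2019 = rev_2020 / (1 + growth_rate)
    print(round(rev_2019, 1))        # ~184.6 -> implied 2019 revenue
    print(round(67.0 - 46.7, 1))     # 20.3  -> net loss widened by ~$20M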

Many online learning platforms and providers saw usage surge during the pandemic, said Trace Urdan, managing director at investment bank and higher education consulting firm Tyton Partners. Whether those high levels of engagement can be sustained will be a key question for investors considering Coursera shares, he said.

The company's growth over the past decade has been impressive but was not without several pivots in business strategy, Urdan said.

In recent years, Coursera has branched into online program management services for universities, helping institutions to launch and manage online degrees for a share of tuition revenue. That revenue stream was thought by some analysts to be an important part of Coursera's financial future, but the IPO documents reveal that this share of the business is quite a bit smaller than some observers imagined.

Coursera splits its revenue into three different categories, IPO documents show. A consumer segment covers payments and subscriptions made directly by learners to Coursera. An enterprise segment covers businesses and government customers training their employees, as well as university customers providing online courses to students. A degrees segment works with universities to provide fully online bachelor's and master's degrees.

The consumer segment's revenue stream is by far the most significant, although revenues through business partnerships and shares of degree tuition are growing year over year. In 2020, consumer revenues were $192.9 million, enterprise revenues were $70.8 million and degree revenues were $29.8 million.

“I thought they were going to end up being an online program management company. I thought that was where the real business was,” said Urdan. “I was surprised to see how strong their consumer business is and how well they’ve grown their corporate business, especially in a pandemic.”

The company's OPM business is special because Coursera has a "gigantic database" of users, Urdan said.

"They know not only who is interested in degree programs, but also some information about their competence," he said.

Coursera spends the equivalent of 36.5 percent of its revenue on sales and marketing -- $107.2 million in 2020, according to the new IPO documents. That surprised some industry experts, because the company has not been shy about touting how cheaply it is able to market degrees to its pool of learners. Coursera's marketing spend nearly doubled between 2019 and 2020.

"One key driver of Coursera’s value proposition is that they have 77 million learners who they can convert into paying subscribers or degree seekers," Pianko said. "However, the 37 percent of revenue spent on marketing indicates that Coursera may not have the capacity to acquire students at significantly lower price points than traditional education marketers despite all those MOOC students."

The $107 million marketing spend by Coursera in 2020 is not unprecedented when you consider the company's scale and growth, said Sean Gallagher, founder and executive director of Northeastern University’s Center for the Future of Higher Education and Talent Strategy and executive professor of educational policy.

“Many companies including those in online education spend 50 percent or more of their revenue on sales and marketing when they’re scaling up and generating losses, and the share declines over time,” Gallagher said.

Like any consumer-facing business, Coursera needs to spend on marketing in order to build a brand and attract customers, Gallagher said.

"Based on the financial details in the filing, it looks like there are some real efficiencies in their ability to acquire customers,” he said.

Coursera reported that its average acquisition cost per student for online degree programs was under $2,000 for the two years ending in December. That figure is impressively low, said Gallagher. Most universities spend many thousands of dollars enrolling online degree students, he said.

"It is clear that their reach and platform is efficient at creating a pipeline of certificate and degree students," Gallagher said.

The relatively small size of Coursera’s OPM business was not surprising to Gallagher.

“Beyond a few pilot partners like the University of Illinois, it was really just 2018 and 2019 when they started to scale that business up -- so it’s impressive that it's been doubling each year,” he said.

Though users can sample many courses on Coursera without paying, the sheer volume of students paying relatively small fees for certifications has played a huge role in the financial success of the company. A two-hour guided project on how to build a website, for example, costs just $9.99.

“It’s clear that monetizing credentials -- certificates and degrees -- are what have made the MOOC platform business model sustainable,” Gallagher said.

Some industry experts, including Gallagher, feel the MOOC business model has simply evolved over time to a model that includes paid options. Others feel the model has been replaced by something quite different.

Coursera's CEO, Jeff Maggioncalda, spent the last several years moving the company away from its MOOC heritage, Pianko said. The company was moving toward a model where consumers pay for stackable credentials.

“The headline is that the MOOC failed, but the short course won,” Pianko said.

Story Link



From: Glenn Petersen 3/13/2021 6:06:25 AM
 
How Amazon’s S3 jumpstarted the cloud revolution

Amazon's first real web service brought us everything from Pinterest to coronavirus vaccines. Fifteen years later, insiders tell Protocol how it grew to store 100 trillion objects.

Tom Krazit
Protocol
March 12, 2021

In late 2005, Don Alvarez was just another software entrepreneur struggling to get a new business off the ground when a friend working at Amazon invited him to check out a secret project that would change the world.

Alvarez's startup, FilmmakerLive, was designing online collaboration applications for creative professionals and faced a common problem for that time: storage. Tech startups were just starting to claw their way back from the excesses of the dot-com era, and buying expensive hardware was a risky bet for a startup: buy too little and your site crashes; buy too much and you go broke.

He was skeptical about what he could learn about movie collaboration from an ecommerce company, but took the friend up on his offer.

" Rudy Valdez blew my mind," Alvarez told Protocol. Valdez was then the head of business development for AWS, which at that time offered only a handful of basic services. He gave Alvarez, now director of engineering for Mural, a taste of Amazon's first and arguably most fundamental product: S3, a cloud-based object storage service.

S3, or Simple Storage Service, made its debut 15 years ago this weekend. It would be years before "the cloud" became one of the most disruptive forces in the history of enterprise computing. Amazon didn't even use the term when it announced S3 on March 14, 2006. But the storage service's launch instantly solved some very tricky problems for entrepreneurs like Alvarez, and would come to change the way all businesses thought about buying information technology.

Startups like Pinterest, Airbnb and Stripe flocked to AWS in the years that followed, and older companies like Netflix — then a DVD-mailing operation — also took the plunge to retool their operations for the internet.

"Amazon was putting infinite disk space in the hands of every startup at an incredibly low and pay-for-what-you-need price point, there was nothing like that," Alvarez said. "The second piece was that their API was so simple that i could just pick it up and build something useful in it, in the first 24 hours of using an unreleased, unannounced product."

S3 is now a vital cog in the AWS machine, which generated more than $45 billion in revenue last year. It has evolved in many different directions over the last 15 years, yet has kept a set of design principles drawn up by a team led by Allan Vermeulen, Amazon's chief technical officer during the earliest days of AWS, at the heart of its strategy.

"We knew what [customers] wanted to do then," Mai-Lan Tomsen Bukovec, vice president for AWS Storage and the current head of S3, told Protocol. "But we also knew that applications would evolve, because our customers are incredibly innovative, and what they're doing out there in all the different industries is going to change every year."



Mai-Lan Tomsen Bukovec runs Amazon S3 and AWS Storage. Photo: Amazon Web Services
Building for flexibility
---------------------
"When people think bigger and faster in computers, they think of this," said Vermeulen during an interview in 2014, drawing a line in the air up and to the right. But storage technology has evolved differently, he said, over a period of long plateaus followed by sharp increases in capabilities: "It's the difference between driving my Tesla and flying my airplane."

S3 was one of those sharp breaks from the status quo. It was a godsend for developers like Alvarez, who no longer had to worry about buying and maintaining pricey storage hardware just to do business.

"There was nothing that we had access to that provided anything remotely like what S3 could do," Alvarez said. "I felt like somebody had just given me the keys to the candy store."

Like much of AWS, S3 was born from Amazon's experience building and scaling Amazon.com, which taught it a lot of hard lessons about the limits and possibilities of distributed computing.

"A forcing function for the design was that a single Amazon S3 distributed system must support the needs of both internal Amazon applications and external developers of any application. This means that it must be fast and reliable enough to run Amazon.com's websites, while flexible enough that any developer can use it for any data storage need," AWS said in the original launch press release for S3 in 2006.

In the early days of the cloud, performance and reliability were a huge concern. And those concerns were especially fraught when it came to data, which even 15 years ago was understood to be one of the most important assets in a company's arsenal.

"When we launched S3 15 years ago, S3 had eight microservices, and we have well over 300 now." Tomsen Bukovec said, referring to the then-novel software development practice of breaking up large chunks of interdependent code into smaller, independent services.

Building around microservices allowed AWS to decentralize points of failure for S3 while also creating a system designed to acknowledge that distributed cloud services will fail on occasion, and that such failures shouldn't take the entire system down.

It also allowed the company to layer on future enhancements without having to disturb the core pieces of the system: AWS now claims that S3 offers "11 9s" of durability — an astonishing 99.999999999% annual durability for stored objects that exceeds self-managed storage equipment by a large margin. (Other cloud storage vendors have matched this standard.)
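
What "11 9s" implies in practice, as a back-of-the-envelope calculation:

    # Expected object loss at 99.999999999% annual durability.
    durability = 0.99999999999            # probability an object survives a year
    annual_loss_rate = 1 - durability     # ~1e-11
    objects_stored = 10_000_000
    print(objects_stored * annual_loss_rate)  # ~1e-4 -> about one object lost
                                              # every 10,000 years, on average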

S3 began life as a holding pen for simple web elements like images and video that website operators would pull down from AWS to your browser when you loaded a page. Over time, as companies became more comfortable with cloud storage, they started putting all kinds of data in S3.

And that's when things started to get a little messy.



Amazon Web Services' booth at the Microsoft PDC event in Los Angeles in 2008. Photo: D. Begley/Flickr
Plugging leaky buckets
------------------------
If you look back at any number of security incidents over the past several years, a large number of them can be attributed to "leaky buckets," referring to the core unit of S3 storage. These incidents happen to other cloud providers as well, but given AWS's market share it's a problem the company has had to deal with on many, many occasions.

AWS operates under a "shared responsibility" model for security: AWS will prevent anyone from physically accessing its servers or infiltrating its network, but customers are expected to protect their accounts to a reasonable extent. In other words, you can't blame the rental car company if someone steals your laptop from the back seat of an unlocked vehicle.

Yet time and time again, cloud customers have left sensitive data belonging to their own customers in unprotected storage buckets open to anyone who can find them, which is easier than you might think. It's just one example of how AWS has had to evolve some of its core products to meet customers where they are, especially later-arriving customers accustomed to accessing everything they need from private, internal networks.

"In a business application world, you don't need to have access outside the company, or really outside a group of users within the business," Tomsen Bukovec said. But it was clear that AWS needed to do more to help its customers help themselves, which led to the development of tools like Block Public Access that could lock down all storage buckets associated with a corporate account.

It was also clear to outsiders in the fast-growth early days of AWS that Amazon's famous "two-pizza teams" were "both a strength and a weakness," Alvarez said.

"It enabled every one of those services to rocket forward at a speed none of those competitors could match. And in the early days, it meant there was a lot less consistency [and] that was hard to puzzle through and manage," he said, noting that the experience has improved over time.

Additional security tools have followed that let customers scan their accounts for unauthorized access from the public internet, or assign different levels of access to people with different roles within a company.

"Where we're seeing customers go with their migrations is that they often have hundreds of buckets and lots and lots of [different] roles," Tomsen Bukovec said of the newcomers to the cloud who seem most prone to these mistakes. "When we think about what to build to help customers secure the perimeter of their AWS resource, we think about how they would like to audit and how they would like to control" access to their storage resources inside S3.



Moderna used AWS in the COVID-19 vaccine's development. Photo: U.S. Navy

Getting to 100 trillion
-----------------------

S3 continued to evolve in the years following its debut, and it also got a lot cheaper: By the time AWS got around to having its first major re:Invent developer conference in 2012, one of the major announcements from that week was a 24% to 28% reduction in S3 storage prices, the 24th such price cut the company had made up to that point.

Those price cuts were possible because AWS was able to upgrade the underlying S3 service on the fly, as Alyssa Henry, then vice president of AWS Storage Services, explained during a keynote address in 2012.

S3 was originally designed to hold 20 billion objects in storage, but it grew more quickly than anyone had anticipated, hitting 9 billion objects within the first year. The company upgraded the underlying storage service with more capacity in mind, without any disruption to the original S3 customers. By 2012 it had scaled to 1 trillion objects in storage, and by 2020, 100 trillion.

"What's really cool about this is customers didn't have to do anything: You didn't have to go out buy the next upgrade — v2 of Amazon S3; you didn't have to do the migration yourself; you just got it all for free, it just worked, things just got better," Henry, who is now executive vice president and head of Square's Seller unit, said at the 2012 event. "That's one of the differences with the cloud versus how traditional IT has been done."

A similar upgrade rolled out just last year, when AWS introduced strong consistency across S3.

Consistency is a data-storage concept that can rattle your brain a bit the first time it shows up; older storage systems such as the original S3 were designed around "eventual consistency," meaning that a storage service wouldn't always be able to tell you right away if a new piece of data had settled into its designated storage bucket, but it would catch up before long.

Now that modern applications move much faster, however, anything that queries a storage service really needs to know the exact, current list of available data to perform at the expected level. So over the last couple of years, AWS rebuilt S3 around strong-consistency principles, which other cloud providers also offer but were able to roll out against much smaller user bases.
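
In caller's terms, the change looks like this (a sketch with hypothetical bucket and key names, not AWS's implementation):

    import boto3

    s3 = boto3.client("s3")

    # Overwrite an object, then immediately read it back.
    s3.put_object(Bucket="my-demo-bucket", Key="report.csv", Body=b"v2")
    latest = s3.get_object(Bucket="my-demo-bucket", Key="report.csv")["Body"].read()

    # Under eventual consistency this read could briefly return the prior
    # version; since the 2020 change, S3 guarantees read-after-write, so
    # the assertion below always holds.
    assert latest == b"v2"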

"That is a very complicated engineering problem," Tomsen Bukovec said, and it was one of the stand-out announcements from the re:Invent 2020 among the geekier set of AWS users.

As they head into a new decade, Tomsen Bukovec and her team are looking at ways to make it easier to do machine learning on top of S3 data, and to improve the performance and capabilities of data lakes that allow for fine-grained analysis of internal and customer data among AWS users.

In fact, the Moderna vaccine for COVID-19 was developed with the help of an S3 data lake, Tomsen Bukovec said.

"We have this unique view that we built up over 15 years of usage, where we can determine what our customers are trying to do, and how we can build [S3] in such a way that it keeps true to that simple, cost-effective, secure, durable, reliable and highly-performant storage," she said.

Story Link



From: Glenn Petersen 3/19/2021 5:03:05 PM
 
Microsoft Toughens Up Its Cloud

Tom Krazit and Joe Williams
Protocol
March 18, 2021

THE BIG STORY: Microsoft's embrace of the cloud will go down as one of the most successful strategic shifts in software history, but it takes a long time and a lot of money to get the battleship pointed in the right direction.

That's why Wednesday's announcement that Microsoft will make availability zones standard for new cloud regions around the world is significant, and not just because the company suffered a widespread hours-long outage on Monday. It's a delayed recognition of modern cloud architecture by a company that built its first data center in 1989.

Here's why availability zones are important, and what you need to know about them.

  • All cloud providers offer their services out of "regions" spread throughout the world. Microsoft actually offers the largest number of distinct regions of the Big Three.
  • Availability zones are data center buildings, or collections of buildings, that are spaced out within a given region. Here's how AWS defines it: "AZs are physically separated by a meaningful distance, many kilometers, from any other AZ, although all are within 100 km (60 miles) of each other."
  • Each availability zone has separate networking and electrical power facilities, with the goal of preventing something like the OVH incident a few weeks ago, where a fire caused by an electrical issue in one data center actually took out four data centers.
  • Cloud customers have to design their applications to take advantage of multiple availability zones, but once they do, they significantly increase the resiliency of those apps (a minimal sketch of enumerating a region's zones follows this list).

  • Microsoft has been much slower than its rivals to roll out availability zones.
    That has contributed to the market's perception (backed up by actual incidents) that its cloud is more brittle than the ones operated by its rivals.

  • Microsoft introduced its first availability zones within two cloud regions (Iowa and Paris) three years ago. They've been a standard part of AWS since 2008. Google was also early to offer zones.
  • This week, after rolling out availability zones within 11 of its more than 60 cloud regions since 2018, Microsoft said that every country in which it operates will have at least one region that offers availability zones by the end of 2021.
  • Going forward, all new Microsoft cloud regions will also have availability zones at launch, which should help bridge the cloud-services digital divide. Most of its current regions with availability zones are in rich countries.

  • Availability zones can't prevent all outages.
    Software configuration errors like the one behind Monday's Teams outage are a far more frequent culprit. But zones have been table stakes for big clouds for a long time, and rolling them out around the world will make Azure more competitive.

  • It's also a reminder of how flush Microsoft is right now, on pace to do $66.8 billion in commercial cloud revenue during its current fiscal year.
  • Microsoft spent $5.4 billion on capital expenditures last quarter, and CFO Amy Hood expects that number to increase during the current quarter. Zones are expensive, but that should easily cover the costs.
  • There's an irony here. All these new facilities should arrive just in time to protect cloud customers from a new period of intense, damaging weather caused by climate change linked to human activity, such as data centers.
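
As promised in the design bullet above, a minimal boto3 sketch of enumerating a region's availability zones — the starting point for spreading an application across them (region choice arbitrary, credentials assumed configured):

    import boto3

    # List the availability zones of one region.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.describe_availability_zones(
        Filters=[{"Name": "state", "Values": ["available"]}]
    )
    for zone in response["AvailabilityZones"]:
        print(zone["ZoneName"])  # e.g. us-east-1a, us-east-1b, ...
    # A resilient deployment places redundant instances in at least two zones.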

Story Link





From: Glenn Petersen 3/22/2021 1:09:33 PM
     
Exclusive: Box explores sale amid pressure from Starboard - sources

By Greg Roumeliotis, Joshua Franklin
March 22, 2021

(Reuters) - U.S. cloud services provider Box Inc is exploring a sale amid pressure from hedge fund Starboard Value LP over its stock performance, according to people familiar with the matter.

Redwood City, California-based Box has discussed a potential deal with interested buyers, including other companies and private equity firms, the sources said, cautioning that no sale of the company is certain.

The sources requested anonymity because the matter is confidential. Box declined to comment.

Reuters reported last month that Starboard was preparing to launch a board challenge against Box unless it took steps to boost value for shareholders. The hedge fund has privately expressed disappointment that the company has failed to capitalize on the work-from-home trend during the COVID-19 pandemic, as many of its cloud computing peers have done.

Box’s shares currently trade at around $22 apiece, only marginally higher than the price at which the company debuted on the New York Stock Exchange in January 2015.

Box said last week it would extend the deadline to nominate directors to its board from mid-April to May 11.

Founded in 2005, Box offers file sharing, cloud storage and cloud backup. Demand for secure file-sharing and other workplace collaboration services has risen since the onset of COVID-19, driven by the information technology needs of companies whose employees are working from home.

While Box has benefited from this trend, it has struggled to fully capitalize on it, as some of its services and products are offered by competitors either for free or at a lower cost.

Box earlier this month reported fourth-quarter earnings that beat analyst expectations. The company has a market capitalization of around $3.6 billion.

Reporting by Greg Roumeliotis in New York and Joshua Franklin in Boston; Editing by Nick Zieminski

Story Link



From: Glenn Petersen 3/28/2021 10:50:17 AM
     
Industry clouds could be the next big thing

Kash Shaikh
VentureBeat
March 28, 2021

Despite predictions of a cloud shift accelerated by the pandemic and Gartner projecting a $651 billion public cloud market in 2024, organizations have barely scratched the surface of public cloud adoption. So it might seem odd at this stage to ask, “What’s the next big thing in public clouds?”

The war between traditional on-premises data center infrastructure providers such as Dell, HPE, and Cisco and the public cloud providers such as Amazon Web Services, Microsoft Azure, and Google Cloud is far from over. However, one opportunity worth examining is industry clouds.

Industry clouds are collections of cloud services, tools, and applications optimized for the most important use cases in a specific industry. APIs, common data models and workflows, and other components are available to customize capabilities. Industry cloud solutions from major public cloud providers also typically offer a variety of software and services, including industry-specific applications, from partners. For example, Microsoft and SAP partner to deliver SAP supply chain solutions through Azure’s manufacturing industry cloud.

Industry clouds are of interest because of their potential to create value for both customers and public cloud providers. Established companies in industries feeling the sting of competition from cloud-native disrupters are especially good prospects for these types of solutions. For these companies, moving their core business applications to general-purpose public clouds can be challenging because they often rely on homegrown legacy applications or industry-specific software designed for on-premises data centers. These companies face a difficult choice: simply “lifting and shifting” applications to the cloud could result in sub-optimal performance, yet rewriting or optimizing them for the cloud would be time-consuming and costly. Industry clouds have the potential to accelerate their cloud migrations and take the risk out of them.

An essential component of an industry cloud is that it must address the specific requirements of the industry it is designed to serve. For example, healthcare providers place a high priority on improving the patient experience but also require high levels of security, data protection, and privacy. These are necessary to demonstrate compliance with Health Insurance Portability and Accountability Act (HIPAA) regulations. Financial services companies value data analytics and AI for customer insights and new product development, and trading applications require latency measured in fractions of a second. Like healthcare, financial services is a highly regulated industry. Specific characteristics of the retail industry include the need to continually collect and analyze large sets of data to improve inventory management.

For some of these requirements — and especially when there are several in combination — general-purpose cloud solutions might not be enough. And given that general-purpose clouds have been the focus of most cloud migrations thus far, many traditional companies in highly competitive industries have fallen behind in the race to the cloud. This means they are not realizing anywhere near the value they could from adding public clouds to their IT infrastructures.

In addition to public cloud, there are many industry-specific SaaS options, with new ones emerging. For example, in the healthcare industry, there are electronic health record (EHR) SaaS options available. Healthcare SaaS offerings include critical functions such as billing and supply chain. Another example is the pharmaceutical and life sciences vertical, where SaaS offerings support clinical, medical, and compliance functions. The important point to highlight is that SaaS has been and continues to be the top cloud migration choice, with a projected 2021 market spend of $117.7 billion, according to Gartner. SaaS is an excellent choice for supporting industry-specific needs, and the big hyperscalers have taken notice.

Given the opportunity in industry clouds, it’s not surprising that Amazon, Microsoft, Google, and IBM now all offer a broad range of industry-specific cloud solutions. For long-established companies such as IBM and Microsoft, this development mirrors that of their computer and software businesses, which evolved from selling customers technology to solving their business problems. Both IBM and Microsoft have long histories of vertical market experience and large vertical market customer bases they can leverage to build and support industry clouds. This gives them an advantage with some customers. But for all public cloud providers, industry clouds are a logical next step in the ongoing maturation of public clouds.

The industry cloud opportunity has also attracted the attention of cloud service providers that offer support for migrating industry-specific applications to public clouds. Before the cloud, cottage industries sprang up to help customers of industry-standard applications deploy and maintain them. Some of these companies have evolved their businesses to support cloud migrations of these applications.

For example, in healthcare, where Epic and Cerner dominate the U.S. hospital EMR market with 29% and 26% of the market, respectively, numerous firms exist to help companies bring these applications to the cloud. Given regulations and the business imperative to protect data in EMRs, most of these companies support a hybrid cloud approach, a solution that combines on-site data centers with public clouds. They also provide solutions and special expertise in privacy, security, and disaster recovery. While some host the applications on private clouds, many form partnerships with one or more public cloud providers.

At the same time, Epic and Cerner are establishing their own relationships with public cloud providers. It’s worth noting that a failed relationship between Epic and Google offers an object lesson for cloud providers seeking to make their mark in specific industries. Epic severed ties with Google after it came to light that, in Google's work with Ascension, a large Missouri-based health system, Google employees gained access to patient information without consent while information was being transferred from on-site servers to Google servers. Even though Ascension has continued to work with Google, Epic shifted its focus to Microsoft. Cerner, too, moved away from Google in favor of Amazon.

It’s still early days for industry clouds, and no doubt some are more marketing strategies than offerings tailored for specific industries in meaningful ways. That will change. In the meantime, however, companies evaluating industry clouds from public cloud providers should do so carefully, taking care to compare not just the industry cloud offerings from different providers but also the industry cloud of each provider to that provider's general-purpose solution. There might not yet be that much of a difference.

Kash Shaikh is CEO and President of Virtana, a cloud platform with AI-powered observability for migrating, optimizing, and monitoring cloud applications.

Story Link
