Technology Stocks: Cloud computing (On-demand software and services)


To: Sam who wrote (1384), 6/19/2017 11:20:13 PM
From: Sam
 
Migrating to Microsoft's cloud: What they won't tell you, what you need to know
Of devils and details
19 Jun 2017 at 09:04, Sonia Cuff
theregister.co.uk

“Move it all to Microsoft’s cloud,” they said. “It’ll be fine,” they said. You’ve done your research and the monthly operational cost has been approved. There’s a glimmer of hope that you’ll be able to hit the power button to turn some ageing servers off, permanently. All that stands in the way is a migration project. And that is where the fun starts.

Consultants will admit that their first cloud migration was scary. If they don’t, they’re lying. This is production data we’re talking about, with a limited time window to have your systems down. Do a few migrations and you learn a few tricks. Work in the SMB market and you learn all the tricks, as they don’t always have their IT environments up to scratch to start with. Some of these traps are more applicable to a SaaS migration, particularly to Office 365. Some will trip you up no matter what cloud flavour you’ve chosen.

How much data?

The worst thing you can do is take your entire collection of mailboxes and everything from your file servers and suck it all up to the cloud. Even in small organisations that can be over 250GB of data. If your cloud of choice doesn’t have an option to seed your data via disk, that all has to go up via your internet connection. At best, we’re talking days. Remember a disk seed isn’t always viable if you’re not located in a major city close to your cloud’s data centre. If it has to go via courier and then a plane, any data on a portable disk had better be encrypted and, again, you’re talking days for transport time. And how do you put a lock on your production files in the meantime, assuming you have no way to sync changes (more applicable to files than mailboxes)?
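Before you commit to a cut-over window, it is worth knowing exactly how much you would be pushing up the wire. The following is a minimal Python sketch (the share path and the 20 Mbps uplink figure are placeholder assumptions, not anything from the article) that totals a file tree and produces a back-of-the-envelope upload estimate:

import os

def estimate_upload(root, uplink_mbps=20, overhead=1.15):
    """Total the bytes under `root` and print a rough transfer-time
    estimate for a given uplink speed (pre-migration sizing only)."""
    total_bytes = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                total_bytes += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # skip locked or unreadable files
    gigabytes = total_bytes / 1e9
    # Megabits per second to bytes per second, padded for protocol overhead.
    seconds = (total_bytes * 8 / (uplink_mbps * 1e6)) * overhead
    print(f"{gigabytes:,.1f} GB under {root}; "
          f"roughly {seconds / 3600:,.1f} hours at {uplink_mbps} Mbps up")

estimate_upload(r"\\fileserver\share")  # hypothetical UNC path

Even 250GB over a 20 Mbps uplink works out to well over a day of solid uploading before you allow for retries and working hours, which is exactly why the seeding and archiving questions matter.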

Your two best options (pick one or both) are a pre-cloud-migration archiving project and/or a migration tool that will perform a delta sync between the cloud and your original data source. Get ruthless with the business about what will be available in the cloud and what will stay in long-term storage on-prem. You seriously don’t want to suck up the last 15 years of data in this migration project. Once the current, live stuff is in the cloud, by all means run a separate project to upload the rest of your older historical data if you wish. Email migrations seem to handle this the best, with tools like SkyKick and BitTitan MigrationWiz throttling the data upload over time, performing delta syncs every 24 hours and even running final syncs after you’ve flipped your MX records to the cloud. No email left behind!

Piece of string internet connection

Don’t even start a cloud project until you’re happy with your internet speeds. And don’t ignore your lesser upload speed either. That’s the one doing all the hard work to get your data to the cloud in the first place and on an ongoing basis if you are syncing all the things, all the time. Another tip: don’t sync all the things everywhere all the time. If you’re going to use the cloud, use the cloud, not a local version of it. Contrary to popular belief, working locally does not reduce the impact on your internet connection, it amplifies it with all the devices syncing your changes.

Outlook item limits

Office 365 has inherited some Microsoft Exchange and Outlook quirks that you might hope are magically fixed by the cloud. Most noticeable are performance issues with a large number of items or folders in a mailbox. This includes shared mailboxes you might be opening in addition to your own mailfile. Add up the number of folders across all of your shared mailboxes and you may have issues with searching or syncing changes if you are caching those mailboxes locally. We’ve seen Microsoft’s suggestion to turn off caching (i.e. work with a live connection to the cloud mailbox via Outlook) cause Outlook to run even slower and users to run out of patience.

The answer? You’re really left with just the option of a pre-cloud migration tidy-up. Local archiving is fairly easy to implement to shrink the mailbox, then online archiving policies take care of things once you are working in the cloud. If you don’t want the cost of an Office 365 E3 licence just to get archiving, look at adding an Exchange Online Archiving plan to the mailboxes that need it. This can include any shared mailboxes, but they’ll also need to be allocated their own Exchange Online plan licence before archiving can be added.

DNS updates and TTL

When you are ready to flip your MX records to your new cloud email system, it’s going to take time for the updated entry to filter out worldwide across the global network of secondary DNS servers. Usually things will settle down after 24 hours, which is fine if your organisation doesn’t work weekends but challenging if you are a 24x7 operation. Some time before cut-over date, check the Time To Live (TTL) setting on your current MX record and bump it down to 3,600 seconds. Older systems can be set to 24 hours, meaning that’s how long someone else’s system will go with your old record before checking to see if it’s changed. Setting your TTL to 3,600 strikes a nice balance between updating frequently and not querying the authoritative server every five minutes.
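If you want to check the TTL currently being handed out for your MX records before you start, a few lines of Python will do it. This is a rough sketch and assumes the third-party dnspython package (pip install dnspython); the domain is a placeholder:

# Assumes dnspython 2.x: pip install dnspython
import dns.resolver

def report_mx_ttl(domain):
    """Print each MX record for `domain` along with the TTL being served."""
    answer = dns.resolver.resolve(domain, "MX")
    for record in answer:
        print(f"{domain}: MX {record.preference} {record.exchange} "
              f"(TTL {answer.rrset.ttl} seconds)")
    if answer.rrset.ttl > 3600:
        print("Consider dropping the MX TTL to 3,600 well before cut-over.")

report_mx_ttl("example.com")  # replace with your own domain

One caveat: a query through your local recursive resolver reports the remaining cached TTL, which counts down over time; query your domain's authoritative name server directly if you want the configured value.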

Missing Outlook Stuff

Lurking in the shadows of a Microsoft Outlook user profile are those little personal touches that are not migrated when a mailfile is sent to the cloud. These are the things you’ll get the helpdesk calls about. The suggested list of email addresses (Autocomplete), any text block templates (Quick Parts) and even email signatures all need to be present when accessing the user’s new email account. Depending on your version of Outlook, do some research to find out where these live and how to migrate them too or use a migration tool that includes an Outlook profile migration.

One admin to rule them all

If I had a dollar for every time someone locked themselves out of their admin account and the password recovery steps didn’t work, I wouldn’t need to be writing this. Often your cloud provider can help, once you’ve run the gauntlet of their helpdesk. Save yourself the heartache by allocating more than one administrator or setting up a trusted partner with delegated administration rights. Office 365 does this very well, so your local helpful Microsoft Partner can unlock you with their admin access.

Syncing ALL the accounts

Even if your local on-prem directory is squeaky clean (with no users who actually left in 2012), it will contain a number of service accounts. The worst thing you can do is sync all the directory objects to your cloud directory service, which then becomes a crowded mess. Take the time to prepare your Microsoft Active Directory first, then use the filtering options in Azure AD Connect to control which accounts you are syncing to the cloud.

Compatibility with existing tech

Older apps don’t support the TLS encryption that Office 365 requires for sending email. This can impact software and hardware, such as scanners or multifunction devices. On the other hand, newer scanners can support saving directly to the cloud – Epson devices will back up to OneDrive, but not OneDrive for Business.

Ancient systems

You thought the migration went smoothly, but now someone’s office scanner won’t email scans or a line of business application won’t send its things via email. Chances are those ancient systems don’t support TLS encryption. Now things are going to get a little complicated. There are direct send and relay methods, but it might be easier to buy a new scanner.
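A quick way to see what those legacy devices are failing at is to send a test message yourself over an authenticated TLS submission. The sketch below uses only Python's standard library; smtp.office365.com on port 587 is the usual Office 365 submission endpoint, but the addresses and credentials are placeholders and your tenant must allow SMTP AUTH for the account:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "scanner@example.com"      # placeholder sending mailbox
msg["To"] = "helpdesk@example.com"       # placeholder recipient
msg["Subject"] = "TLS submission test"
msg.set_content("If this arrives, authenticated TLS submission works.")

with smtplib.SMTP("smtp.office365.com", 587, timeout=30) as smtp:
    smtp.starttls()                      # the step many old scanners can't do
    smtp.login("scanner@example.com", "app-password-here")  # placeholder credentials
    smtp.send_message(msg)

If that works from a laptop on the same network but the scanner still fails, you are looking at a device that can't negotiate TLS, and the direct send or relay workarounds (or that new scanner) are your options.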

Metadata preservation

This one’s for the SharePoint fans. True data nerds love the value in metadata – all the information about a document’s creation, modification history, versions etc. A simple file copy to the cloud is not guaranteed to preserve that additional data or import it into the right places in your cloud system. Learn that before you’re hit with a compliance issue or discovery request. Avoid the problem by investing in a decent document migration tool in the first place, like Sharegate.
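Even if you do end up doing a plain file copy for some content, you can at least snapshot the filesystem-level metadata beforehand so the original timestamps survive somewhere. This is only a partial safety net: it captures nothing of SharePoint's version history or modified-by fields, and the share path is a placeholder:

import csv, os, datetime

def snapshot_metadata(root, out_csv="premigration_metadata.csv"):
    """Record path, size and timestamps for every file under `root`
    before a copy-based migration overwrites them."""
    with open(out_csv, "w", newline="", encoding="utf-8") as handle:
        writer = csv.writer(handle)
        writer.writerow(["path", "bytes", "created_or_changed", "modified"])
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    st = os.stat(path)
                except OSError:
                    continue
                writer.writerow([
                    path,
                    st.st_size,
                    # st_ctime is creation time on Windows, inode change time elsewhere
                    datetime.datetime.fromtimestamp(st.st_ctime).isoformat(),
                    datetime.datetime.fromtimestamp(st.st_mtime).isoformat(),
                ])

snapshot_metadata(r"\\fileserver\share")  # hypothetical path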

Long file names

Once upon a time we had an 8.3 character short file name and we lived with it. Granted, we created far fewer files back then. With the arrival of NTFS we were allowed a glorious 260 characters in a full file path, and we use it as much as we can today. Why? Because search sucks and a structure with detailed file names is our only hope of ever finding things again on-prem. Long file names (including long-named and deeply nested folders) will cause you grief with most cloud data migrations.

If you don’t run into migration issues with this, just wait until you start syncing. We’ve seen it both with OneDrive and Google Drive and on Macs too. Re-educate your users and come up with a new, shorter naming standard. And watch out for Microsoft lifting the 260-character limitation in Windows 10 version 1607. Fortunately, it’s opt-in.
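Before the first sync client goes anywhere near your file server, it is worth getting a list of the worst offenders. Here is a short Python sketch against the classic 260-character limit (the share path is a placeholder, and note that individual cloud services apply their own, different path and URL limits):

import os

MAX_PATH = 260  # the classic Windows full-path limit discussed above

def find_long_paths(root):
    """Yield (length, path) for every file or folder whose full path exceeds MAX_PATH."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            full = os.path.join(dirpath, name)
            if len(full) > MAX_PATH:
                yield len(full), full

for length, path in sorted(find_long_paths(r"\\fileserver\share"), reverse=True):
    print(f"{length:4d}  {path}")

Hand the output to the data owners as the starting point for that shorter naming standard.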

Of course, I’ve omitted the need to analyse who needs access to what and ensuring you mimic this in the cloud, because it feels like a given. That is until someone calls to say they can’t see the emails sent to sales@ or access a particular set of documents. There are probably other migration gotchas that have bitten you and you’ll know to avoid next time. What else would be on your list? This kind of discussion among ourselves is more valuable than any vendor migration whitepaper you’ll ever read. ®




From: FUBHO, 6/29/2017 11:41:30 AM
 
Wal-Mart Prods Partners, Vendors to Leave AWS for Azure

BY ALDRIN BROWN ON JUNE 28, 2017

One provider of big data management services said it opted to host applications on Microsoft Azure instead of AWS, expressly to win business from a tech firm with a Wal-Mart account. Read More



To: FUBHO who wrote (1386), 6/29/2017 11:43:12 AM
From: FUBHO
 
Alibaba’s Increasing Cloud Data Center Footprint

BY CHRISTINE HALL ON JUNE 28, 2017

Latest addition is a 50,000-square foot GDS facility in China Read More



From: Glenn Petersen, 7/2/2017 3:42:31 PM
 
Dropbox Is Getting Ready for the Biggest Tech IPO Since Snapchat

Reuters
12:07 PM ET

Data-sharing business Dropbox Inc is seeking to hire underwriters for an initial public offering that could come later this year, which would make it the biggest U.S. technology company to go public since Snap Inc, people familiar with the matter said on Friday.

The IPO will be a key test of Dropbox's worth after it was valued at almost $10 billion in a private fundraising round in 2014.

Dropbox will begin interviewing investment banks in the coming weeks, the sources said, asking not to be named because the deliberations are private.

Dropbox declined to comment.

Several big U.S. technology companies such as Uber Technologies Inc and Airbnb Inc have resisted going public in recent months, concerned that stock market investors, who focus more on profitability than do private investors, would assign lower valuations to them.

Snap, owner of the popular messaging app Snapchat, was forced to lower its IPO valuation expectations earlier this year amid investor concern over its unproven business model. Its shares have since lingered just above the IPO price, with investors troubled by widening losses and missed analyst estimates. It has a market capitalization of $21 billion.

Still, for many private companies, there is increasing pressure to go public as investors look to cash out.

Proceeds from technology IPOs slumped to $6.7 billion in 2015 from $34 billion in 2014, and shrank further to $2.9 billion in 2016, according to Thomson Reuters data.

Dropbox's main competitor, Box Inc, was valued at roughly $1.67 billion in its IPO in 2015, less than the $2.4 billion it had been valued at in previous private fundraising rounds.

San Francisco-based Dropbox, which was founded in 2007 by Massachusetts Institute of Technology graduates Drew Houston and Arash Ferdowsi, counts Sequoia Capital, T. Rowe Price and Greylock Partners as investors.

Dropbox started as a free service for consumers to share and store photos, music and other large files. That business became commoditized though, as Alphabet Inc's Google, Microsoft Corp and Amazon.com Inc started offering storage for free.

Dropbox has since pivoted to focus on winning business clients, and Houston, the company's CEO, has said that Dropbox is on track to generate more than $1 billion in revenue this year.

The company has expanded its Dropbox Business service, which requires companies to pay a fee based on the number of employees who use it. The service in January began offering Smart Sync, which allows users to see and access all of their files, whether stored in the cloud or on a local hard drive, from their desktop.

fortune.com



From: FUBHO, 7/17/2017 5:03:21 PM
 
One Click and Voilà, Your Entire Data Center is Encrypted

BY WYLIE WONG ON JULY 17, 2017

IBM says its new encryption engine will allow users to encrypt all data in their databases, applications, and cloud services with no performance hit. Read More



To: FUBHO who wrote (1389), 7/17/2017 5:03:55 PM
From: FUBHO
 
Report: VMware, AWS Mulling Joint Data Center Software Product

BY YEVGENIY SVERDLIK ON JULY 17, 2017

No details yet, but hybrid cloud product most likely Read More



From: Glenn Petersen, 7/18/2017 7:38:56 PM
 
Google’s Quantum Computing Push Opens New Front in Cloud Battle

By Mark Bergen
Bloomberg Technology
July 17, 2017

-- Company offers early access to its machines over the internet

-- IBM began quantum computing cloud service earlier this year

For years, Google has poured time and money into one of the most ambitious dreams of modern technology: building a working quantum computer. Now the company is thinking of ways to turn the project into a business.

Alphabet Inc.’s Google has offered science labs and artificial intelligence researchers early access to its quantum machines over the internet in recent months. The goal is to spur development of tools and applications for the technology, and ultimately turn it into a faster, more powerful cloud-computing service, according to people pitched on the plan.

A Google presentation slide, obtained by Bloomberg News, details the company’s quantum hardware, including a new lab it calls an "Embryonic quantum data center." Another slide on the software displays information about ProjectQ, an open-source effort to get developers to write code for quantum computers.

"They’re pretty open that they’re building quantum hardware and they would, at some point in the future, make it a cloud service," said Peter McMahon, a quantum computing researcher at Stanford University.

These systems push the boundaries of how atoms and other tiny particles work to solve problems that traditional computers can’t handle. The technology is still emerging from a long research phase, and its capabilities are hotly debated. Still, Google’s nascent efforts to commercialize it, and similar steps by International Business Machines Corp., are opening a new phase of competition in the fast-growing cloud market.

Jonathan DuBois, a scientist at Lawrence Livermore National Laboratory, said Google staff have been clear about plans to open up the quantum machinery through its cloud service and have pledged that government and academic researchers would get free access. A Google spokesman declined to comment.

Providing early and free access to specialized hardware to ignite interest fits with Google’s long-term strategy to expand its cloud business. In May, the company introduced a chip, called Cloud TPU, that it will rent out to cloud customers as a paid service. In addition, a select number of academic researchers are getting access to the chips at no cost.

While traditional computers process bits of information as 1s or zeros, quantum machines rely on "qubits" that can be a 1, a zero, or a state somewhere in between at any moment. It’s still unclear whether this works better than existing supercomputers. And the technology doesn’t support commercial activity yet.
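For a toy sense of what "somewhere in between" means, a qubit can be written as a pair of complex amplitudes whose squared magnitudes give the measurement probabilities. The snippet below is purely illustrative, a classical simulation with NumPy, and has nothing to do with Google's actual hardware or cloud interface:

import numpy as np

# A qubit is a normalized pair of complex amplitudes (alpha, beta);
# measuring it yields 0 with probability |alpha|^2 and 1 with |beta|^2.
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)   # an equal superposition
qubit = np.array([alpha, beta], dtype=complex)

probabilities = np.abs(qubit) ** 2             # [0.5, 0.5]
samples = np.random.choice([0, 1], size=10, p=probabilities)
print(probabilities, samples)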

Still, Google and a growing number of other companies think it will transform computing by processing some important tasks millions of times faster. SoftBank Group Corp.’s giant new Vision fund is scouting for investments in this area, and IBM and Microsoft Corp. have been working on it for years, along with startup D-Wave Systems Inc.

In 2014, Google unveiled an effort to develop its own quantum computers. Earlier this year, it said the system would prove its "supremacy" -- a theoretical test showing it can perform on par with, or better than, existing supercomputers -- by the end of 2017. One of the presentation slides viewed by Bloomberg repeated this prediction.

Quantum computers are bulky beasts that require special care, such as deep refrigeration, so they’re more likely to be rented over the internet than bought and put in companies’ own data centers. If the machines end up being considerably faster, that would be a major competitive advantage for a cloud service. Google rents storage by the minute. In theory, quantum machines would trim computing times drastically, giving a cloud service a huge effective price cut. Google’s cloud offerings currently trail those of Amazon.com Inc. and Microsoft.

Earlier this year, IBM’s cloud business began offering access to quantum computers. In May, it added a 17 qubit prototype quantum processor to the still-experimental service. Google has said it is producing a machine with 49 qubits, although it’s unclear whether this is the computer being offered over the internet to outside users.

Experts see that benchmark as more theoretical than practical. "You could do some reasonably-sized damage with that -- if it fell over and landed on your foot," said Seth Lloyd, a professor at the Massachusetts Institute of Technology. Useful applications, he argued, will arrive when a system has more than 100 qubits.

Yet Lloyd credits Google for stirring broader interest. Now, there are quantum startups "popping up like mushrooms," he said.

One is Rigetti Computing, which has netted more than $69 million from investors to create the equipment and software for a quantum computer. That includes a "Forest" cloud service, released in June, that lets companies experiment with its nascent machinery.

Founder Chad Rigetti sees the technology becoming as hot as AI is now, but he won’t put a timeline on that. "This industry is very much in its infancy," he said. "No one has built a quantum computer that works."

The hope in the field is that functioning quantum computers, if they arrive, will have a variety of uses such as improving solar panels, drug discovery or even fertilizer development. Right now, the only algorithms that run on them are good for chemistry simulations, according to Robin Blume-Kohout, a technical staffer at Sandia National Laboratories, which evaluates quantum hardware.

A separate branch of theoretical quantum computing involves cryptography -- ways of transferring data with much better security than current machines. MIT’s Lloyd discussed these theories with Google founders Larry Page and Sergey Brin more than a decade ago at a conference. The pair were fascinated and the professor recalls detailing a way to apply quantum cryptography so people could do a Google search without revealing the query to the company.

A few years later, when Lloyd ran into Page and Brin again, he said he pitched them on the idea. After checking with the business side of Google, the founders said they weren’t interested because the company’s ad-serving systems relied on knowing what searches people do, Lloyd said. "Now, seven or eight years down the line, maybe they’d be a bit more receptive," he added.

bloomberg.com



From: FUBHO, 7/22/2017 3:32:26 AM
 



IBM Is Worst Performer on Dow as Cloud Services Unit Falters | Data Center Knowledge




Chairwoman and CEO of IBM Ginni Rometty speaks onstage at the FORTUNE Most Powerful Women Summit in 2013 in Washington, DC. (Photo by Paul Morigi/Getty Images for FORTUNE)

by Bloomberg on July 21, 2017

Gerrit De Vynck (Bloomberg) — IBM fell the most in three months after reporting revenue that missed estimates, with sales in a key unit declining for the second consecutive period.

The quarterly results, released Tuesday after the close of trading, further extend Chief Executive Officer Ginni Rometty’s turnaround plan into its fifth year without significant progress. The company, once considered a bellwether for the tech industry, was the worst performer on the Dow Jones Industrial Average Wednesday.

Revenue in the technology services and cloud platforms segment dropped 5.1 percent from the same period a year earlier, even though executives had said in April that they expected key contracts to come through in the quarter. The unit is a marker for the strength of the company’s push into newer technologies. Total revenue fell to $19.3 billion, the 21st straight quarter of year-over-year declines.

The stock tumbled as much as 4.7 percent in intraday trading Wednesday in New York, the most since April, to $146.71. The shares have lost 7.2 percent this year through the close Tuesday, and have missed out on the technology stock rally that propelled companies like Amazon.com Inc. and Alphabet to records.

International Business Machines Corp. has been working since before Rometty took over in 2012 to steer the company toward services and software, and she has pushed it deeper into businesses such as artificial intelligence and the cloud. Still, legacy products like computers and operating system software have been a drag on overall growth. Some investors are getting tired of waiting for the turnaround to catch on. Warren Buffett’s Berkshire Hathaway Inc. sold about a third of its investment in IBM during the first half of this year.

Several analysts cut their price targets on the company.

James Kisner, an analyst at Jefferies, said the “poor earnings quality aims to mask ongoing secular headwinds” in the software business and competitive pressures in services that may result in more investor disappointment. He rates the stock underperform and cut the price target to $125 from $154.

Better Margins

Gross margins in the second quarter were 47.2 percent, slightly beating the average analyst estimate of 47 percent. That’s better than last quarter, when a surprise miss on margins sent the stock tumbling the most in a year.

“We will continue to see, on a sequential basis, margin improvement from the first half to second half,” Chief Financial Officer Martin Schroeter said in an interview.

Operating profit, excluding some items, was $2.97 a share, compared with the average analyst estimate of $2.74 a share. That measure got a boost from tax benefits, which added 18 cents to the per-share number, IBM said.

The company’s cognitive solutions segment, which houses much of the software and services in the newer businesses and includes the Watson artificial intelligence platform, has shown the most promise in recent periods, growing in each of the previous four quarters. Yet sales in the unit fell 2.5 percent in the second quarter.

AI Competition

Watson, for which the company doesn’t specifically break out revenue, might never contribute a significant amount to the company, Jefferies’ Kisner said in a July 12 note.

Competition in the artificial intelligence market is heating up, with major investments from the world’s biggest technology companies, including Microsoft Corp., Alphabet Inc. and Amazon.com Inc. On top of that, hundreds of startups are jumping in.

“IBM appears outgunned in the ‘war’ for AI talent,” Kisner said. “In our base case, IBM barely re-coups its cost of capital from AI investments.”

The company’s total revenue fell 4.7 percent from the same period a year ago and missed analysts’ average estimate of $19.5 billion.

Oppenheimer & Co. managing director Ittai Kidron said the results show IBM “isn’t out of the woods yet.”



From: Glenn Petersen, 7/23/2017 3:32:31 PM
 
The U.S. Government’s long road to adopting the cloud

By David Lumb
Increment Cloud
Issue 2 Summer 2017

On December 9th, 2010, U.S. Federal Chief Information Officer Vivek Kundra told his government peers that they would never work the same way again. Nearly two years after President Obama signed a pair of executive orders on his first day in office promising a new era of government transparency and disclosure, Kundra gave a presentation reinforcing a new “Cloud First” policy that sought to harness the increasingly powerful remote processing model to hew down bloat and increase efficiency. It pushed each agency to transition some services to the cloud within the year and authorized an exchange program to borrow Silicon Valley’s best talent.

This was one of the first unified, from-the-very-top directives encouraging all federal agencies to begin transitioning some of their services into cloud computing. Not that they were ready to jump on the cloud bandwagon and begin offloading their computation and data storage to commercial vendors: agencies large and small still had to take a long look at their needs to decide how to shift their infrastructure from in-house IT to external providers. Kundra’s vision was more aspirational than immediately instructive; many agencies are still in this process nearly seven years later.

“It was an early signal of the ultimate direction, but it was a little too early for the government to embrace. There was a pretty steep learning curve ahead for the federal government,” said Dr. Rick Holgate, analyst for Gartner and former Chief Information Officer (CIO) for the Bureau of Alcohol, Tobacco and Firearms. “It was aspirational more than watershed because it didn’t necessarily make it easier for [the] federal government to move to the cloud.”

Government agencies had been contracting third party vendors for cloud computing services for years, but which was the first to do so might be lost to time—and semantics. What we know as cloud computing today has changed and evolved over the years, ever since vendors sold the first remote processing services to government agencies.

“The government’s been involved in clouds for eight years, but when you get beyond eight years or so, it becomes a whole taxonomy discussion about what cloud is,” said Shawn P. McCarthy, Research Director at analysis firm IDC.

Cloud computing is still a young field, but its terminology has somewhat solidified. The definition, as finalized in 2011 by the National Institute of Standards and Technology (NIST) after 15 prior versions, is “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

Today, government agencies contract external vendors for cloud computing solutions for the same reasons enterprise clients do: to entrust critical systems to providers who will maintain, modernize, scale, and reliably keep them online. There are a few more hoops to jump through if a company wants to sell their service to a government agency—hoops which differ depending on whether the company is selling to the federal, state, or local levels. But, so long as they meet requirements and become authorized by single agencies or broad security baselines, any vendor can (theoretically) pitch their products to agencies, from single Software as a Service (SaaS) solutions to Platform as a Service (PaaS) offerings all the way up to selling Infrastructure as a Service (IaaS) that agencies build entire systems on.

Government agencies contract external vendors for cloud computing solutions for the same reasons enterprise clients do: to entrust critical systems to providers who will maintain, modernize, scale, and reliably keep them online.
The potential for a cloud computing company to open up their business to the government market is enticing, but meeting the government’s safety guidelines can be a lengthy process. Is jumping through all these regulatory hoops worth it to businesses? Usually yes, McCarthy said, though qualifying to offer products— the coveted authority to operate (ATO)—doesn’t guarantee companies a single customer.

“A lot of people do it because, by going through the approval process and getting [General Services Administration] scheduled, you are listed as a serious player on the federal market. That said, government buyers look at price points and whether the product really meets their requirements—they could have tens to hundreds of boxes to check. They may never need the product you went and got approved,” said McCarthy. “On the flip side, if you have a wonderful solution, you have access to thousands of potential new clients. It’s not a magic bullet to get GSA scheduled, but for most people, we do recommend they jump through the hoops—but keep an open mind.”

Where cloud computing started is fodder for another piece, but one of the earliest vendors to start selling to the government was RightNow, founded by Greg Gianforte (yes, the same newly-elected Representative of Montana). By the recollection of one of that company’s managers, they started hosting clients’ computing needs on their own servers shortly after 2000, during some of the first nascent moments of cloud computing.

Then, at the tail end of 2002, Congress passed landmark legislation that codified digital security practices for government agencies’ IT setups. The E-Government Act of 2002 established the Office of Electronic Government under the Office of Management and Budget, headed by what would become the position of Federal Chief Information Officer of the United States. Vivek Kundra, appointed in 2009, was the first to formally hold that title.

The E-Government Act had other provisions, but most germane to cloud computing was the Federal Information Security Management Act of 2002 (FISMA), which requires each agency to develop its own Information Security protocols according to how much risk they deemed appropriate. Agencies, or the vendors they contracted to build their systems, had to ensure that their services were FISMA-compliant—or, more specifically, FIPS-compliant (Federal Information Processing Standard). NIST requirements 199 and 200 are two of the mandatory security standards that rolled out over the next few years. In short, these are the specific security requirements government bodies select according to their needs, with increasing levels of severity, that vendors must meet.

As defined, these were arbitrary requirements set up by each department for all of their IT systems—which, through most of the 2000s, mostly ran in on-premises data centers. But cloud services were gaining traction in the enterprise sphere. Nearly two years after Obama was inaugurated, the Office of Electronic Government gave an official nudge to push agencies into the cloud computing game. In December 2010, Federal CIO Kundra presented the “25 Point Implementation Plan to Reform Federal IT Management,” which set bold goals to reduce tech infrastructure bloat, partly by migrating operations to the cloud. It challenged government bodies to cut or turn around a third of their underperforming projects within 18 months and, under a new “Cloud First” mentality, shift at least one system to the cloud in the next year. Meanwhile, the plan proposed that the Office of Management and Budget oversee cutting 800 federal data centers government-wide by 2015. Soon, the government had a bona fide cloud computing strategy.

At the time, this was still somewhat early—a goal-setting aspiration rather than a comprehensive plan to organize existing activity. Agencies weren’t ready to immediately sign their operations over to a vendor: Logistically, there was plenty of work ahead to vet vendors, plan migration strategies, and prioritize which ones should make the jump first. By the next year, agencies had started following the IT Reform motion in their own backyards. The Department of Defense had closed eight data centers, with 44 total slated to sunset by the OMB’s target date in 2015. The DoD had also started its own program to invest in Silicon Valley’s solutions by introducing its IT Exchange Program, borrowing professionals for 3-month to year-long stints to learn industry best practices.

As 2011 came to a close, Kundra left his position as federal CIO, and his successor Steven VanRoekel introduced the second major security framework, the Federal Risk and Authorization Management Program (FedRAMP), which operates under the General Services Administration (GSA). While agencies had set their own IT security assessment methodologies following FISMA’s loose guidance, FedRAMP outlined security protocols specifically for agencies engaging in cloud services.

FedRAMP mandates that agencies use NIST SP 800-53 as a set of guidelines, among other security controls, and these apply to any cloud service providers (CSPs) that want to sell services to agencies. Ergo, CSPs would now have to be “FedRAMP compliant,” which is an authorization that, once met and approved by the FedRAMP board, qualifies cloud products for theoretically any agency (rather than getting FISMA-authorized on an agency-by-agency basis). This was a huge leap, and only made possible because IT security is a universal concept. For the first time in government history, every agency had been made to abide by one set of security protocols.

CSPs aspiring for FedRAMP approval submit their products to a review board known as the Joint Authorization Board (JAB), which is made up of CIOs from the DHS, DoD, and the GSA, which examines every potential service. Given FedRAMP’s nominal personnel, it only authorizes 12-14 CSP services per year. But there is an alternative, which is occasionally faster: CSPs can have agencies themselves run through the same vetting process, and after approval by JAB, the service receives the same FedRAMP seal of approval. Often, this is quicker: about two-thirds of the 86 FedRAMP-approved services were authorized through agencies.

Those companies which have earned FedRAMP compliance have their products listed in an online catalogue, a site that also tracks which requests for authorization are still under review. Most notably, FedRAMP only applies to unclassified data: Agencies dealing with classified data, including those in the intelligence community, still retain their own secretive security protocols.

But the need for cloud services that traffic unclassified data is huge. The not-so-secret secret of government bodies’ cloud computing requirements is that they need many of the same things that commercial businesses do, and for the same reasons.

The not-so-secret secret of government bodies’ cloud computing requirements is that they need many of the same things that commercial businesses do, and for the same reasons.
“Agencies tend to prefer an enterprise solution when they can, when it makes sense,” said McCarthy.

Offloading responsibility to develop and maintain IT to a cloud computing provider has the same appeal to a government client as to an enterprise one: Flexible and custom solutions that can be scaled. But like commercial clients, there’s no singular “government cloud” that agencies work within, and each FedRAMP-compliant vendor offers very different solutions.

“It really is an ecosystem of providers that are different from last year or even two years ago and that provide different levels of cloud service,” said Bryna Dash, Director of Cloud Services at IBM.

Smaller, more agile providers have been selling cloud services to the government for years, but the larger hosting providers were some of the first to be officially welcomed into the FedRAMP pantheon. Around 2013, the nascent Amazon Web Services became the first of them to pass FedRAMP and NIST security regulations, McCarthy reckoned, and that sent a signal that moving to the cloud was worthwhile.

“When AWS became a robust, respected platform for government computing and nailed down its very stringent requirements, that sent a message to other agencies that cloud is here in a way that’s highly secure and tends to be cheaper,” said McCarthy. “Suddenly you have agencies with the most stringent security requirements you can imagine, and they’re suddenly getting what they need in AWS.”

The turning point: 2013

Amazon’s FedRAMP authorization was one of the main reasons analysts deem 2013 to be a real watershed year for government’s involvement in cloud computing. Shortly thereafter, Amazon’s product dedicated to the government cloud set up interactions with the federal intelligence community, and meeting their strict security requirements with a cloud product was an accomplishment.

Amazon barely beat Microsoft in the race to pass government regulations. Others followed, including IBM, which was officially cleared to sell cloud computing services to government bodies in November 2013. By the next year, it had already opened the first of several data centers built specifically for use by government agencies—dedicated data centers, physically and proximally isolated from civilian or enterprise data, being a potential requirement for some agencies.

The additional resources Amazon and Microsoft have invested into their government cloud offerings have likely given them an edge when competing for contracts: Back in early 2013, despite a competitive bid by IBM, which was offering a less-expensive solution, the CIA chose Amazon to build cloud infrastructure because Amazon’s bid offered a “superior technical solution.” In this sense, what Amazon and Microsoft can offer—what they can build for agency clientele due to their extensive investment—gives them an advantage. While the two are the dominant operators in the government cloud, they aren’t alone: between them they own just nine of the 86 FedRAMP-approved offerings in the GSA catalogue. (You might wonder where Google is in all this: despite getting onboard with Federal CIO Kundra’s attempts to launch app marketplace Apps.gov back in 2009 and applying for FISMA approvals, Google’s portion of the government cloud computing market is a “distant third,” according to Gartner analyst Holgate.)

As befits a cybersecurity landscape that continues to evolve, the government’s cloud computing vendor requirements are also changing. FISMA, for example, was amended in 2012 and updated and modernized in 2014. Whenever NIST updates SP 800-53, FedRAMP updates along with it. And security requirements haven’t just evolved—they’ve expanded. FedRAMP launched with Low and Moderate security category clearances, which require providers to satisfy 125 and 326 controls, respectively. The Department of Defense even has its own particular set of security controls, which it began transitioning from its own protocols into a new FedRAMP authorization called FedRAMP Plus, launched in the middle of 2016 (though the DoD, not the JAB, still oversees these protocols).

A year ago, FedRAMP finally released its High security category, requiring CSPs to satisfy 421 security controls, an authorization that at last permitted commercial vendors to sell to agencies looking for particularly sensitive solutions. Unsurprisingly, Amazon, Microsoft and CSRA received approval to operate at the High level in June 2016, and they remain the only three companies to offer products in the FedRAMP catalogue at that level (though more are waiting in the FedRAMP review queue).

Regardless of the currently small number of providers and the extensive vetting for approval, the government cloud is growing. Total “cloud spend” was expected to reach $37.1 billion in 2016, according to an IDC estimate; by 2020, the analysis firm forecasts that spending on cloud services will almost equal budget expenditures on traditional IT.

This year’s forecast of government spending on cloud computing, however, was a shocking decrease from 2016. Initial estimates by IDC anticipate a 16 percent drop in budgeted expenditure for cloud solutions. McCarthy believes this is because new projects are already developed on the cloud, while the easier legacy workloads to transition, like email systems, storage, and websites, have already been moved. The remaining systems are transitioned on a case-by-case basis. But he doesn’t believe this is necessarily an end to growth in government spending on the cloud.

“I consider that a temporary blip,” said McCarthy. “So, 16 percent budgeted less than last year—and in reality, probably less than that. Because the fiscal year ends in September, we won’t hear the real numbers until October or later.”

And then there are the cloud computing applications that haven’t been conceived yet.

“My perspective is that we’re still in the early phases of cloud adoption in general. The ability to take advantage of technologies that are forward-thinking, to take advantage of the cloud, plus the things we don’t know about that will emerge in two or three or five years—the long-term benefits of cloud adoption,” said Greg Souchack, IBM Federal Partner for Cloud and Managed Services.



Over the last few years, another concept has entered the government cloud conversation: open standards. It wasn’t always: After all, it’s easier to retain customers if your solution is proprietary and inflexible. But the tide has shifted in the last few years. Open standards proponents like those at IBM champion the practice as not just unshackling clients from dependency, but embracing the free flow of information.

“Point is, if they move to another cloud provider, they aren’t reinventing the wheel” said Lisa Meyer, Federal Communications Lead at IBM.

“If you build on one cloud provider, you should be able to move between cloud providers and different vendors, shift to where innovation is and, quite frankly, to the right price point,” agreed IBM’s Dash. “That maintains competition.”

But to get more competition, FedRAMP will need to move faster in approving requests. The small-staffed FedRAMP review board has typically taken 12 to 18 months to authorize a cloud service. With 67 CSPs still waiting in the review queue, FedRAMP isn’t ignoring its slow approval process. They’ve worked on (relatively) speeding up the process: In the last round, FedRAMP required all applicants to submit by December 30th, 2016, announced those who’d earned review in late February 2017, and expects to confirm the qualified services within four to five months.

FedRAMP has also floated the idea of a new classification, FedRAMP Tailored, which would ease control requirements on a case-by-case basis for proposed systems that are deemed low-risk and cheap to implement: services like collaboration tools, project management, and open-source development. Once approved for consideration by the Joint Authorization Board, the low-risk services could be given an ATO in just over a month. Despite its low-risk classification, services approved for Tailored use are still granted FedRAMP’s pan-agency seal of approval, allowing the CSP to sell it to other government bodies.

Which is a good thing, since the government cloud’s appetite is increasing. 112 different agencies are currently using FedRAMP-compliant cloud services in a mix of PaaS, IaaS and SaaS products. But over 85 percent of the CSP products awaiting JAB review are SaaS, which will logically build on top of the IaaS and PaaS foundations that agencies have been building on for years.

Back in June, the White House held a technology summit to discuss modernizing the government’s technology—and ahead of it, the administration’s director of strategic initiatives Chris Liddell noted that only “three to four percent” of government operations are currently on the cloud. Liddell, the former chief financial officer at Microsoft, said that the White House’s goal for the summit was to cultivate a “government tech” industry in the private sector. Whether that nudges disparate companies into a coalesced niche more visible to the main cloud market, and to the public at large, is anyone’s guess.

David Lumb is a freelance journalist focusing on tech, culture, and gaming. In the years since graduating from UC Irvine, he’s written dozens of feature stories for Fast Company, Playboy, Engadget and Popular Mechanics. He lives in Brooklyn, New York and strives to find burritos that will meet his Southern California-grown standards.

increment.com



From: Glenn Petersen, 8/2/2017 9:24:50 PM
 
Inside Salesforce’s Quest to Bring Artificial Intelligence to Everyone

Author: Scott Rosenberg
backchannel
08.02.17, 07:00 am




Shubha Nabar, director of data science, at the Salesforce office in San Francisco.
Photography: Jason Henry. Photo direction: Michelle Le.
____________________________



Optimus Prime—the software engine, not the Autobot overlord—was born in a basement under a West Elm furniture store on University Avenue in Palo Alto. Starting two years ago, a band of artificial-intelligence acolytes within Salesforce escaped the towering headquarters with the goal of crazily multiplying the impact of the machine learning models that increasingly shape our digital world—by automating the creation of those models. As shoppers checked out sofas above their heads, they built a system to do just that.

They named it after the Transformers leader because, as one participant recalls, “machine learning is all about transforming data.” Whether the marketing department thought better of it, or the rights weren’t available, the Transformers tie-in didn't make it far out of that basement. Instead, Salesforce licensed the name of a different world-transforming hero—and dubbed its AI program Einstein.

The pop culture myths the company has invoked for its AI effort—the robot leader; the iconic genius—represent the kind of protean powers the technology is predicted to attain by both its most ardent hypesters and its gloomiest critics. Salesforce stands firmly on the hype side of this divide—no one cheers louder, especially not in AI promotion. But the company’s actual AI program is more pragmatic than messianic or apocalyptic.

This past March, Salesforce flipped a switch and made a big chunk of Einstein available to all of its users. Of course it did. Salesforce has always specialized in putting advanced software into everyday businesses' hands by moving it from in-house servers to the cloud. The company’s original mantra was “no software.” Its customers wouldn’t have to purchase and install complex programs and then pay to maintain and upgrade them—Salesforce would take care of all that at its data centers in the cloud. That seems obvious now, but when Salesforce launched in 1999 it sounded as revolutionary as AI does to us today.



Jason Henry
_______________________

Talkin’ revolution has been good for Salesforce. The firm now has 26,000 employees worldwide, and it has pasted its name on the city’s new tallest skyscraper. Its founder, Marc Benioff, is a philanthropist who has put his own name on hospitals and foundations. Despite all this, in its own world of B2B (business-to-business) software, Salesforce still holds onto its scrappy upstart self-image.

So naturally, when the AI trend took off, the people inside the company and the experts they recruited coalesced around an idealistic mission. The team set out to create “AI for everyone”—to make machine learning affordable for companies who’ve been priced out of the market for experts. They promised to “democratize” AI.

That sounds a bit risky! Can we trust the people with such awesome powers? (Cut to chorus of Elon Musk, Stephen Hawking, and Nick Bostrom singing a funeral mass for humanity.) But what Salesforce has in mind isn’t all that subversive. Its Einstein isn’t the guy who overthrew centuries of orthodox physics and enabled the H-bomb; he’s just a cute brainiac who can answer all your questions. Salesforce’s populist slogan is simply about making a new generation of technology accessible to mere mortals. Other, bigger companies—Microsoft, Google, Amazon—may outgun Salesforce in sheer research muscle, but Salesforce promises to put a market advantage into its customers’ hands right now. That begins with the mundane business of ranking lists of sales leads.



“What do I work on next?”

Most of us ask that question many times every day. (And too many of us end up answering, “Check Facebook” or “See if Trump tweeted again!”) To-do apps and personal productivity systems offer some help, but often turn into extra work themselves. What if artificial intelligence answered the “next task” question for you?

That’s what the Salesforce AI team decided to offer as Einstein’s first broadly available, readymade tool. Today Salesforce offers all kinds of cloud-based services for customer service, ecommerce, marketing and more. But at its root, it’s a workaday CRM (customer relationship management) product that salespeople use to manage their leads. Prioritizing these opportunities can get complicated fast and takes up precious time. So the Einstein Intelligence module—a little add-on column at the far right of the basic Salesforce screen—will do it for you, ranking them based on, say, “most likely to close.” For marketers, who also make up a big chunk of Salesforce customers, it can take a big mailing list and sort individual recipients by the likelihood that they’ll open an email.

But wait, what qualifies this as artificial intelligence? Anyone can tell a spreadsheet to sort a list based on different factors. The machine learning difference is simple but profound: The program studies the history of the data and figures out for itself which factors best predict the future—and then it keeps adjusting its model based on new information over time. The more data, the subtler and more powerful the answers, which is why Einstein can work not only from columns of basic Salesforce data but also from information like sales email threads that it parses and images that it reads.
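As a concrete, if toy, illustration of that difference (this is not Salesforce's Einstein code, and the field names are invented), a few lines of scikit-learn can learn from closed opportunities which factors predicted a win and then rank the open pipeline by predicted probability:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical opportunities: deal size ($k), days since last contact, prior purchases
history = np.array([[50, 3, 2], [10, 40, 0], [80, 5, 4],
                    [15, 60, 0], [60, 2, 1], [5, 30, 0]])
closed_won = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(history, closed_won)

open_pipeline = np.array([[70, 4, 3], [12, 55, 0], [40, 10, 1]])
scores = model.predict_proba(open_pipeline)[:, 1]     # P(close) for each open deal
for rank, idx in enumerate(np.argsort(scores)[::-1], start=1):
    print(f"#{rank}: deal {idx} -> {scores[idx]:.0%} likely to close")

# A crude stand-in for the "which factors shaped this score" view described below
print(dict(zip(["deal_size", "days_since_contact", "prior_purchases"],
               model.coef_[0].round(2))))

Retraining on a schedule as new outcomes arrive is what keeps the ranking current, which is the "keeps adjusting its model" part of the paragraph above.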



An Einstein character at the Salesforce office in San Francisco.
Jason Henry
___________________________

Salesforce director of product marketing Ally Witherspoon uses the example of a solar-panel sales outfit using the machine learning tool to discover that a key factor in predicting a customer’s chances of saying “yes” is whether the house’s roof is pitched in a solar-friendly way. Further down the road, a different deep learning-style program could check satellite photos of different properties and automatically tag homes by roof geometry.

This roof info might start out as a major ingredient in how the machine learning program sorts its list—and, in one of Einstein’s nifty design flourishes, users can click to reveal which factors shaped each priority scoring. If users are going to trust the tool, that kind of transparency helps. But what happens when all the sales reps have learned to ignore the folks whose roofs are flat?

As Salesforce President of Technology Srini Tallapragada explains, “At a certain point, a column of data can become useless—it becomes a best practice, so it loses predictive value. The model has to keep changing.”



That is cool. It’s also pretty standard-issue machine learning tech for 2017. But to get it up and running at your company, you’d need to spend a ton of time and effort building a model that understands what’s important in your business, and then cleaning up your data to get good results. That’s the reason your bank, your insurance company, and your doctor aren’t all using AI already, explains Vitaly Gordon, who left LinkedIn in 2014 to become one of Salesforce's machine learning pioneers. Ironically, for a field predicated on the idea of automating human work, “It’s an access to people problem,” Gordon says. These companies probably know more about you than Facebook or Google, but they can’t compete for the data scientists who know how to mine the mountains of information.



Vitaly Gordon, VP of data science and one of the earliest Salesforce AI engineers.
Jason Henry
_____________________

Right now, the demand for these experts is like the run on internet routing gurus in the ’90s or SEO experts in the 2000s—even crazier than the Bay Area housing market. If you’re the likes of Facebook, Google, or Amazon, you can hire the field’s leading lights and put them to work optimizing algorithms and inventing new ways of serving billions of customers with more artificial intelligence. If you’re anyone else, you’re pretty much screwed. You’ll either pay a fortune to a giant consultancy to custom-build a machine learning system, or you’ll watch from the sidelines. What Salesforce is selling is the idea that if your business is in its hands, you’re going to get the benefit of AI without fighting for that talent to customize it for you. It all comes in the box—or would, if there were a box. (Our metaphors need to keep changing, too.)

Salesforce has 150,000 customers, most of whom have customized the system for their own needs and kinds of data. The Salesforce “multi-tenant” approach means that each company’s data is kept separately, and when a customer adds a custom data field, Salesforce doesn’t even know the nature of the information.

To bolt Einstein onto each of these businesses’ unique software configurations, Salesforce’s AI braintrust realized that it needed a new approach. “There aren’t enough data scientists in the world to build all the predictive models we need,” says John Ball, Salesforce Einstein’s general manager. Just as AT&T realized a century ago that if it stuck with manual operators, everyone in the US would end up sitting at a switchboard, Salesforce saw that automation was inevitable.

This is where Optimus Prime comes in. (Inside Salesforce, developers still use that name.) It’s the system that automates the creation of machine learning models for each Salesforce customer so that data scientists don’t have to spend weeks babysitting each new model as it is born and trained to deliver good answers. Optimus Prime is, in a sense, an AI that builds AIs—and a tool whose recursive nature is both beautiful and unsettling.
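In the same spirit, and heavily simplified (this is not Salesforce's actual system, just a sketch of the general idea of automated model building), an "AI that builds AIs" can be approximated by trying several candidate learners on whatever labeled table a tenant provides and keeping whichever cross-validates best:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def auto_build_model(X, y):
    """Fit a few candidate models, score each by cross-validated AUC,
    and return the best one retrained on all the data."""
    candidates = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "gradient_boosting": GradientBoostingClassifier(random_state=0),
    }
    scores = {name: cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
              for name, model in candidates.items()}
    best = max(scores, key=scores.get)
    print(scores)
    return candidates[best].fit(X, y)

# Hypothetical usage with a random stand-in dataset
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)
best_model = auto_build_model(X, y)

Real systems add feature encoding, hyperparameter search and retraining schedules on top, but the basic loop (generate candidates, evaluate, keep the winner) is the part that replaces weeks of a data scientist's babysitting.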



John Ball, general manager for Salesforce Einstein.
Jason Henry
________________________________

“Normally a data scientist studying one problem might take several weeks to a month to come up with a good model for a problem,” explains Shubha Nabar, Salesforce’s director of data science. “With this automated layer, it takes just a couple of hours.”

Today, the fruits of Optimus Prime are chiefly available in neatly packaged features of the Salesforce cloud applications that customers can turn on by checking a box. Next, Salesforce plans to open up the technology by steps. First, users will be able to extend Einstein’s capabilities more widely to more of their customized data. Then, a point-and-click interface will let non-programmers build custom apps for users. “We want to allow an admin—not a data scientist, not even really a developer—to predict any field in any object,” Ball says. Even further down the line, Salesforce intends to expose more of the guts of its machine learning system for external developers to play with. At that point, it will be competing directly with all the AI heavyweights, like Google and Microsoft, to dominate the business market.



Salesforce recently released research that claims AI’s impact through CRM software alone will add over $1 trillion to GDPs around the globe and create 800,000 new jobs. The company has gone all-in on AI since it first announced Einstein in 2016. Benioff said then, “AI is the next platform—all future applications, all future capabilities for all companies will be built on AI.”

Benioff even told analysts on a quarterly earnings call that he uses Einstein at weekly executive meetings to forecast results and settle arguments: “I will literally turn to Einstein in the meeting and say, ‘OK, Einstein, you’ve heard all of this, now what do you think?’ And Einstein will give me the over and under on the quarter and show me where we’re strong and where we’re weak, and sometimes it will point out a specific executive, which it has done in the last three quarters, and say that this executive is somebody who needs specific attention.”

That may sound a little Big Brother-ish, but everyone I spoke with at Salesforce is careful to keep the AI talk friendly. Einstein isn’t after your job—it just wants to help you work smarter. Nonetheless, the AI universe is mined with vague fears about the future of work and questions about bias, privacy, and data integrity. As Salesforce expands its AI projects, it will inevitably tangle with them.

One of Salesforce’s advantages in attracting talent in the field is that, under Benioff’s command, the company has built a strong reputation for having a social conscience. It’s the anti-Uber. That was one of the factors that mattered to Richard Socher, an AI hotshot whose company, MetaMind, was acquired by Salesforce a year ago.

Socher, who now leads Salesforce’s research efforts, specializes in deep learning techniques that help software understand natural language and images. He teaches a wildly popular AI course at Stanford, and co-publishes papers with titles like “Pointer Sentinel Mixture Models” and “Your TL;DR by an AI: A Deep Reinforced Model for Abstractive Summarization.”





Richard Socher, who heads Salesforce AI research.
Jason Henry
_____________________________

With his unruly straw mop of hair, Socher still looks like the grad student he was not that long ago—and he has a youthful enthusiasm for testing the limits of what we think AI can handle.

“I want to be able to have more and more of a real conversation in the future with a system that clearly has tackled a diverse range of intelligent capabilities,” he says. For now, that means building learning routines that can “read” arbitrary paragraphs and then correctly answer questions about them, and exploring new methods of building AI systems that can do more than one thing at a time.

As the technology grows more powerful, Socher says, we can’t put off the conversations about its ethics. “AI is only as good as the data it gets,” he says. “If your data has certain suboptimal human biases in it, your AI will pick it up. And then you automate it, and it makes that same mistake hundreds of millions of times. You need to be very careful.”



Sales work can be painfully hard, and salespeople have to stay positive even when their work gets ugly and desperate. Salesforce has thrived for two decades by embracing that optimism. Where Google’s AI efforts are all about perfecting information access and Facebook’s aim to connect people more intelligently, Salesforce wants to make the world a better place by helping customers smarten up their work days.

At times, Salesforce’s portrait of a future powered by its AI sounds too good to be true. For a down-to-earth assessment of the company’s plans from an outsider, I turned to Pedro Domingos, an AI expert at the University of Washington and author of The Master Algorithm.

Domingos says Salesforce is “a bit of a latecomer” to the field and may find it harder than it expects to integrate AI fully at deeper levels of its products. But he thinks the company is on the right track: At this stage in AI’s evolution, there’s more to be gained from putting basic tools in more people’s hands than from squeezing an extra few percentages of efficiency from an algorithm.

Domingos also says that Salesforce’s relatively tardy entrance to AI—compared with, say, IBM or Google—shouldn’t necessarily hold it back. “They’re still a small player in this space. But other companies came from behind and got pretty far pretty quickly—look at Facebook. Just because you’re a late starter doesn’t mean that in a few years you can’t become a leader.”

Salesforce faces a crowded field in the fight to put AI tools to work on behalf of the warm-handshake crowd. Competitors include giants like Microsoft (with its LinkedIn Sales Navigator) and Oracle, as well as smaller rivals like SugarCRM and startups like Conversica (the latter of which uses AI to automate conversations with incoming sales leads). If Salesforce does succeed in moving to the front rank of today’s crazy corporate AI race, company insiders point to one advantage as its not-so-secret weapon: its well-tended warehouses of consistently labeled and organized customer data.

Those much-competed-for, highly paid data scientists everyone is trying to hire? They spend enormous amounts of time today “preparing data,” which means figuring out how to prep piles of information so that it can be digested by machine learning programs and produce good results. There is a whole lot of grooming and massaging of information that has to take place before most AI systems can even begin to start making predictions.

This represents an ironic breakdown in the ethos of automation that underlies AI. Too often today, Domingos points out, the IBMs and Accentures of the world are just throwing armies of experts at their customers’ problems. “What they do at the end of the day is, they actually have human labor do this stuff,” he says, “That makes money but is not scalable.”

But Salesforce customers have all already entered their data into a single software platform, even if many of them have added their own custom flourishes. “People put everything in there,” says Salesforce technology president Tallapragada. Salesforce doesn’t look at the content of its customers’ data, but it does know how a lot of it is organized. “The Salesforce advantage is the metadata. That lets us automate stuff,” says data science director Nabar.

For all the utopian dreams and Skynet nightmares that today’s advances in artificial intelligence provoke, the winners and losers in this transition will probably be determined by what computer scientists call “data hygiene.” In other words: No matter how smart our programs get in the AI future, tidiness still counts. Clean up after your work, and remember to wash your files before you leave.

Let others conquer Go and solve knotty theorems. Salesforce could achieve victory through neat power.



wired.com
