IBM Launches Its Own Shippable Cloud Data Migration Device
Hardened storage device offers 120 TB and uses AES 256-bit encryption
One of the barriers for enterprises storing data in the cloud is data migration, a process that has traditionally been slow and costly, hindered by network limitations. IBM wants to remove this barrier for its customers with a new cloud migration solution designed for moving massive amounts of data to the cloud.
IBM Cloud Mass Data Migration is a shippable storage device offering 120 TB of capacity with AES 256-bit encryption. The device also uses RAID-6 to ensure data integrity and is shock-proof. It is priced at a flat rate that includes overnight round-trip shipping.
The device is about the size of a suitcase and has wheels so it can be moved easily around a data center, said Michael Fork, distinguished engineer and director of cloud infrastructure for IBM Watson and Cloud Platform. Fork said the solution allows customers to migrate 120 TB in seven days.
“When you actually look at the networking aspects of this, for example if you were to transfer 120TB over a 100 Mbps internet connection, that would take 100 or more days,” he said.
Similar options on the market include the AWS Snowball Edge, which was launched last year and offers 100 TB of usable storage capacity. In June, Google introduced Transfer Appliance, which offers up to 480 TB of raw capacity in a 4U form factor or 100 TB in 2U. Google has published a breakdown of how long data transfers can take over different connections.
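The transfer-time math behind these comparisons is straightforward to sketch. The function below is an illustrative back-of-the-envelope calculation, not a tool from IBM or Google; it assumes an ideal, fully saturated link, so real transfers would take longer due to protocol overhead and congestion.

```python
def transfer_days(data_tb: float, link_mbps: float) -> float:
    """Days to move data_tb terabytes over a link_mbps megabit/s link.

    Idealized estimate: assumes decimal units (1 TB = 10^12 bytes)
    and a link running at full line rate with no overhead.
    """
    bits = data_tb * 1e12 * 8            # terabytes -> bits
    seconds = bits / (link_mbps * 1e6)   # bits / (bits per second)
    return seconds / 86_400              # seconds -> days

# 120 TB over a 100 Mbps connection, as in IBM's example:
print(round(transfer_days(120, 100)))   # ~111 days
```

This lines up with Fork's "100 or more days" figure, and shows why even a ten-fold faster 1 Gbps link would still need around 11 days of uninterrupted transfer.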
“Previously we supported two main transfer methods. One was an IBM solution called IBM Data Transfer service, and this allows you to ship us a USB hard drive or CD/DVD, and so you could migrate in up to 10 TBs of data pretty easily using that service,” Fork said. “The other solution IBM supports is through IBM Aspera, a network-based transfer.”
IBM Cloud Mass Data Migration is designed for any customer that has large amounts of data to migrate to IBM Cloud, Fork said, pointing to customers who move large SAP datasets or datasets for use with IBM Watson or other cognitive services.
“VMware customers are bringing to IBM Cloud large amounts of data, VMDKs, machine images, they need a fast and efficient way to move large amounts of those,” he said.
Beyond Lights-Out: Future Data Centers Will Be Human-Free
A new generation of data centers will be optimized for extreme efficiency, not for human access or comfort.
Critical Thinking, a weekly column on innovation in data center infrastructure. More about the column and the author here.
The idea of a “lights-out” data center is not new, but it is evolving. Operators such as Hewlett Packard Enterprise and AOL have been long-term proponents of remote monitoring and management to reduce, or entirely replace, the need for dedicated on-site staff. The most well-known current advocate is probably colocation provider EdgeConneX, which has integrated a lights-out approach into the fabric of its business.
However, despite the efficiency benefits, lights-out, or “dark,” sites are still viewed with skepticism in some quarters; not having staff readily on hand to deal with outages is deemed just too high-risk. Data center certification body Uptime Institute, for example, recommends that one to two qualified staff be on-site at all times to support the safe operation of a Tier III or IV facility.
But while lights-out may be a niche option now, developments in remote monitoring, analytics, AI, and robotics could eventually see it taken much further.
These technologies, combined with the elimination of all concessions to human comfort, will enable ever more efficient and available data centers, some experts argue. Technology analyst firm 451 Research recently coined the phrase “Datacenter as a Machine” (subscription required) to define unstaffed facilities that are primarily designed, built, and operated as units of IT rather than buildings. “As data centers become more complex, with tighter software-controlled integration between components, they will increasingly be viewed as complex machines rather than real estate,” the analyst group argues.
A facility designed and optimized exclusively for IT, rather than human operators, could enjoy a range of advantages over more conventional sites:
Improved cooling efficiency: There is good evidence that facilities could be operated at higher temperatures and humidity without impacting the reliability and performance of IT equipment. Progressive operators have made efforts to move into the upper reaches of ASHRAE’s recommended, or even allowable, temperature ranges. But the approach isn’t more pervasive due in part to its impact on human comfort. IT equipment may be functional at 80°F and up, but that’s not a pleasant working environment for staff. Other highly efficient forms of cooling could make things even more uncomfortable. For example, close-coupled cooling technologies, such as direct liquid immersion, capture more than 90 percent of the IT heat load in a dielectric fluid but make no concession for the human operator. For the technology to become widely deployed in conventional sites, additional, inefficient perimeter cooling would be required in some locations just to keep the operators cool.
Better capacity management: Everything from rack height to access-aisle width is designed to make it easier for staff to install and maintain equipment rather than to optimize for efficiency. But if this space requirement was eliminated, equipment (power and cooling permitting) could be fitted into a much smaller footprint with, for example, potentially much higher, robot-accessible racks.
Reduced downtime and improved safety: According to a 2016 study by the Ponemon Institute, human error was the second-highest cause (behind power chain failures) of data center downtime. Electrocution – via arc-flash or other causes – also remains a real and present threat without the correct safety precautions. Use of hypoxic fire suppression – lowering oxygen levels – also has benefits for fire safety but again makes for a difficult working environment. A facility that was essentially off-limits to all but periodic or emergency access by qualified specialists could reduce the potential for human error and minimize the risk of injury to inexperienced staff.
But if on-site staff were effectively designed out of facilities, who or what would replace them? The kind of pervasive remote monitoring platforms already used at lights-out sites -- such as EdgeConneX’s edgeOS -- would likely play an instrumental role. Emerging tools, such as data center management as a service (DMaaS), which is effectively cloud-based data center infrastructure management, or DCIM, software – could also enable suppliers to take remote control (including predictive maintenance) of specific equipment or even an entire site. Eventual integration with AI/machine learning could also lead to more IT and facilities tasks being automated and self-regulated. Robotics is also likely to play a greater role in future data center management. Indeed, if facilities are designed to optimize space, then so-called dexterous robots may be the only way to access some parts of the site.
But despite the potential, a number of impediments will need to be overcome before unstaffed data centers become widely adopted. The biggest of these is obviously the perception that such designs would introduce additional risk. As such, early adopters would probably be limited to companies that are already comfortable with some form of lights-out approach. Facilitating technologies, such as DMaaS, AI-driven DCIM, and advanced robotics, are also still very nascent.
But there are still good reasons to think that, in specific use cases, unstaffed sites will eventually become the norm. For example, new micro-data center form factors to support edge computing are expected to proliferate in the next five to ten years and are likely to be monitored remotely and only require periodic visits from specialist maintenance staff.
The prognosis doesn’t necessarily have to be all bad for facilities staff. To be sure, there will be fewer in-house positions in the future, but specialist third-party facilities management services providers -- capable of emergency or periodic visits -- could expand headcount to meet the expected growth in new colocation and cloud capacity.
Ironic as it may sound, the future looks rather bright for the next generation of lights-out data centers.