More signs are pointing to the benefits of the private cloud in the enterprise.
Storage professionals showed increased interest in private-cloud storage this year, reflecting how cautious enterprises remain about keeping data in public clouds, according to a recent survey from TheInfoPro, a service of 451 Research. The trend is altering the cloud-solutions market as demand for private clouds helps storage vendors compete with large public-cloud providers, researchers said.
Private-cloud storage, also called internal-cloud storage, runs on a dedicated infrastructure in the data center, offering the same scalability benefits as public-cloud storage while addressing security and performance concerns.
More cash for clouds
Interviews with 260 storage professionals working for large and midsize enterprises in North America and Europe provided the basis for the findings. For the purposes of the study, large enterprises have at least $1 billion in revenue, while midsize enterprises have annual revenue between $100 million and $999 million.
The survey, which was conducted earlier this year, identified private-cloud storage as the second most likely technology to be added to storage budgets.
A separate study released in August by market research specialist Vanson Bourne similarly found that 60 percent of enterprise IT decision makers have moved or are considering moving applications or workloads either partially (41 percent) or completely (19 percent) to private clouds. Respondents from the United States and United Kingdom cited limitations of the public cloud or benefits of other platforms as reasons for considering the private cloud.
Flash mostly used in hybrid storage
Other findings from TheInfoPro survey, released last month, included:
Storage capacity is more than doubling every two years, exceeding the rate of Moore’s Law.
Solid-state storage, or flash storage, is mainly being used as part of a hybrid solution for enterprises, with 37 percent taking that approach, compared with only 6 percent for all-flash arrays.
Decoupling storage hardware from the software controller presents a promising opportunity for vendors selling software-defined storage, as only 31 percent of respondents viewed coupling as “very important” or “extremely important.”
Architecting for performance is often reactive, as 48 percent of large and midsize enterprises have no specific IOPS (input/output operations per second) targets for applications.
Object storage, which forms the basis of storage solutions offered by service providers, is still mostly seen as a compliance solution, revealing a misconception, researchers said.
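The finding that nearly half of enterprises set no IOPS targets is worth pausing on, since a rough target is simple to estimate. The sketch below shows one hypothetical back-of-the-envelope calculation; the workload numbers and the 2x peak headroom factor are illustrative assumptions, not figures from the survey.

```python
# Minimal sketch: estimating a rough IOPS target for an application.
# All workload numbers here are hypothetical, for illustration only.

def required_iops(transactions_per_sec: float, ios_per_transaction: float,
                  peak_multiplier: float = 2.0) -> float:
    """Estimate peak IOPS: steady-state I/O rate times a peak headroom factor."""
    return transactions_per_sec * ios_per_transaction * peak_multiplier

# e.g. 500 transactions/s, each issuing ~4 I/Os, sized for 2x peaks
target = required_iops(500, 4)
print(target)  # 4000.0
```

Even a crude estimate like this turns performance architecture from reactive into something that can be checked against an array's rated IOPS before purchase.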
“There are two major forces working on storage today — solid-state transforming storage architectures in data centers and software-defined storage transforming provisioning and capacity choices,” said Marco Coulter, vice president of TheInfoPro. “As enterprises move from solution designers to service brokers, the conversations with business partners are evolving from bits and bytes to services and APIs.”
Google currently uses renewable energy to power more than 34 percent of its operations, including its massive data centers. The Google data centers are the focal point of all its energy use. To get an idea of how the data centers work, check out Google’s Inside Our Datacenters site, where you can get a guided tour of a data center, explore one in Street View, or check out Datacenter locations.
A floating data center?
Google is in the process of building several large floating barges. One is at the end of Treasure Island, a former Navy base in the middle of San Francisco Bay. The other is outside of Portland, Maine. These four-story structures are made from cargo containers. In a recent press release Google said, “Although it’s still early days and things may change, we’re exploring using the barge as an interactive space where people can learn about new technology.”
This tentative statement has drawn speculation that these barges could be used as new Google data centers. Google could use water for raw power as well as for cooling. Google’s push for green energy will no doubt be a part of this latest endeavor.
Google is already using wind power for its data center in Hamina, Finland, and it has made several other wind power purchases. In 2012 it signed an agreement with GRDA utilities to supply wind power for its Oklahoma data center. Google has also installed 1.7 MW of solar capacity at its corporate headquarters in Mountain View, Calif.
Most recently Google purchased the entire output of the 240 MW Happy Hereford wind farm outside Amarillo, Texas. The wind farm is expected to start producing energy in late 2014.
In 2013 Google began working with Duke Energy for renewable energy in North Carolina, where it has a data center in Lenoir, N.C. It has invested more than $1 billion in its renewable energy projects. If you want to know more, check out the Google Green website and download the PDF on expanding renewable energy options through renewable energy tariffs.
It sure is good to see a successful company investing in green energy. This is, without a doubt, one of the best ways for us to make a move to green energy.
Google is endeavoring to become synonymous with the word “green.” The Google Green website states, “At Google, we’re striving to power our company with 100 percent renewable energy. In addition to the environmental benefits, we see renewable energy as a business opportunity and continue to invest in accelerating its development. We believe that by helping power more of the world with renewable energy, we’re creating a better future for everyone.”
When it comes to storage, hard drives have more and more competition — but they are likely to stay around for the next few years, sources agree.
But while it has competition, hard drive technology has not been changing rapidly. While the power of computer chips has been doubling every other year for decades, in conformance with Moore’s Law, no such force is at work with hard drives, notes John Rydning, research director at industry analyst firm IDC. Performance has improved mostly by increasing the areal density of the platters (i.e., the number of bits that can be crammed into a square inch), but density improvements are coming more and more slowly, he says.
Two types of hard drives are currently used in office computers: 2.5-inch and 3.5-inch form-factors (referring to the diameter of the spinning platters), explains Rydning. The 2.5-inch units are typically used in portable machines while the 3.5-inch units are typically used in desktop machines.
Currently the highest capacity available in the 3.5-inch form-factor is about four terabytes, while the highest capacity available in the 2.5-inch form-factor is about 1.5 terabytes, he notes. (A terabyte is a thousand gigabytes, or one trillion bytes.)
Another way to increase capacity has been to add platters. Rydning explains that there are ongoing efforts to make the platters thinner, not only to add more platters to each drive but so that hard drives can fit into thin laptops. Currently, mobile machines may have as many as two platters, while desktop machines usually have three but may have as many as five.
The I/O speed of hard drives is determined by their rotational speeds, notes David Hill, principal at an analyst firm called the Mesabi Group. For about the last decade the vendors have been offering the same three speeds: 7,200, 10,000, and 15,000 RPM. Mechanical limitations make it unlikely that they can rotate much faster, he says.
In terms of technology, the main competition for hard drives is solid state drives, using NAND flash memory devices. Such memories are non-volatile, meaning their contents survive after the power is turned off (unlike RAM), and they are faster than hard drives and have no moving parts.
However, solid state memory costs about a dollar per gigabyte while hard drive storage costs about 10 cents per gigabyte, Rydning notes. But vendors are bridging the difference by offering hybrids — hard drives with solid state caches — which store the most frequently used files and accelerate access. The caching algorithm is very important, he says.
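To see why the caching algorithm matters, the sketch below shows one simple policy (least-recently-used eviction) of the kind a hybrid drive's flash tier might approximate; real drive firmware uses proprietary and far more sophisticated heuristics, and the block IDs here are purely illustrative.

```python
# Minimal sketch of an LRU cache policy for a hybrid drive's flash tier.
# Real firmware algorithms are proprietary; this is an illustration only.
from collections import OrderedDict

class FlashCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()  # block id -> data, ordered by recency

    def read(self, block: int, read_from_disk) -> bytes:
        if block in self.cache:               # cache hit: fast flash read
            self.cache.move_to_end(block)
            return self.cache[block]
        data = read_from_disk(block)          # cache miss: slow mechanical read
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used block
        return data

cache = FlashCache(capacity_blocks=2)
cache.read(1, lambda b: b"a")
cache.read(2, lambda b: b"b")
cache.read(1, lambda b: b"a")   # hit: block 1 becomes most recent
cache.read(3, lambda b: b"c")   # over capacity: evicts block 2
print(2 in cache.cache)  # False
```

A poor eviction choice means hot files land on the slow platters, which is exactly why the algorithm, not just the cache size, determines how much of the flash speedup users actually feel.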
If you use flash up front, that will save you money on software costs, adds Hill. “You’ll be able to use storage more efficiently and break even on flash.”
Flash memory is the only real storage option for tablets, as they lack hard drives. (Cell phones and memory sticks also rely on it, of course.) Meanwhile, the users have begun to compare the speed of their office machines to that of their flash-based tablets, and the office machines come off unfavorably, Rydning notes. But even with solid state drives, desktop units will need retooled operating systems and applications before they can be as fast as tablets.
In the future, Rydning expects that the personal computer storage market will fragment. At the low end we can expect to see systems using conventional hard drives that may have large capacities but may not be particularly fast. At the high end we can expect to see solid state drives delivering maximum speed for those who are willing to pay for it. Mid-range systems will use hybrid technologies to selectively boost speed, he predicts.
“If you need terabytes of memory you won’t use flash storage,” Hill says. “But if you need 200 gigabytes you will.”
Hill also notes that users are increasingly interconnecting with web-based archival storage, so that their needs for local storage are no longer open-ended. Also, today’s hard drives with capacities of hundreds of gigabytes are perfectly adequate for many users.
Says Hill: “I used to run out of space, but not anymore. There could be a point of diminishing returns concerning the storage needs of the desktop.”
The growing use of VDI (virtual desktop infrastructure) in enterprises may also cut storage demand, as one copy of the operating system is stored centrally, rather than on each desktop, facilitating upgrades and enhancing security, he adds.
It’s easy to see why flash-based, solid-state drive (SSD) storage has been called the miracle of the 21st century. Compared to its more pervasive counterpart — hard-disk drive (HDD) storage — flash SSD can speed the time it takes to process transaction-heavy workloads, such as Web queries, retail transactions and business analytics, many times over.
Why we love it
Flash SSD makes it possible to get your hands on information fast:
With no moving parts, flash storage eliminates the rotation and seek latency inherent to hard-disk access. Compared to hard disk storage, flash can cut access speed from milliseconds (ms) to microseconds (μs).
Flash drives access data electronically, not electromechanically like disk drives. This makes flash faster and more durable.
Flash storage retrieves data directly from flash memory. Flash can retrieve data in less than 0.1 ms, whereas an HDD requires 3 to 12 ms. SSDs don’t usually require special cooling and can tolerate higher temperatures than HDDs. With no moving parts, there are fewer mechanical failures.
Unfortunately, the high cost of SSD ownership has discouraged widespread adoption — until recently, that is.
Falling prices create new opportunities
In the last few years, the price of flash storage has been coming down (Dell offers an all-flash storage solution for under $5/GB). Lower costs open more opportunities for more businesses to speed information access and stay competitive.
Now you can get hybrid flash arrays that can tier across single-level cell (SLC) and multi-level cell (MLC) SSDs. This blends the attributes of both types of SSDs into a more attractive cost-per-gigabyte solution.
Match the drive type to the workload
Flash arrays are perfect for I/O-intensive workloads, but flash alone may not be the most economical way to store low-priority data. While flash storage excels at fast response time, HDD-based systems — which offer high capacity at the lowest $/GB — are ideal for archiving large data sets.
To drive greater efficiencies into your storage IT, you can create hybrid arrays with high-endurance SLC SSD on tier 1 for frequent writes, high-capacity and lower-priced MLC SSD on tier 2 for frequent reads, and HDDs on tier 3 for low-touch data. This reduces the overall cost of the array while still offering the benefits of high-performance storage.
Compared to an all-flash array, the hybrid approach — storage tiers with dual flash drive types at the high end and spinning disks at the low end — can significantly reduce cost. In addition, you can:
Address both ends of your storage spectrum —from hot to cold data — within a single array
Ensure ultra-fast response time for your high-touch data
Provide persistent storage for your older data
The key is to match the storage array to the workload. Once you understand your workload requirements, such as how much data you access frequently and your rates of reads and writes, you can search for a flexible, scalable solution. Choose a solution that can tier across SLC and MLC SSDs to get the horsepower you need at a price you can afford. Then watch your return on investment grow.
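The tier-matching logic described above can be sketched in a few lines. The access-frequency and write-fraction thresholds below are hypothetical placeholders; real tiering engines track much richer heat statistics over time.

```python
# Minimal sketch of workload-to-tier matching: SLC for write-hot data,
# MLC for read-hot data, HDD for cold data. Thresholds are hypothetical.

def choose_tier(accesses_per_day: float, write_fraction: float) -> str:
    if accesses_per_day < 1:        # low-touch data -> cheap HDD capacity
        return "tier3-hdd"
    if write_fraction > 0.5:        # frequent writes -> high-endurance SLC
        return "tier1-slc"
    return "tier2-mlc"              # frequent reads -> lower-priced MLC

print(choose_tier(100, 0.8))  # tier1-slc
print(choose_tier(100, 0.1))  # tier2-mlc
print(choose_tier(0.2, 0.9))  # tier3-hdd
```

The point of the sketch is simply that once you can quantify access frequency and read/write mix per data set, the tier decision becomes mechanical, and the array can make it continuously.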
Everybody knows that data is continuing to grow at an alarming rate across all industries. Traditionally, the way this problem has been handled was to throw more hardware at it. This model isn’t very cost effective, and it can also introduce added complexity and require new skillsets if unfamiliar hardware is adopted. In addition, most organizations are looking for ways to reduce cost, and that includes keeping staffing to the essentials — meaning that even if your storage is experiencing large amounts of growth, it is unlikely to be met with an increase in IT professionals to support it.
This forces a lot of organizations to look for innovative ways to manage their data growth that don’t involve expensive hardware solutions. This is where Windows Server 2012 has some key features that can help manage data growth more efficiently, by either removing the need for shared storage in some cases or further optimizing existing shared storage to gain additional efficiency.
Shared Nothing Live Migration gives you the flexibility to choose whether you need shared storage for live migration of your virtual machines, eliminating additional complexity.
Offloaded Data Transfer (ODX) allows data transfer to take place directly between storage hardware without passing through the server, which decreases the time taken for data transfers and reduces the workload on the servers, providing much more efficiency.
File-system deduplication allows Hyper-V virtual machines to consume less space: operating system files that are identical across virtual machines are stored only once, which further increases consolidation rates and helps lower overall storage costs.
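The core idea behind deduplication can be illustrated with a toy content-addressed store: identical contents are keyed by hash and stored once. Note that Windows Server 2012 actually deduplicates at the variable-size chunk level, not whole files; this whole-file sketch, with made-up file names and contents, is a simplification to show the principle.

```python
# Minimal sketch of content-based deduplication: identical contents
# (e.g. OS files shared across VMs) are stored once, keyed by hash.
# Windows Server 2012 dedup is chunk-level; this whole-file version
# is a simplified illustration with hypothetical file names.
import hashlib

class DedupStore:
    def __init__(self):
        self.chunks = {}   # content hash -> bytes, stored exactly once
        self.files = {}    # file name -> content hash (a cheap pointer)

    def write(self, name: str, content: bytes) -> None:
        digest = hashlib.sha256(content).hexdigest()
        self.chunks.setdefault(digest, content)  # store only if unseen
        self.files[name] = digest

    def read(self, name: str) -> bytes:
        return self.chunks[self.files[name]]

store = DedupStore()
store.write("vm1/ntoskrnl.exe", b"same kernel bytes")
store.write("vm2/ntoskrnl.exe", b"same kernel bytes")
print(len(store.chunks))  # 1 -- two files, one stored copy
```

With dozens of virtual machines running the same operating system, each additional VM adds little more than pointers, which is where the consolidation savings come from.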
Windows Server 2012 presents a very innovative way to help manage storage growth for environments that have shared storage, no shared storage, or both. A key benefit that comes with the Windows Server platform is that many IT professionals already have knowledge and experience with Windows Server, and there are plenty of publicly available resources such as best-practice guides, whitepapers and data sheets to help them get up to speed. This means existing staff don’t need to make a substantial time investment to learn the new features and can quickly deliver additional efficiency to their companies.