
Terrible twins or triplets?

Paul McClure | Sept. 21, 2010
Cloud computing to drive deduplication demand

Deduplication is a major buzzword as it continues its relentless march into data centres and remote offices across the globe. The global economic climate has pushed enterprises to focus on cost-effective solutions for their business, and deduplication owes its rapid adoption to the fact that it is often the first line of "defence" for freeing up much-needed funds, which can then be reinvested in ongoing maintenance or new IT projects in support of the core business.

The increased use of virtual machine environments and the increased need for data retention, driven by corporate and government-mandated requirements, have been the key drivers of the rapid adoption of deduplication solutions. I call them the terrible twins!

However, there is another major underlying driver that is expected to continue underpinning the growth in deduplication beyond a single budget cycle or fiscal year: cloud computing.

To begin with, let us take a closer look at the evolution of storage. In the early stages, we had direct-attached storage: big machines with big dedicated storage attached. Often, that storage was tape. Then we saw a move to a shared, tape-based approach. This, in turn, spurred the evolution of tape capabilities and formats, notably from LTO-1 to LTO-2 to LTO-3, then LTO-4, and now LTO-5 in the near future. Each step in this evolution roughly doubled tape capacity. However, using tape alone still carries inherent limitations: the serial-access nature of tape, and the logistical delays when tapes are stored off-site in a secure location.
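As a rough illustration of that doubling, the short Python sketch below walks through the generational steps. The native (uncompressed) capacities are taken from the published LTO specifications; compressed capacities are roughly double these figures.

```python
# Approximate native (uncompressed) capacities per LTO generation, in GB,
# illustrating the roughly 2x step per generation described above.
lto_native_gb = {"LTO-1": 100, "LTO-2": 200, "LTO-3": 400,
                 "LTO-4": 800, "LTO-5": 1500}

prev = None
for gen, gb in lto_native_gb.items():
    step = f" ({gb / prev:.1f}x over previous)" if prev else ""
    print(f"{gen}: {gb} GB{step}")
    prev = gb
```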

Solutions like tape library automation and multiplexing can address these issues to a degree, but they only minimise the pain; they cannot eliminate the fundamental problem.

Hence customers today are moving to a disk-to-disk infrastructure, which delivers better backup capabilities and, more importantly, faster recovery. The key milestone in this evolution was storage system vendors realising they could sell customers LESS storage.

So a whole host of storage vendors embedded deduplication into their appliances to dramatically reduce the actual storage capacity required per terabyte of backup/incoming data. They did not make disk-to-disk backup feasible (that capability was already there), but they did make it more financially feasible, and definitely preferable from a logistical, power and cooling perspective.
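To make the mechanism concrete, here is a minimal sketch of block-level deduplication in Python. It is illustrative only, not any particular vendor's implementation: it splits a byte stream into fixed-size chunks, identifies duplicates by content hash, and estimates how much physical capacity is actually consumed.

```python
import hashlib

def dedupe_ratio(data: bytes, chunk_size: int = 4096) -> float:
    """Split a byte stream into fixed-size chunks and count how many are
    unique; the ratio of raw size to stored size approximates the capacity
    saving a block-level deduplicating appliance achieves."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    unique = {hashlib.sha256(c).digest() for c in chunks}
    stored = len(unique) * chunk_size
    return len(data) / stored if stored else 1.0

# Backups of largely unchanged data (e.g. repeated full backups) produce
# many identical chunks, so the ratio climbs well above 1.
full_1 = bytes(4096) * 100                 # 100 identical chunks
full_2 = full_1 + b"new daily delta" * 10  # mostly the same data again
print(f"dedupe ratio: {dedupe_ratio(full_1 + full_2):.1f}x")
```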

Today, we're seeing an increase in disk-based backup, driven in part by deduplication. However, most companies out there have a mixed disk and tape environment. They may be in the process of porting individual departments or datasets from tape to disk, but many still want to retain tape for data retention. Not recovery, but long-term retention that may stretch across years or even decades. So they are effectively operating under a disk-to-disk-to-tape (D2D2T) framework today.
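A D2D2T policy can be thought of as a simple tiering rule: disk serves the recovery window, tape serves long-term retention. The sketch below models that split; the tier names and 30-day window are hypothetical, chosen only for illustration.

```python
from dataclasses import dataclass

@dataclass
class RestorePoint:
    name: str
    age_days: int

def tier_for(rp: RestorePoint, disk_window_days: int = 30) -> str:
    # Recent restore points stay on deduplicated disk for fast recovery;
    # older ones migrate to tape for multi-year retention.
    return "dedup-disk" if rp.age_days <= disk_window_days else "tape-vault"

for rp in (RestorePoint("daily-incremental", 3), RestorePoint("monthly-full", 120)):
    print(f"{rp.name} -> {tier_for(rp)}")
```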

