
Storage tips from heavy-duty users

John Brandon | Oct. 12, 2011
If you think the storage systems in your data center are out of control, imagine having 450 billion objects in your database or having to add 40 terabytes of data each week.

Blakeley says Mazda is putting less and less data on tape -- about 17TB today -- as it continues to virtualize storage.

Overall, the company is moving to a "business continuance model" as opposed to a pure disaster recovery model, he explains. Instead of having backup and offsite storage that would be available to retrieve and restore data in a disaster recovery scenario, "we will instead replicate both live and backed-up data to a colocation facility." In this scenario, Tier 1 applications will be brought online almost immediately in the event of a primary site failure. Other tiers will be restored from backup data that has been replicated to the colocation facility.

Adapting the Techniques

These organizations are a proving ground for handling tremendous amounts of data. StorageIO's Schulz says other companies can mimic some of their processes, including running checksums against files, monitoring disk failures with an alert system for IT staff, incorporating metadata and using replication to keep data always available. The critical decision with massive data, however, is choosing the technology that matches the organization's needs, not the system that is cheapest or just happens to be popular at the moment, he says.
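One of the techniques Schulz mentions, running checksums against files, can be sketched in a few lines. The snippet below is an illustrative example, not taken from any of the companies profiled here: it hashes each file with SHA-256 and compares the results against a previously stored manifest, flagging any file whose contents have silently changed so it can be restored from a replica or backup. The function names and the manifest format are the author's own assumptions.

```python
import hashlib
from pathlib import Path

def file_checksum(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading in 1 MB chunks
    so even very large files don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest):
    """Compare current file checksums against a stored {path: digest}
    manifest. Returns the paths that are missing or no longer match --
    candidates for restore from a replica or backup."""
    return [
        p for p, expected in manifest.items()
        if not Path(p).exists() or file_checksum(p) != expected
    ]
```

In practice a job like this would run on a schedule and feed its mismatch list into the same alerting system used for disk-failure monitoring.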

In the end, the biggest lesson may be that while big data poses many challenges, there are also many avenues to success.

 
