Cisco.com, for example, is a C-1 application, Thakkar says, as is authentication. "Of all of the apps, probably 10% to 15% are C-1s. There are a few C-5s, but most others fall in C-2 through C-4."
For Cisco.com, "external requests go to either Richardson or Allen," Thakkar says. "The user doesn't know the difference. Once someone hits a given DC, we try to keep them in that DC."
With MVDC the two data centers are "running in an active/active scenario on the Web and app layer, and for the database we have an active standby, so if we see an unplanned outage, the Oracle Observer sees that one is not available and shifts everything to the other, and we achieve zero data loss," Thakkar says. (The sites are linked via a 400Gbps fiber ring.)
Besides the failsafe benefits, MVDC has business benefits, Thakkar says: "We see a lot of improvement on the operation side where we plan outages. So we have www1.cisco.com sitting in Richardson, and www2.cisco.com sitting here in Allen. If you need to do any maintenance on www1, we can go to our global site selector, which is our load balancer, and take www1 offline or suspend it for minutes or hours in order to do the maintenance, and then bring it back online."
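The routing behavior Thakkar describes — spread users across two active sites, keep each user pinned to the data center they first hit, and let an operator suspend one site for maintenance — can be sketched in a few lines. This is a toy illustration, not Cisco's actual Global Site Selector software; the class and method names are invented for the example.

```python
class SiteSelector:
    """Toy sketch of DNS-style global site selection across two active DCs."""

    def __init__(self, sites):
        self.sites = list(sites)    # e.g. ["richardson", "allen"]
        self.suspended = set()      # sites taken offline for maintenance
        self.sessions = {}          # user -> site ("keep them in that DC")

    def available(self):
        return [s for s in self.sites if s not in self.suspended]

    def suspend(self, site):
        """Take a site offline for planned maintenance."""
        self.suspended.add(site)
        # Evict sticky sessions pinned to the suspended site.
        self.sessions = {u: s for u, s in self.sessions.items() if s != site}

    def resume(self, site):
        self.suspended.discard(site)

    def route(self, user):
        site = self.sessions.get(user)
        if site in self.available():
            return site             # stickiness: same DC as last time
        up = self.available()
        if not up:
            raise RuntimeError("no data center available")
        site = up[hash(user) % len(up)]   # crude spread across active sites
        self.sessions[user] = site
        return site

gss = SiteSelector(["richardson", "allen"])
first = gss.route("alice")
gss.suspend(first)                  # planned maintenance on alice's DC
assert gss.route("alice") != first  # user transparently moves to the other DC
gss.resume(first)
```

A real global site selector would add health probes and DNS TTL handling, but the operator workflow is the same shape: suspend, do the maintenance, resume.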
Given that the two data centers are in the same region, Cisco also has a bigger-picture disaster recovery plan involving a remote data center in Raleigh, N.C.
The Raleigh facility serves a dual purpose. It is an AppDev environment where Cisco developers can use UCS hardware and Nexus technology, but in the event of a disaster "we can actually change the service profile in all those UCS clusters from application development to production" and reroute traffic so that "data center would go from AppDev to DR within 24 hours."
It actually takes Raleigh less than a minute to kick in, Thakkar says, so the 24-hour figure is the outside number for all applications to be up and running in RTP.
How often do unplanned outages occur? "We had one or two incidents when we had a fiber go down," Thakkar says, "but we had the signal relayed to another ring, so there was no outage."
From a capacity management standpoint, things have also gone as expected, Cribari says. "We knew that first generation of UCS could only support up to 96GB per server. Now we can go all the way up to 384GB, 512GB, 768GB, and those higher-memory blades allow us to be denser, and we took that into consideration. We also planned the structured cable plant to go from 1Gbps to 10Gbps and eventually to 40Gbps, and we're going to be able to do that without requiring any additional construction."