In April last year Cowling and his team began the race to install additional servers in three locations fast enough to keep up with the flow of data migrating from AWS.
They built a high-performance network which allowed them to transfer data at a peak rate of over half a terabit per second. At the same time, they were scrambling to get racks into data centres quickly enough.
"We were bringing up 30 or 40 racks of hardware every day. I knew how many racks could fit in the loading dock at any given time," says Cowling.
On two consecutive days, trucks carrying racks crashed. Despite this, plus network outages and hardware errors, they hit the deadline with a month to spare.
'Optimise the hell out of it'
The resulting system holds more than 90 per cent of customer data (the remainder stays with AWS) and is "three to five times faster against all the latency percentiles we track right now," says Cowling.
It's also extremely reliable: "The system was built with so many safeguards and so much redundancy. We can lose the entire east coast and still serve data because we have an entire copy elsewhere and that's very much in the design. We can lose racks and rows and entire data centres and keep running."
Cowling is now looking to eke out every efficiency and further improve performance for Dropbox's 500 million users.
He's focused on the storage of cold, less frequently used data. He's also exploring how to improve performance and reduce latency for users who are based further from the storage locations. And a slew of new products will put extra demand on the infrastructure.
"There's no end in sight," says Cowling. "It doesn't stop, the game keeps going. It's like giving birth to a child - you've got to raise the child."
Source: Computerworld US