
Google's latest big-data tool, Mesa, aims for speed

Joab Jackson | Aug. 11, 2014
Google has found a way to stretch a data warehouse across multiple data centers, using an architecture its engineers developed that could pave the way for much larger, more reliable and more responsive cloud-based analysis systems.

Mesa is the latest in a series of novel data-processing applications and architectures that Google has developed to serve its business.

Some Google innovations have gone on to provide the foundations for widely used applications. For example, Google's papers on MapReduce and the Google File System led to the development of Apache Hadoop, while BigTable inspired Apache HBase.

Other Google technologies developed for internal use have subsequently been offered as cloud services from the company itself. Google's Dremel ad-hoc query system for read-only data went on to become a foundation of the company's BigQuery service.

Future commercial prospects for Mesa may be somewhat limited, however, said Curt Monash, head of database research firm Monash Research.

Not many organizations today need sub-second response times against a body of data as large and complex as Google's, Monash said in an email. He also noted that MapReduce is not the most efficient way to handle relational queries, a limitation that has led to a number of SQL-on-Hadoop technologies, such as Hive, Impala and Shark.

Also, typical enterprises should look at commercial or open-source options for keeping their data warehouses consistent across data centers before adopting what Google has developed, Monash said. Most new data stores being developed today have some form of multiversion concurrency control (MVCC), he said.
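The MVCC idea Monash refers to can be illustrated with a toy sketch: writers append timestamped versions of a value instead of overwriting it, so a reader sees a consistent snapshot as of the moment its transaction began. This is a minimal illustration of the general technique, not Mesa's design; the class and method names below are invented for the example.

```python
# Toy multiversion concurrency control (MVCC) store. Writers never overwrite;
# readers query against a snapshot timestamp and see only versions that
# existed when their transaction began.
import itertools

class MVCCStore:
    def __init__(self):
        self._versions = {}              # key -> list of (timestamp, value)
        self._clock = itertools.count(1) # monotonically increasing timestamps

    def begin(self):
        """Start a transaction; its snapshot is the current timestamp."""
        return next(self._clock)

    def write(self, key, value):
        """Append a new version of key rather than overwriting the old one."""
        ts = next(self._clock)
        self._versions.setdefault(key, []).append((ts, value))
        return ts

    def read(self, key, snapshot_ts):
        """Return the newest version visible at snapshot_ts, or None."""
        visible = [v for ts, v in self._versions.get(key, []) if ts <= snapshot_ts]
        return visible[-1] if visible else None

store = MVCCStore()
store.write("balance", 100)
snap = store.begin()             # reader takes a snapshot
store.write("balance", 250)      # concurrent writer adds a new version
print(store.read("balance", snap))            # reader still sees 100
print(store.read("balance", store.begin()))   # a fresh snapshot sees 250
```

Because old versions remain available, readers never block writers, which is one reason MVCC suits geo-replicated stores where queries and updates arrive concurrently at different data centers.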

