

From ‘Compulsive Connectivity’ to the Internet of Things

Zafar Anjum | April 11, 2014
In this interview, Karthik Ramarao, NetApp's APAC CTO, discusses the changing storage needs of businesses in Asia.

Obviously, SATA drives cost much less than flash drives, so that's certainly one form of virtualization. But if you take virtualization by definition, it is to provide the user a view that completely masks the underlying complexity of the hardware, which is what we do in Clustered Data ONTAP. The underlying complexity lies in different boxes, all connected through various mechanisms. But as a user, I don't care; for me, it's completely virtualized. So to answer your question, yes. But in the storage domain, virtualization takes on a different context.
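To make that idea concrete, here is a minimal sketch of a virtualization layer that presents a single logical volume while hiding which physical device, flash or SATA, actually holds each block. This is illustrative only, not Clustered Data ONTAP code; every class and method name below is hypothetical.

    # Illustrative sketch: one logical namespace over heterogeneous devices.
    class BackingDevice:
        def __init__(self, name):
            self.name = name
            self.blocks = {}  # block_id -> data stored on this physical box

        def read(self, block_id):
            return self.blocks.get(block_id)

        def write(self, block_id, data):
            self.blocks[block_id] = data

    class VirtualVolume:
        """Single namespace that masks which physical box holds each block."""
        def __init__(self, devices):
            self.devices = devices
            self.placement = {}  # block_id -> device; hidden from the user

        def write(self, block_id, data):
            # Placement policy (here: simple hashing) is an internal detail.
            device = self.devices[hash(block_id) % len(self.devices)]
            self.placement[block_id] = device
            device.write(block_id, data)

        def read(self, block_id):
            return self.placement[block_id].read(block_id)

    # The user addresses one volume; whether a block lands on the flash
    # device or the cheaper SATA device is invisible to them.
    volume = VirtualVolume([BackingDevice("flash"), BackingDevice("sata")])
    volume.write("blk-1", b"payload")
    print(volume.read("blk-1"))  # b'payload'

The point matches the interview: the user sees one virtualized view, and the cost and performance trade-offs of the backing hardware are handled beneath it.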

Big data or large data

Is there any connection between NetApp's storage technologies and big data usage?

Big data, to us, is a lot more than the traditional definition of big data. It's a very fluid subject, depending on what you read, infer and interpret. Big data is largely seen as a mechanism for working with different types of structured and unstructured data.

For NetApp, we treat big data alongside large data as well. What I mean by that is, there are some large-data aspects which are not necessarily within the purview of big data. It could be things like seismic applications, which constantly ingest a lot of data, or media and video analytics applications fed by cameras. Those grow into very large data sets of a common, or similar, type of data. I call that large data. It normally does not fall under the textbook definition of big data, which by definition is supposed to have different velocity and different variety. That is the analyst's (IDC) definition of it: it has different velocity, which means it comes from different places at different speeds, and it has different variety, which means it comes in text form, graphical form, machine-generated form and so on. Big data by that definition, for us, is big data as well as large data, and we work in both of these contexts very effectively.
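As a rough illustration of that distinction (my sketch, not a NetApp definition), a workload could be classed as "big data" when it shows variety across formats and sources, and as "large data" when it is homogeneous but very large. The size threshold and field names below are hypothetical, and variety stands in as a proxy for the velocity dimension as well.

    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        formats: set      # e.g. {"text", "graphical", "machine-generated"}
        sources: set      # e.g. {"sensors", "logs", "cameras"}
        total_bytes: int

    LARGE_THRESHOLD = 100 * 1024**4  # hypothetical cut-off: 100 TiB

    def classify(w: Workload) -> str:
        # "Variety": multiple formats or sources, per the IDC-style definition above.
        if len(w.formats) > 1 or len(w.sources) > 1:
            return "big data"
        # Homogeneous but very large: seismic or video archives, for example.
        if w.total_bytes >= LARGE_THRESHOLD:
            return "large data"
        return "ordinary data"

    print(classify(Workload("seismic survey", {"binary"}, {"sensors"}, 500 * 1024**4)))
    # -> large data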

For big data, we work a lot with the software vendors who give us the ability to manage this data, whether through seismic solutions, media analytics solutions, or business analytics solutions like Hadoop and others. And to help those software applications manage the big data, we provide underlying storage that has been fine-tuned for those applications.

NetApp is a multi-product-line organisation today. We acquired a company called Engenio a few years ago, and the outcome of that acquisition is a product line called the E-Series, which offers extremely dense, extremely high-performance solutions. That plays a lot into the big data space for us; it becomes a building block for all these application vendors. Whether it's Hadoop, seismic applications, or video analytics applications, they are built to leverage the performance the E-Series delivers. So we play a very big role in big data. Large data is an equally important aspect for us; healthcare, for example, is constantly working on the ability to store and retrieve large amounts of image data.

 

