Informatica Intros Its Planned Intelligent Data Platform
Hoping to improve data integration, software vendor Informatica has announced its Intelligent Data Platform. The company said the next-generation platform is designed to provide "the right data at the right time."
The platform was unveiled at Informatica World 2014, currently taking place in Las Vegas. According to the company's Web site, this platform "will deliver data that can organize itself and provide intelligent guidance on how it's used."
The platform is not yet a completed product, but is presented as a vision that is being developed as "a combination of existing Informatica platform capabilities and new product initiatives, some of which are in early beta testing." The company said some platform capabilities will be available as packaged offerings and reference architecture by the end of this year.
There are three components to the platform. A data intelligence layer delivers self-service data for businesses, collecting metadata, semantic data and usage information. It also analyzes that metadata and makes recommendations that help users decide which data to use. A second component, the data infrastructure, offers clean and connected data. A third, a data engine such as Informatica's Vibe, aggregates and manages data.
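Informatica has not published APIs for these layers, but the intelligence layer's idea — collect metadata and usage information, then recommend data — can be illustrated with a hypothetical sketch. All class and field names here are invented for illustration:

```python
from collections import Counter

class DataCatalog:
    """Hypothetical sketch of a metadata-driven recommendation layer.

    Tracks semantic tags and usage counts per dataset, then recommends
    the most-used datasets that match a requested tag.
    """

    def __init__(self):
        self.tags = {}          # dataset name -> set of semantic tags
        self.usage = Counter()  # dataset name -> access count

    def register(self, name, tags):
        self.tags[name] = set(tags)

    def record_access(self, name):
        self.usage[name] += 1

    def recommend(self, tag, limit=3):
        # Rank matching datasets by how often users have accessed them.
        candidates = [n for n, t in self.tags.items() if tag in t]
        return sorted(candidates, key=lambda n: -self.usage[n])[:limit]

catalog = DataCatalog()
catalog.register("orders_2014", ["sales", "orders"])
catalog.register("leads_q1", ["sales", "crm"])
for _ in range(5):
    catalog.record_access("orders_2014")
catalog.record_access("leads_q1")
print(catalog.recommend("sales"))  # most-used "sales" datasets first
```

The point of the sketch is the feedback loop: usage information flows back into the catalog, so recommendations improve as the platform is used.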
The company laid out three use cases for its platform. In the self-service use case, which it calls Springbok, a user can find useful data without the extensive use of IT -- which is a commonly intended scenario for many business intelligence systems these days. IT, in this use case, is assured that the data is clean and connected, and the system will guide users through ways to find and use the data they need.
In the data-centric use case, called Secure@Source, an application built on the platform complements existing security operations, but it also finds all instances of sensitive data, visualizes the risk and maps the risk so the data can be secured. Functions can include creation of a "data risk heat map" and real-time monitoring of data usage patterns, based on a data risk index that is tied to compliance regulations and data policies.
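Informatica has not detailed how the data risk index is computed, but the heat-map idea — score each data store on sensitivity and exposure, then bucket the scores into color levels — can be sketched roughly. The weights, fields and thresholds below are assumptions, not Informatica's formula:

```python
# Hypothetical sketch of a "data risk heat map" scoring scheme.
# Weights and thresholds are invented for illustration.
def risk_index(sensitivity, exposure, compliance_violations):
    """Combine factors into a 0-100 risk index (weights are assumptions)."""
    score = (40 * sensitivity                      # how sensitive the data is (0-1)
             + 30 * exposure                       # how widely it is accessible (0-1)
             + 30 * min(compliance_violations / 5, 1.0))  # policy violations, capped
    return round(score)

def heat_level(score):
    """Bucket a risk index into heat-map colors."""
    if score >= 70:
        return "red"
    if score >= 40:
        return "amber"
    return "green"

stores = {
    "hr_payroll": dict(sensitivity=0.9, exposure=0.8, compliance_violations=4),
    "web_logs":   dict(sensitivity=0.2, exposure=0.5, compliance_violations=0),
}
for name, factors in stores.items():
    score = risk_index(**factors)
    print(f"{name}: {score} ({heat_level(score)})")
```

A payroll store with broad access lands in the red bucket, while low-sensitivity logs stay green; monitoring usage patterns in real time would amount to recomputing the exposure factor as access changes.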
A "managed data lake" use case is when a persistent layer collects, refines and shares data that data analysts need. Big data could be on-boarded quickly, the company said, because of automatic discovering of the structure and formatting. A data refinery would clean and refine the data through stages.
Charles King, an analyst with industry research firm Pund-IT, told us that Informatica's Intelligent Data Platform "is being positioned as a complementary service" for existing capabilities, rather than as a replacement.
King noted that others in the industry have similarly talked about a "data lake" concept, where all the data -- structured, unstructured, or semi-structured -- is stored in a common pool.
He added that, especially in companies that allow "bring your own device" access to data, "the location of where the data is stored can be fragmented," and it's sometimes accessible by people who don't have the clearance. Informatica's idea, he said, is to "create an overarching solution to get a handle on where the data is."