2017 big data trends

Jonah Kim, Product Manager, APAC, Tableau

3. The convergence of IoT, cloud, and big data creates new opportunities
It seems that everything in 2017 will have a sensor that sends information back to the mothership. In smart cities and nations such as Singapore, analysts expect IoT products to remain a prominent feature. A year ago, Frost & Sullivan also projected that the number of connected devices would reach 50 billion units globally within five years; that works out to roughly seven connected devices for every person on the planet.

Across the region, IoT is generating massive volumes of structured and unstructured data, and an increasing share of this data is being deployed on cloud services. The data is often heterogeneous and lives across multiple relational and non-relational systems, from Hadoop clusters to NoSQL databases. While innovations in storage and managed services have sped up the capture process, accessing and understanding the data itself still pose a significant last-mile challenge.

As a result, demand is growing for analytical tools that seamlessly connect to and combine a wide variety of cloud-hosted data sources. Such tools enable businesses to explore and visualise any type of data stored anywhere, helping them discover hidden opportunities in their IoT investments.
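As an illustration of connecting to and combining heterogeneous sources, the minimal Python sketch below joins device metadata from a cloud-hosted relational database with sensor readings from a NoSQL store. The hostnames, credentials, table, and collection names are hypothetical, and a production setup would typically rely on an analytics platform's native connectors rather than hand-written glue code.

    # Combine a cloud-hosted relational source with a NoSQL source for analysis.
    # All hostnames, credentials, table and collection names are hypothetical.
    import pandas as pd
    from pymongo import MongoClient
    from sqlalchemy import create_engine

    # Relational side: device metadata in a managed PostgreSQL instance
    engine = create_engine("postgresql://user:password@cloud-sql.example.com/iot")
    devices = pd.read_sql("SELECT device_id, site, model FROM devices", engine)

    # Non-relational side: raw sensor readings in a managed MongoDB cluster
    client = MongoClient("mongodb://cloud-nosql.example.com:27017")
    readings = pd.DataFrame(list(client.iot.readings.find({}, {"_id": 0})))

    # Join the two sources on a shared key so they can be explored together
    combined = devices.merge(readings, on="device_id", how="inner")
    print(combined.head())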

4. Self-service data prep becomes mainstream 
Making Hadoop data accessible to business users is one of the biggest challenges of our time. The rise of self-service analytics platforms has improved this journey. At the beginning of 2016, IDC predicted that spending on self-service visual discovery and data-preparation tools would grow more than twice as fast as spending on traditional IT-controlled tools for similar functionality through 2020.

Now, business users want to further reduce the time and complexity of preparing data for analysis, which is especially important when dealing with a variety of data types and formats.

Agile self-service data-prep tools not only allow Hadoop data to be prepped at the source but also make the data available as snapshots for faster and easier exploration.
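A minimal sketch of that prep-at-source pattern, assuming a PyHive connection to a hypothetical cluster with hypothetical table and column names: query Hive so the heavy filtering happens on the cluster, apply lightweight cleanup, and write a local columnar snapshot for fast repeat exploration.

    # Prep Hadoop data at the source, then snapshot it for faster exploration.
    # Host, database, table, and column names are hypothetical.
    import pandas as pd
    from pyhive import hive

    conn = hive.Connection(host="hadoop-edge.example.com", port=10000, database="sales")

    # Push the heavy filtering down to the cluster instead of extracting everything
    raw = pd.read_sql(
        "SELECT order_id, region, amount, order_ts FROM orders WHERE year = 2016",
        conn,
    )
    # Hive may prefix result columns with the table name; normalise them
    raw.columns = [c.split(".")[-1] for c in raw.columns]

    # Typical self-service prep steps: fix types, tidy labels, drop incomplete rows
    raw["order_ts"] = pd.to_datetime(raw["order_ts"])
    raw["region"] = raw["region"].str.strip().str.title()
    clean = raw.dropna(subset=["amount"])

    # Materialise a snapshot so later exploration doesn't hit the cluster again
    clean.to_parquet("orders_2016_snapshot.parquet", index=False)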

5. Big data grows up: Hadoop adds to enterprise standards
We're seeing a growing trend of Hadoop becoming a core part of the enterprise IT landscape. And in 2017, we'll see more investments in the security and governance components surrounding enterprise systems. Apache Sentry provides a system for enforcing fine-grained, role-based authorisation to data and metadata stored on a Hadoop cluster. Apache Atlas, created as part of the Data Governance Initiative, empowers organisations to apply consistent data classification across the data ecosystem. Apache Ranger provides centralised security administration for Hadoop.
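To give a flavour of what fine-grained, role-based authorisation looks like in practice, the sketch below issues Sentry-style GRANT statements through HiveServer2 with PyHive. It assumes a cluster with the Sentry binding enabled and an admin account; the host, database, role, and group names are hypothetical.

    # Sentry-style role-based authorisation for Hive, issued through HiveServer2.
    # Assumes Sentry is enabled on the cluster; names below are hypothetical.
    from pyhive import hive

    conn = hive.Connection(host="hadoop-edge.example.com", port=10000,
                           username="sentry_admin")
    cursor = conn.cursor()

    # Create a role, grant it read access to one database, and assign it to a group
    cursor.execute("CREATE ROLE analyst_role")
    cursor.execute("GRANT SELECT ON DATABASE sales TO ROLE analyst_role")
    cursor.execute("GRANT ROLE analyst_role TO GROUP analysts")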

These capabilities are moving to the forefront of emerging big-data technologies, thereby eliminating yet another barrier to enterprise adoption.

6. The rise of metadata catalogs helps people find analysis-worthy big data
For a long time, companies threw away data because they had too much to process. With Hadoop, they can process lots of data, but that data generally isn't organised in a way that makes it easy to find.

Metadata catalogs can help users discover and understand relevant data worth analysing using self-service tools. Companies like Alation and Waterline are filling this gap, using machine learning to automate the work of finding data in Hadoop. They catalog files using tags, uncover relationships between data assets, and even provide query suggestions via searchable UIs. This helps both data consumers and data stewards reduce the time it takes to find, trust, and accurately query the data. In 2017, we'll see more awareness and demand for self-service discovery, which will grow as a natural extension of self-service analytics.
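The toy sketch below illustrates only the core cataloguing idea: scan a directory of extracts, record columns and simple tags, and support keyword search over the result. Products such as Alation and Waterline go much further (machine learning, lineage, query suggestions); the paths and tagging rules here are hypothetical.

    # Toy metadata catalog: tag CSV extracts by column names and search them.
    # Tagging rules and paths are hypothetical; real catalogs use machine learning.
    import csv
    from pathlib import Path

    TAG_RULES = {
        "customer": ["email", "customer_id", "name"],
        "finance": ["amount", "price", "revenue"],
        "time": ["date", "timestamp", "ts"],
    }

    def catalog_directory(root):
        """Build {file path: {"columns": [...], "tags": [...]}} for CSVs under root."""
        entries = {}
        for path in Path(root).rglob("*.csv"):
            with open(path, newline="") as f:
                columns = next(csv.reader(f), [])
            tags = sorted(
                tag for tag, hints in TAG_RULES.items()
                if any(hint in col.lower() for col in columns for hint in hints)
            )
            entries[str(path)] = {"columns": columns, "tags": tags}
        return entries

    def search(catalog, keyword):
        """Return files whose tags or column names mention the keyword."""
        keyword = keyword.lower()
        return [
            path for path, meta in catalog.items()
            if any(keyword in t for t in meta["tags"])
            or any(keyword in c.lower() for c in meta["columns"])
        ]

    catalog = catalog_directory("/data/extracts")  # hypothetical landing directory
    print(search(catalog, "customer"))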
