BLOG: Big data demands more than commodity hardware

Dexter Henderson

In the era of cheap and powerful commodity servers, it may seem odd to suggest that big data requires moving on from the PC world to new computing platforms better suited to its specific needs.

Dexter Henderson is Vice President and Business Line Executive for Power Systems at IBM. His essay in this week's New Tech Forum explains why he believes that commodity servers aren't up to snuff for big data, and enterprise-grade servers are the way forward. — Paul Venezia

Embracing big data means leaving old technology behind
As organizations struggle to keep pace with the changes wrought by big data, choosing the right server technology becomes ever more important. Big data is driving an evolution toward complex analytics and cognitive computing, which demand architectures that can handle multiple tasks simultaneously, efficiently, and affordably. Servers built from the ground up with big data in mind are better equipped to handle these new workloads than servers not optimized for the task.

The Internet of things, where intelligent devices equipped with sensors collect and transmit gobs of data, is forcing enterprises to chart their course from big data to big insights to gain competitive advantage. This journey includes supporting terabytes of streaming data sets from a variety of devices — and analyzing those oceans of data in the context of domain knowledge in real time.

The four big data activities of gather, connect, reason, and adapt will be the keys to driving business value in the next decade, as organizations recognize the strategic importance of big data insights.

As the big data trend accelerates, complex analytics workloads will become increasingly common in both large and midsize businesses. In a survey conducted by Gabriel Consulting Group, big data users were asked what types of workloads they were running. Not surprisingly, MapReduce workloads ranked dead last, with enterprise analytics, complex event processing, visualization, and data mining all ranking higher.

A number of organizations have been trying to address emerging big data workloads with static data analysis models and multimachine server architectures that deliver low throughput and high latency. What organizations really need is a new software and hardware environment that accounts for the nature and scale of these workloads and supports high compute intensity, data parallelism, data pipelining, and real-time analytic processing of data in motion.
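To make data pipelining and processing of data in motion concrete, here is a minimal Python sketch under stated assumptions: the sensor_stream, valid_readings, and rolling_anomalies stages and their thresholds are illustrative inventions, not part of any IBM product. The idea is that each reading is filtered and checked against a rolling per-device baseline as it flows through the pipeline, rather than after it has landed in storage.

import random
import statistics
from collections import deque

# Hypothetical source of streaming sensor readings; stands in for data
# arriving continuously from instrumented devices.
def sensor_stream(n_readings=1000):
    for i in range(n_readings):
        yield {"device": f"dev-{i % 10}", "temp_c": random.gauss(40, 5)}

# Pipeline stage 1: discard obviously invalid readings as they pass by.
def valid_readings(stream):
    for reading in stream:
        if -40 <= reading["temp_c"] <= 125:
            yield reading

# Pipeline stage 2: keep a rolling window per device and flag readings
# that deviate sharply from the recent baseline, while the data is in motion.
def rolling_anomalies(stream, window=50, threshold=3.0):
    windows = {}
    for reading in stream:
        w = windows.setdefault(reading["device"], deque(maxlen=window))
        w.append(reading["temp_c"])
        if len(w) >= 10:
            mean = statistics.mean(w)
            stdev = statistics.pstdev(w)
            if stdev and abs(reading["temp_c"] - mean) > threshold * stdev:
                yield reading["device"], reading["temp_c"], mean

if __name__ == "__main__":
    pipeline = rolling_anomalies(valid_readings(sensor_stream()))
    for device, temp, mean in pipeline:
        print(f"{device}: {temp:.1f} C deviates from rolling mean {mean:.1f} C")

Because each stage is a generator, readings move through the whole pipeline one at a time, which is the essence of pipelined, in-motion processing rather than batch analysis of data at rest.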

Enterprise analytics and big data workloads are becoming increasingly compute-intensive, sharing common ground with scientific and technical computing applications. The amount of data and processing involved means these workloads must run as highly parallel code across clusters of small systems to complete at a reasonable cost and within a reasonable timeframe.
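As a hedged illustration of that pattern (not a description of any particular product), the following Python sketch mimics on a single machine what a cluster does across nodes: a large dataset is split into chunks, each worker process summarizes its own chunk independently, and the partial results are combined at the end. The make_chunks and summarize helpers are assumptions made for the example.

from multiprocessing import Pool

# Hypothetical partition of a large dataset; on a real cluster each chunk
# would live on a different node rather than in one process's memory.
def make_chunks(n_chunks=8, chunk_size=100000):
    return [list(range(i * chunk_size, (i + 1) * chunk_size))
            for i in range(n_chunks)]

# Map step: each worker summarizes its own chunk independently.
def summarize(chunk):
    return sum(chunk), len(chunk)

if __name__ == "__main__":
    chunks = make_chunks()
    with Pool() as pool:
        partials = pool.map(summarize, chunks)  # data-parallel map
    total = sum(t for t, _ in partials)         # reduce step
    count = sum(c for _, c in partials)
    print(f"mean over {count} values: {total / count:.1f}")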
