BLOG: Big data demands more than commodity hardware

Dexter Henderson

Some believe that high levels of software customization in a distributed server environment are the answer to the problem. Instead, this approach often leads to wasted and unused resources, built-in inefficiencies, energy and floor-space concerns, security issues, high software license costs, and maintenance nightmares.

Enterprise-grade servers that are well suited for modern big data analytics workloads have:

  • Higher compute intensity (high ratio of operations to I/O)
  • Increased parallel processing capabilities
  • Increased VMs per core
  • Advanced virtualization capabilities
  • Modular systems design
  • Elastic scaling capacity
  • Security and compliance enhancements, including hardware-assisted encryption
  • Increased memory and processor utilization

Superior, enterprise-grade servers also offer a built-in resiliency that comes from integration and optimization across the full stack of hardware, firmware, hypervisor, operating system, databases, and middleware. These systems are often designed, built, tuned, and supported together -- and are easier to scale and manage.

For example, many large financial institutions have embarked on aggressive programs to use predictive analytics technology to enhance their revenues. This places greater demand on existing compute resources. Using an enterprise-grade server helps these institutions run thousands of tasks in parallel to deliver analytics services faster, as well as create a virtualized environment that improves server utilization and shares server resources across business units. Server consolidation and virtualization help reduce the number of physical servers, saving data center space and yielding savings through reduced power and cooling, hardware maintenance, software licensing, and management costs.
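As a rough illustration of that fan-out pattern, the minimal sketch below uses Python's standard concurrent.futures module to score a batch of customer records across all available cores. The score_record function and the synthetic records are invented placeholders for whatever predictive model an institution actually runs; more cores simply let the pool work through more of these tasks at once.

    # Minimal sketch: fanning out many small analytics tasks across cores.
    # score_record and the synthetic records are hypothetical placeholders.
    from concurrent.futures import ProcessPoolExecutor
    import random

    def score_record(record):
        """Toy 'predictive analytics' step: weight a few account features."""
        return 0.4 * record["balance"] + 0.6 * record["activity"]

    def main():
        # Synthetic stand-in for a batch of customer records.
        records = [{"balance": random.random(), "activity": random.random()}
                   for _ in range(10_000)]

        # Each record is scored independently, so the pool can spread the
        # work across however many processor cores the server provides.
        with ProcessPoolExecutor() as pool:
            scores = list(pool.map(score_record, records, chunksize=256))

        print(f"Scored {len(scores)} records; top score: {max(scores):.3f}")

    if __name__ == "__main__":
        main()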

To lay it out in more technical terms, there are three important computing requirements for big data workloads:

  • Advanced big data analytics require a highly scalable system with extreme parallel processing capability and dense, modular packaging. A compute system with more memory, bandwidth, and throughput can run multiple tasks simultaneously, respond to millions of events per second, and process advanced analytics algorithms in parallel in a matter of seconds.
  • Big data needs a computing system that is reliable and resilient, able to absorb temporary increases in demand without failure or changes in architecture. That resiliency limits exposure to security breaches and sustains workload performance with little or no downtime.
  • To support new big data workloads, computing systems must be built with open source technologies and support open innovation. Open source architecture allows more interoperability and flexibility and simplifies management of new workloads through advanced virtualization and cloud solutions.

Big data is an extraordinary new resource to help companies gain competitive advantage. Applying real-time analytics to big data enables companies to serve customers better, identify new revenue potential, and make lightning-quick decisions based on market insights. For companies to capitalize on the real-world business benefits of big data, they must first let go of their love for older technologies and look to newer, optimized alternatives.

Source: InfoWorld
