Data is only as valuable as the information it helps generate. With demand for data-driven insights rising, data volumes are growing at an exponential rate. This proliferation of data is creating a challenge that enterprises must tackle as part of their digital transformation: data gravity.
What Is Data Gravity?
The term “data gravity” was introduced into IT vocabulary more than a decade ago by software engineer David McCrory. It describes the idea that data and applications are attracted to each other, much as physical objects attract one another under the law of gravity. McCrory’s point was that as the mass of a body of data grows, it draws in services, applications, and customers in greater numbers and at greater speed.
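For intuition, it helps to recall the physics the analogy borrows from. Newton’s law of universal gravitation, F = G × (m₁ × m₂) / r², says attraction grows with the mass of the objects and weakens with the distance between them. Read loosely in data terms (this mapping is illustrative, not McCrory’s exact formulation), the “mass” is the size of a data set and the “distance” is the network latency and bandwidth separating it from the applications that need it: bigger data and closer proximity mean a stronger pull.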
What Are the Consequences of Data Gravity?
As enterprises continue their digital transformation journeys, their data footprints are growing at astonishing rates. In fact, it’s estimated that by 2024, Forbes Global 2000 enterprises will create data at a rate of 1.1 million gigabytes per second, requiring 15,635 exabytes of additional data storage annually. New data types and applications, like artificial intelligence (AI) and machine learning (ML), are only accelerating this rapid data growth.
All this data is certainly the key to unlocking insights, informed decision-making, and innovation. But it’s also becoming increasingly cumbersome and expensive to move around. The larger a data set grows, the harder it is to relocate, as if it were held in place by a gravitational force.
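To see why size translates into immobility, consider a back-of-the-envelope estimate of bulk transfer time. The sketch below (plain Python, not Pure Storage tooling; the 10 Gbps link and 80% usable-bandwidth figures are illustrative assumptions) shows how quickly migration windows balloon as a data set grows:

    # Rough estimate of how long a bulk data move takes as a data set grows.
    # The link speed and efficiency values are illustrative assumptions.
    SECONDS_PER_DAY = 86_400

    def transfer_days(dataset_tb, link_gbps=10.0, efficiency=0.8):
        """Days to move dataset_tb terabytes over a link_gbps link,
        assuming only `efficiency` of the raw bandwidth is usable."""
        bits_to_move = dataset_tb * 1e12 * 8              # terabytes -> bits
        usable_bits_per_sec = link_gbps * 1e9 * efficiency
        return bits_to_move / usable_bits_per_sec / SECONDS_PER_DAY

    for size_tb in (10, 100, 1_000, 10_000):              # 10 TB up to 10 PB
        print(f"{size_tb:>6} TB  ->  ~{transfer_days(size_tb):.1f} days")

At 10 TB the move fits in a few hours; at 10 PB the same link needs roughly four months, which is why large data sets tend to stay where they are.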
So, what’s the impact on the enterprise?
Enormous volumes of structured and unstructured data can create complicated operational challenges that hinder innovation, limit performance, and reduce productivity. There are also the rising costs of storing and accessing all of this accumulating data, costs that can directly strain budgets and impede business success.
Data gravity becomes an even more critical challenge as enterprises mature in their data analytics practices. Moving data between enterprise storage systems can be both costly and risky, and the complexity grows when you want to run analytics in the cloud on data stored in the enterprise, or vice versa.
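The cost side is just as easy to approximate. The sketch below uses illustrative assumptions only; real cloud egress and migration pricing varies by provider, region, and tier. It estimates what repeatedly pulling an on-premises data set into a cloud analytics environment, or pulling cloud data back on-premises, might cost at a flat per-gigabyte transfer rate:

    # Rough cost model for repeatedly moving a data set across environments.
    # The $0.05/GB transfer rate is a placeholder assumption, not a quoted price.
    COST_PER_GB = 0.05

    def annual_transfer_cost(dataset_tb, trips_per_year=12):
        """Yearly cost of copying dataset_tb terabytes trips_per_year times."""
        gb_per_trip = dataset_tb * 1_000
        return gb_per_trip * COST_PER_GB * trips_per_year

    for size_tb in (10, 100, 1_000):
        print(f"{size_tb:>5} TB, monthly copies:  ~${annual_transfer_cost(size_tb):,.0f}/year")

Even modest data sets rack up real recurring costs when a pipeline keeps shuttling them back and forth, before accounting for the operational risk of each copy.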
How Do You Overcome the Pull of Data Gravity?
Breaking free from the “pull” of data gravity means bringing data as close as possible to the applications and services that use it. That requires moving away from 20th-century infrastructure and sprawling data silos toward a single, scale-out storage platform that shrinks the time and distance between the data sets being processed.
With a secure, hybrid data-centric architecture, enterprises can support a wide range of traditional and next-generation workloads and applications. Data can be managed in one place, bringing applications and processing power to the data. Traffic can be aggregated and maintained in both public and private clouds, at the core, at the edge, and from every point of business presence, dramatically reducing the effect of data gravity.
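The principle of bringing applications and processing power to the data can be illustrated with a toy example. The sketch below is plain Python; the record counts and sizes are made-up assumptions. It contrasts shipping a whole data set to the application with pushing the filter down to where the data lives and returning only the results:

    # Toy comparison: move the data to the query vs. move the query to the data.
    # Record counts and per-record sizes are illustrative assumptions.
    RECORD_BYTES = 1_000  # assume ~1 KB per record
    records = [{"id": i, "region": "EMEA" if i % 4 == 0 else "AMER"}
               for i in range(1_000_000)]

    # Option 1: ship everything to the application, then filter there.
    bytes_shipped_all = len(records) * RECORD_BYTES

    # Option 2: run the filter where the data lives, ship only the matches.
    matches = [r for r in records if r["region"] == "EMEA"]
    bytes_shipped_filtered = len(matches) * RECORD_BYTES

    print(f"Move data to app:   {bytes_shipped_all / 1e6:,.0f} MB over the wire")
    print(f"Move query to data: {bytes_shipped_filtered / 1e6:,.0f} MB over the wire")

Keeping data in one place and sending work to it means only results travel over the network, which is exactly how a data-centric architecture blunts the effect of data gravity.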
Pure Can Help
Pure Storage® offers a modern approach to data storage that delivers the scale, performance, and flexibility today’s workloads and operations require. Here’s how Pure can help you overcome the challenges of data gravity:
Built-in data reduction: Pure Storage delivers the most granular and complete data reduction ratios in the flash storage industry.
Consolidation: Eliminate the complexities of data silos with a parallel, scalable data hub.
Data mobility: Get effortless data and application mobility across public clouds, private clouds, hybrid clouds, and on-premises infrastructure.
Consistent container management: Portworx® is the most complete Kubernetes data services platform.
Security and resilience: Safeguard data and keep your enterprise running with modern data protection and unparalleled disaster recovery and backup capabilities.
Scalability: Have the agility to grow, scale, and modernize without limits, whether on premises, in the cloud, or both.