The rise of the Internet of Things (IoT) has led to an increase in the volume of data that must be managed across fleets of distributed devices.
Rather than transferring IoT data to a centralized location such as a data centre and processing it remotely, edge computing is a distributed computing topology in which information is processed locally at the “edge”: the intersection between people and devices where new data is created.
Edge computing doesn’t just save businesses money and bandwidth; it also allows them to build more efficient, real-time applications that offer their customers a superior user experience. This trend is only going to accelerate in the coming years with the rollout of new wireless technologies such as 5G.
As more and more devices are connected to the internet, the amount of data that must be processed in real time and on the edge is going to increase. So how do you provide data storage that is distributed and agile enough to meet the increasing data storage demands of edge computing? The short answer is container-native data storage.
When we look at existing edge platforms such as AWS Snowball, Microsoft Azure Stack, and Google Anthos, we see that they all support Kubernetes, a popular container orchestration platform. Kubernetes enables these environments to run workloads for data ingestion, storage, processing, analytics, and machine learning at the edge.
A multi-node Kubernetes cluster running at the edge needs an efficient, container-native storage engine that caters to the specific needs of data-centric workloads. In other words, containerized applications running at the edge require container-granular storage management. Portworx® is a data services platform that provides a container-SLA-aware stateful fabric for managing data volumes.
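As an illustrative sketch (not taken from this article), container-granular storage on Kubernetes is typically requested declaratively: a StorageClass describes how volumes should be provisioned, and each workload claims its own volume through a PersistentVolumeClaim. The provisioner name and parameters below follow Portworx's documented conventions but should be verified against the version actually installed:

```yaml
# StorageClass backed by the Portworx CSI driver.
# (Provisioner name and parameters are assumptions; check your Portworx release.)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-edge-replicated
provisioner: pxd.portworx.com
parameters:
  repl: "2"          # keep two replicas so a single node failure at the edge loses no data
  io_profile: "db"   # tune the I/O path for database-style workloads
---
# A hypothetical edge ingestion service claims its own container-granular volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sensor-data
spec:
  storageClassName: px-edge-replicated
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Because each claim is independent, storage policies such as replication level or I/O profile can be set per container rather than per node, which is what “container-granular” management refers to.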