Qumulo's ANQ achieves industry-leading benchmark results
June 17, 2024

SEATTLE — Qumulo (www.qumulo.com), which offers a simple way to manage exabyte-scale data anywhere, announced industry-leading results for its fast, cost-effective cloud-native storage solution on the SPECstorage Solution 2020 AI_IMAGE benchmark. Qumulo's Azure Native Qumulo (ANQ) achieved an overall response time (ORT) of 0.84 ms, at a total customer cost of just $400 for a five-hour burst period.

Deploying cost-effective AI training infrastructure in the public cloud requires moving data from inexpensive, scalable object storage into limited, expensive file caches. This adds complexity and can leave GPUs idle up to 40 percent of the time while data is staged from object storage into local file caches. ANQ acts as an intelligent data accelerator for the object store, issuing parallelized, pre-fetched reads served directly from Azure infrastructure primitives via the Qumulo filesystem to GPUs running AI training models. This architecture improves GPU-side performance by reducing load times between the object layer and the filesystem, changing how file-dependent AI training in the cloud should be architected.
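
As a rough illustration of the read-ahead pattern described above, the Python sketch below overlaps parallel object-store reads with consumption by a training loop. The object keys, fetch function, and prefetch depth are hypothetical placeholders; this is a generic sketch of parallelized pre-fetching, not Qumulo's implementation or Azure's API.

```python
# Generic read-ahead sketch: overlap object-store fetches with GPU consumption.
# All names here (fetch_object, OBJECT_KEYS, PREFETCH_DEPTH) are illustrative
# placeholders, not part of Qumulo's or Azure's interfaces.
from collections import deque
from concurrent.futures import ThreadPoolExecutor

PREFETCH_DEPTH = 8          # number of reads kept in flight ahead of the consumer
OBJECT_KEYS = [f"training-shard-{i:05d}" for i in range(64)]

def fetch_object(key: str) -> bytes:
    """Stand-in for an object-store GET (e.g. a blob download)."""
    return key.encode()      # placeholder payload

def prefetched_reads(keys, depth=PREFETCH_DEPTH):
    """Yield payloads in order while keeping `depth` parallel reads in flight."""
    with ThreadPoolExecutor(max_workers=depth) as pool:
        window = deque()
        keys = iter(keys)
        # Prime the read-ahead window.
        for _ in range(depth):
            key = next(keys, None)
            if key is None:
                break
            window.append(pool.submit(fetch_object, key))
        while window:
            payload = window.popleft().result()   # blocks only if read-ahead fell behind
            key = next(keys, None)
            if key is not None:
                window.append(pool.submit(fetch_object, key))
            yield payload

if __name__ == "__main__":
    for batch in prefetched_reads(OBJECT_KEYS):
        pass  # feed `batch` to the GPU training loop here
```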

The test achieved an ORT of 0.84 ms at 700 jobs, the fastest result of its kind run on Microsoft Azure infrastructure. Because ANQ uses a SaaS pay-as-you-go (PayGo) model, metering stops when performance isn't needed, bringing the benchmark's list-price cost to roughly $400.

Three aspects set ANQ apart from other cloud-native file solutions serving AI customers. First, it offers true elastic scalability: storage performance scales with the demands of the AI application stack, reducing costs when there is no demand, and unlike other cloud file systems, ANQ operates without pre-provisioned volumes. Second, Qumulo passes cloud economics savings directly to customers: the pricing model is based on actual storage used (GB) and the performance consumed (throughput and IOPS), with no pre-provisioned capacity required. Lastly, ANQ delivers linear performance scaling: its architecture ensures that performance grows linearly as workloads increase, and with an average cache-hit ratio above 95 percent, ANQ accelerates GPU-side scalability and performance, allowing most reads to bypass the load penalty between the object layer and the filesystem.
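
As a rough illustration of the pay-as-you-go model described above, the sketch below computes a bill from metered capacity and metered performance hours only. The rate constants and input figures are hypothetical placeholders, not Qumulo's published pricing, and the example is not intended to reproduce the $400 benchmark cost; it only shows that performance charges accrue solely during the hours performance is actually consumed.

```python
# Illustrative pay-as-you-go cost calculation for a usage-metered file service.
# HYPOTHETICAL_* rates are placeholders for illustration, not Qumulo's pricing.
HYPOTHETICAL_CAPACITY_RATE = 0.03    # $ per GB-month actually stored
HYPOTHETICAL_THROUGHPUT_RATE = 0.10  # $ per GB/s per metered hour

def paygo_cost(stored_gb: float, months: float,
               burst_throughput_gbps: float, burst_hours: float) -> float:
    """Cost = capacity actually stored + performance metered only during the burst."""
    capacity_cost = stored_gb * HYPOTHETICAL_CAPACITY_RATE * months
    performance_cost = burst_throughput_gbps * HYPOTHETICAL_THROUGHPUT_RATE * burst_hours
    return capacity_cost + performance_cost

# A short burst accrues performance charges only for those hours;
# outside the burst, the performance meter reads zero.
print(paygo_cost(stored_gb=10_000, months=0.01,
                 burst_throughput_gbps=20, burst_hours=5))
```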