WEKA Data Platform Overview
From AI training to accelerating life science research, all HPC workloads benefit from the WEKA Data Platform's simplicity. Unify data access under a single directory on an ultra-low-latency, parallel NVMe file storage platform for all your data: on-premises, in the public cloud, or both.
The WEKA Data Platform's zero-copy algorithm and zero-tuning architecture deliver optimal performance regardless of I/O pattern, inode count, or file size.
Virtualize hundreds of petabytes and store both cloud and on-premises data in a global namespace, metadata included. Data can be accessed through any supported protocol: S3, SMB, NFS, POSIX, or even NVIDIA GPUDirect Storage.
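The "one namespace, many protocols" idea means the same bytes are reachable as a POSIX file path and as an S3 object key. A minimal sketch of that mapping, with a temporary directory standing in for a hypothetical WEKA mount point and an assumed bucket name (both are illustrative, not part of any real deployment):

```python
import tempfile
from pathlib import Path

# A temp directory stands in for the POSIX mount of a WEKA filesystem;
# "weka-fs" is an assumed bucket name exposing the same namespace over S3.
mount = Path(tempfile.mkdtemp(prefix="weka-demo-"))
bucket = "weka-fs"

# Write a file through the POSIX interface...
relative = Path("datasets/train/sample.bin")
target = mount / relative
target.parent.mkdir(parents=True, exist_ok=True)
target.write_bytes(b"\x00" * 1024)

# ...and derive the S3 object key under which the same data would be
# addressable: the key is simply the path relative to the filesystem root.
s3_key = relative.as_posix()
print(f"POSIX path: {target}")
print(f"S3 object:  s3://{bucket}/{s3_key}")
```

The point of the sketch is that no copy or export step sits between the two views; the object key is just the relative path.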
WEKA reinvents storage with a distributed parallel file system rebuilt from the ground up, featuring local snapshots, automated tiering, backup redundancy, and more, increasing utilization, reducing complexity, and creating efficient data pipelines.
Burst data to any major cloud provider or run on on-premises NVMe storage for compute flexibility. WEKA runs on commodity x86 hardware and is available on the major clouds, including Azure, AWS, GCP, and OCI.
WEKA supports structured and unstructured data, whether that data lives at the core, in the cloud, or at the edge. It is multi-tenant, multi-workload, multi-performant, and multi-location, all with a common management interface.
Before and After WEKA Data Platform Deployment
WEKApod: Certified for NVIDIA DGX SuperPOD™ on NVIDIA DGX H100 Systems
The WEKApod Data Platform Appliance seamlessly integrates turnkey storage hardware and award-winning storage software with NVIDIA DGX for simplicity, performance, scalability, and efficiency.
Infinite Scalability
A single 8-node WEKApod delivers 720 GB/s of sustained read and 186 GB/s of sustained write throughput, and scales linearly without limit.
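Linear scaling means aggregate throughput grows in proportion to the number of WEKApods. A quick projection using the per-pod figures above (the `cluster_throughput` helper is illustrative, not a vendor tool):

```python
READ_GBPS_PER_POD = 720   # sustained read per 8-node WEKApod, per the figures above
WRITE_GBPS_PER_POD = 186  # sustained write per 8-node WEKApod

def cluster_throughput(pods: int) -> tuple[int, int]:
    """Projected aggregate (read, write) GB/s, assuming linear scaling."""
    return pods * READ_GBPS_PER_POD, pods * WRITE_GBPS_PER_POD

# e.g. four pods (32 nodes)
read, write = cluster_throughput(4)
print(f"{read} GB/s read, {write} GB/s write")  # → 2880 GB/s read, 744 GB/s write
```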
Sustainable Efficiency
10-50x better AI/ML efficiency and a 4-7x smaller infrastructure footprint through data-copy reduction and cloud elasticity.
Integration Simplicity
Fully supported on NVIDIA Base Command Manager and orchestration tools like Run:AI for single-pane-of-glass management.
On-Prem & Cloud
Seamlessly connect on-premises AI workloads with GPU cloud environments for hybrid workflows and backup archival.
Interested?