NLP Performance
The IPU delivers impressive performance for NLP, including BERT-Large training, cutting hours from AI development cycles.
Best for Computer Vision
Graphcore delivers significant performance advantages for computer vision applications: models such as EfficientNet run efficiently on the IPU without requiring extra INT8 quantization.
Standard Warranty
All Graphcore-powered systems come with a standard 1-year warranty and support, with an extended 3-year option available.
Explore Graphcore Powered Products
BOW-2000
1.4 PetaFLOPS compute system
For Proof of Concept using IPU
Designed for AI training & inference
Systems can grow to supercomputing scale
BOW-POD16
Pre-configured 5.6 PetaFLOPS AI system
Dedicated, Powerful AI Compute
Turnkey, ready for installation
Extensive documentation and support
BOW-POD64
22.4 PetaFLOPS of AI training or inference
Simple, Powerful Built-in Networking
For production deployment workloads
Simplified data center integration
BOW POD: Graphcore 3rd Generation IPU Systems
- Utilizing newly developed Wafer-on-Wafer (WoW) 3D stacking IPUs for unprecedented performance gains
- Up to 358 PetaFLOPS of AI processing power with BOW-POD1024 (see the scaling sketch after this list)
- BOW-2000 retains a 1U blade form factor, compatible with existing Graphcore IPU-POD systems
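The quoted PetaFLOPS figures scale directly with the number of BOW-2000 blades: each blade houses four Bow IPUs and contributes 1.4 PetaFLOPS, and a BOW-PODn system contains n IPUs. A minimal sketch of that arithmetic, assuming only the per-blade figures stated in the product tiles above:

```python
# Rough scaling arithmetic behind the quoted PetaFLOPS figures.
# Assumes each BOW-2000 blade holds 4 Bow IPUs and delivers 1.4 PetaFLOPS,
# as stated in the product tiles above.
PFLOPS_PER_BLADE = 1.4
IPUS_PER_BLADE = 4

for pod_ipus in (16, 64, 1024):
    blades = pod_ipus // IPUS_PER_BLADE
    print(f"BOW-POD{pod_ipus}: {blades} blades -> "
          f"{blades * PFLOPS_PER_BLADE:.1f} PetaFLOPS")

# BOW-POD16:     4 blades ->   5.6 PetaFLOPS
# BOW-POD64:    16 blades ->  22.4 PetaFLOPS
# BOW-POD1024: 256 blades -> 358.4 PetaFLOPS
```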
Major IPU Innovation: BOW POD vs. IPU-POD Classic
- WoW IPUs deliver up to a 40% increase in AI compute and 16% better power efficiency compared to previous-generation IPUs
- Recognition model training and inference excel on BOW POD systems, which deliver higher throughput and faster training times
- Developed by TSMC and Graphcore, the BOW POD’s WoW IPUs are designed specifically for machine learning workloads
Poplar® Graph Framework Software
The Poplar SDK is a complete software stack, co-designed from scratch with the IPU, that implements our graph toolchain in an easy-to-use, flexible software development environment.
Standard Framework Support
Poplar seamlessly integrates with standard machine intelligence frameworks:
- TensorFlow 1 & 2 support with full, performant integration with the TensorFlow XLA backend
- PyTorch support for targeting the IPU using the PyTorch ATen backend (see the sketch after this list)
- PopART™ (Poplar Advanced Runtime) for training & inference; supports Python/C++ model building plus ONNX model input
- Full support for PaddlePaddle and other frameworks is coming soon
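To illustrate what the PyTorch integration looks like in practice, here is a minimal sketch of wrapping a model for IPU execution with Graphcore's PopTorch interface, which ships with the Poplar SDK. The toy classifier, batch shapes, and option values are illustrative placeholders, and the exact API surface may vary between SDK releases.

```python
# Minimal sketch (not official sample code): running a PyTorch model on the IPU
# via Graphcore's PopTorch interface from the Poplar SDK.
# The toy classifier, shapes, and hyperparameters below are placeholders.
import torch
import poptorch


class ToyClassifier(torch.nn.Module):
    """Stand-in for a real vision or NLP model."""

    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(128, 10)
        self.loss = torch.nn.CrossEntropyLoss()

    def forward(self, x, labels=None):
        logits = self.fc(x)
        if labels is None:
            return logits                          # inference path
        return logits, self.loss(logits, labels)   # training path also returns the loss


model = ToyClassifier()
opts = poptorch.Options()                          # defaults target a single IPU

# Training: PopTorch compiles the model, optimizer, and loss into one IPU program.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
training_model = poptorch.trainingModel(model, options=opts, optimizer=optimizer)

x = torch.randn(16, 128)
labels = torch.randint(0, 10, (16,))
logits, loss = training_model(x, labels)           # compiles on first call, runs on IPU

# Inference: a separate wrapper reuses the same underlying module and weights.
model.eval()
inference_model = poptorch.inferenceModel(model, options=opts)
predictions = inference_model(x).argmax(dim=-1)
```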
Build your ideal system
Need a bit of help? Contact our sales engineers directly.
Explore More Solutions from Exxact
IPU Acceleration Across Industries