
Poplar® Software Stack

The world's first graph toolchain specifically designed for machine intelligence, built hand in hand with the Graphcore Intelligence Processing Unit (IPU).

Introducing Graphcore® Poplar® SDK Software

The Poplar SDK is a comprehensive software stack, developed alongside the IPU, that enables innovators to access the processor directly and benefit from its architecture. Poplar makes managing IPUs at scale as simple as programming a single device, letting the user focus on the data and the results.

A state-of-the-art compiler simplifies IPU programming by handling all the scheduling and work partitioning of large models, including memory control. The Graph Engine then builds the runtime that executes workloads efficiently across all available IPU processors, blades and Pods.

As well as running large models across sizeable IPU-based systems, workloads can be shared dynamically via the Virtual-IPU software: thousands of Bow-2000 machines in a system can work together on large model training while the remaining machines are simultaneously allocated to inference and production deployment.

Poplar Framework Diagram

Multi-IPU Scaling & Communication

Poplar takes on the heavy lifting, so you don't have to, in a world of growing model sizes and complexity:

  • High bandwidth IPU-Link™ communication, fully automated and managed by Poplar, treats multiple IPUs like a single IPU compute resource
  • Graph Compile Domain (GCD) allows a single application to be programmed against multiple IPU processors, enabling both data parallel and model parallel execution
  • Model sharding allows the simple splitting of applications across multiple devices
  • Combining sharding with replication lets you make code data-parallel with minimal effort
  • Advanced model pipelining lets users extract maximum system performance to run large models fast and efficiently
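The bullets above combine two ideas: sharding splits one model's layers across devices, while replication runs copies of the sharded model on different data. A framework-agnostic sketch of the concept, using plain Python stand-ins for devices (this is not the Poplar API):

```python
# Illustrative sketch of model sharding plus replication.
# Simulated "devices" only -- no Poplar or IPU calls involved.

def make_layers():
    # A toy "model": each layer just adds a constant to its input.
    return [lambda x, c=c: x + c for c in (1, 2, 3, 4)]

def shard(layers, num_devices):
    # Model sharding: split the layer list into contiguous per-device shards.
    per = len(layers) // num_devices
    return [layers[i * per:(i + 1) * per] for i in range(num_devices)]

def run_sharded(shards, x):
    # Activations flow shard to shard, as if travelling over IPU-Links.
    for device_layers in shards:
        for layer in device_layers:
            x = layer(x)
    return x

# Replication: copies of the sharded model each process a different
# slice of the batch (data parallelism on top of model parallelism).
shards = shard(make_layers(), num_devices=2)
batch = [10, 20, 30]
results = [run_sharded(shards, x) for x in batch]
print(results)  # each input passes through all four layers: +1+2+3+4
```

Pipelining extends this further by letting the two shards work on different micro-batches at the same time instead of idling while the other runs.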
Poplar Scaling Diagram

Framework Support

Poplar seamlessly integrates with standard machine intelligence frameworks:

  • TensorFlow 1 & 2 support with fully performant integration via the TensorFlow XLA backend
  • PyTorch support for targeting the IPU using the PyTorch ATen backend
  • PopART™ (Poplar Advanced Runtime) for training & inference; supports Python/C++ model building plus ONNX model input
  • Full support for PaddlePaddle, with other frameworks coming soon

PopLibs™ Graph Libraries

PopLibs is a complete set of libraries, available as open source code, that support common machine learning primitives and building blocks:

  • Over 50 optimised functions for common machine learning models
  • More than 750 high performance compute elements
  • Simple C++ graph building API
  • Flexible enough to implement any application
  • Full control flow support
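PopLibs' actual graph-building API is C++; as a language-neutral sketch of the underlying idea, here is a toy graph builder in Python. All names here are hypothetical illustrations, not PopLibs functions:

```python
# Toy compute graph illustrating the build-then-run model that a graph
# library exposes. None of these names come from PopLibs.

class Graph:
    """Add tensors (variables) and ops at build time, execute later."""
    def __init__(self):
        self.values = {}
        self.program = []

    def add_variable(self, name, value):
        self.values[name] = value
        return name

    def add_op(self, fn, inputs, output):
        # Ops are recorded at graph-construction time and executed later,
        # mirroring the compile-once, run-many philosophy.
        self.program.append((fn, inputs, output))
        return output

    def run(self):
        for fn, inputs, output in self.program:
            self.values[output] = fn(*(self.values[i] for i in inputs))
        return self.values

g = Graph()
g.add_variable("a", [1.0, 2.0])
g.add_variable("b", [3.0, 4.0])
g.add_op(lambda x, y: [p + q for p, q in zip(x, y)], ["a", "b"], "sum")
g.add_op(lambda s: [2.0 * v for v in s], ["sum"], "scaled")
out = g.run()
print(out["scaled"])  # [8.0, 12.0]
```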

Graph Compiler

Graphcore's state-of-the-art compiler simplifies IPU programming by handling the scheduling and work partitioning of large parallel programs, including memory control:

  • Optimized execution of the entire application model so it runs efficiently on IPU platforms
  • Relieves developers of the burden of managing data and model parallelism
  • Code generation using standard LLVM
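The work partitioning the compiler automates can be pictured as balancing op costs across tiles. A toy greedy load balancer (illustrative only; the real compiler's strategy is far more sophisticated):

```python
import heapq

def partition(op_costs, num_tiles):
    """Greedy longest-processing-time partitioning: assign each op
    (heaviest first) to the currently least-loaded tile."""
    heap = [(0, tile, []) for tile in range(num_tiles)]
    heapq.heapify(heap)
    for cost in sorted(op_costs, reverse=True):
        load, tile, ops = heapq.heappop(heap)
        ops.append(cost)
        heapq.heappush(heap, (load + cost, tile, ops))
    return sorted(heap)

tiles = partition([7, 5, 4, 3, 3, 2], num_tiles=2)
print([load for load, _, _ in tiles])  # balanced loads: [12, 12]
```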

Graph Engine

A high-performance graph runtime that executes models and streams data through them on the IPU:

  • Highly optimized IPU data movement
  • Interfaces to host memory system
  • Device management: configuring the IPU-Link network, loading applications to devices & performing setup
  • Debug and profiling capabilities
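The engine's role can be pictured as: compile the model once, then repeatedly stream input batches in and collect outputs. A plain-Python sketch of that execution shape (no Poplar calls):

```python
# Sketch of the compile-once, stream-data execution model.
# "compile_model" and "stream" are illustrative names, not Poplar APIs.

def compile_model(layers):
    # "Compile" once: freeze the program so the run loop only moves data.
    def engine(batch):
        for layer in layers:
            batch = [layer(x) for x in batch]
        return batch
    return engine

def stream(engine, batches):
    # The host streams input batches in and collects results, batch by batch.
    for batch in batches:
        yield engine(batch)

engine = compile_model([lambda x: x * 2, lambda x: x + 1])
outputs = list(stream(engine, [[1, 2], [3, 4]]))
print(outputs)  # [[3, 5], [7, 9]]
```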

Additional Resources

Why Graphcore in the Cloud?

Get our eBook to learn about the benefits of using Graphcore Intelligence Processing Units in the cloud.

Get Our eBook

Poplar Analyst Report

Detailed technical white paper on the Poplar software stack from analyst Moor Insights & Strategy.

Read Analyst Report

Access Graphcloud

Sign up to access Graphcloud and experience scale-out performance of up to 1,024 Graphcore Bow IPUs as a secure monthly cloud service.

Request Access