Graphcore Cloud Services for Natural Language Processing

Graphcore Colossus MK2 Performance Benchmarks

Graphcore's Poplar SDK accelerates machine learning training and inference with high-performance optimizations, delivering world-leading performance on IPUs across domains such as natural language processing, probabilistic modeling, computer vision and more.

We have provided a selection of the latest MK2 IPU performance benchmark charts from Graphcore below. You can reproduce all of these benchmarks using code in the examples repo on the Graphcore GitHub page.

Benchmark Results Table

Graphcore also provides its detailed MK2 training and inference performance data in table format.

VIEW RESULTS

Natural Language Processing

BERT Large (Bidirectional Encoder Representations from Transformers) is one of the most well known NLP models in use today. The IPU accelerates both training and inference on BERT-Large, delivering faster time to train with significantly higher throughput at extremely low latency for inference.

BERT-Large: Inference

Get the code

BERT-Large: Training

Get the code

Computer Vision

IPUs excel with models designed to leverage small group convolutions, thanks to their fine-grained architecture and unique Poplar features. Graphcloud can deliver unparalleled performance for both training and inference with newer computer vision models like EfficientNet and ResNeXt, which deliver higher accuracy and improved efficiency, as well as with traditional computer vision models such as ResNet-50.

EfficientNet-B0: Inference

TensorFlow Code     PyTorch Code

EfficientNet-B4: Training

TensorFlow Code     PyTorch Code

ResNeXt-101: Inference

Get the code

ResNeXt-101: Training

Get the code

ResNet-50: Inference

TensorFlow Code     PyTorch Code

ResNet-50: Training

TensorFlow Code     PyTorch Code

Probabilistic Modeling

Probabilistic models using the Markov Chain Monte Carlo (MCMC) method use iterative sampling of an implicit distribution with Hamiltonian Monte Carlo (HMC) schemes to manage noise and uncertainty in data. Graphcloud, using Graphcore IPU, delivers faster time to result for MCMC using standard TensorFlow Probability.

MCMC Probabilistic Model: Training

Get the code
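To illustrate the HMC scheme described above, here is a minimal NumPy sketch of Hamiltonian Monte Carlo sampling from a standard normal target. This is a didactic toy, not the TensorFlow Probability benchmark code; the step size, leapfrog count, and target density are arbitrary choices for illustration.

```python
# Toy Hamiltonian Monte Carlo sampler (didactic sketch, not benchmark code).
import numpy as np

def hmc_sample(log_prob, log_prob_grad, init, n_samples=1000,
               step_size=0.1, n_leapfrog=20, seed=0):
    """Sample a 1-D target density with HMC: leapfrog dynamics + accept/reject."""
    rng = np.random.default_rng(seed)
    q = init
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal()                  # resample momentum
        q_new, p_new = q, p
        # Leapfrog integration of the Hamiltonian dynamics
        p_new += 0.5 * step_size * log_prob_grad(q_new)
        for _ in range(n_leapfrog - 1):
            q_new += step_size * p_new
            p_new += step_size * log_prob_grad(q_new)
        q_new += step_size * p_new
        p_new += 0.5 * step_size * log_prob_grad(q_new)
        # Metropolis correction using the Hamiltonian (energy)
        h_old = -log_prob(q) + 0.5 * p * p
        h_new = -log_prob(q_new) + 0.5 * p_new * p_new
        if rng.random() < np.exp(h_old - h_new):
            q = q_new
        samples.append(q)
    return np.array(samples)

# Target: standard normal, log p(q) = -q^2 / 2 (up to a constant)
samples = hmc_sample(lambda q: -0.5 * q * q, lambda q: -q, init=0.0)
```

The sampled chain's mean and standard deviation should approach 0 and 1, the moments of the target distribution.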

Time Series Analysis

The IPU is well suited to time series analysis applications. Here, an LSTM inference model shows lower latency and considerably higher throughput than the latest GPU.

LSTM: Inference

Get the code
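For context, the sequential recurrence an LSTM inference model evaluates at each timestep can be sketched in a few lines of NumPy. The weights below are random placeholders and the layer sizes are arbitrary; this is an illustration of the gate arithmetic, not a trained time-series model.

```python
# Minimal NumPy LSTM cell run over a sequence (illustrative only).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(x_seq, W, U, b, hidden):
    """One LSTM layer over a sequence. W: (4h, d), U: (4h, h), b: (4h,)."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    outputs = []
    for x in x_seq:                       # sequential dependency across time
        z = W @ x + U @ h + b
        i, f, g, o = np.split(z, 4)       # input, forget, candidate, output
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        g = np.tanh(g)
        c = f * c + i * g                 # cell state update
        h = o * np.tanh(c)                # hidden state / output
        outputs.append(h)
    return np.stack(outputs)

rng = np.random.default_rng(0)
d, hidden, T = 8, 16, 50                  # toy sizes, chosen arbitrarily
W = rng.normal(0, 0.1, (4 * hidden, d))
U = rng.normal(0, 0.1, (4 * hidden, hidden))
b = np.zeros(4 * hidden)
out = lstm_forward(rng.normal(size=(T, d)), W, U, b, hidden)
```

Because each step depends on the previous hidden state, latency at small batch sizes is dominated by this serial chain, which is where the benchmark's low-latency results matter.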

Speech Processing

Deep Voice from Baidu is a prominent text-to-speech (TTS) model family for high-quality, end-to-end speech synthesis. The IPU’s capacity to rapidly accelerate fully convolutional TTS models like Deep Voice 3 with a notably higher throughput than a GPU opens up the opportunity to create entirely new classes of TTS models.

Deep Voice 3: Training

Get the code
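The parallelism that makes fully convolutional TTS models attractive to accelerate can be seen in a toy causal 1-D convolution: unlike a recurrent model, every output timestep depends only on a fixed window of inputs, so all timesteps can be computed independently. This NumPy sketch shows the idea and is not Deep Voice 3 itself.

```python
# Toy causal 1-D convolution: no sequential dependency between timesteps.
import numpy as np

def causal_conv1d(x, kernel):
    """y[t] = sum_k kernel[k] * x[t - k], with zero padding on the left."""
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), x])
    # Each output position uses its own input window, so every timestep
    # can be computed in parallel (vectorized here as one matrix product).
    windows = np.lib.stride_tricks.sliding_window_view(padded, k)
    return windows @ kernel[::-1]

x = np.arange(5.0)                            # input signal [0, 1, 2, 3, 4]
y = causal_conv1d(x, np.array([1.0, 1.0]))    # kernel sums current + previous
```

Here each output is `x[t] + x[t-1]`, so `y` is `[0, 1, 3, 5, 7]`; the whole sequence is produced in one vectorized operation rather than a timestep-by-timestep loop.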

Download Performance Benchmarks

The above performance benchmark information is available as a compact PDF from Graphcore for offline viewing. Just click the button below for access.

DOWNLOAD

Access Graphcloud

Sign up to access Graphcloud and experience the scale-out performance of up to 64 Graphcore Colossus MK2 IPUs as a secure monthly cloud service.

SIGN UP