BERT With Zero Overhead
Run BERT on remote machines with 1ms overhead.
In this tutorial we'll show how to deploy BERT, one of the fastest NLP models, to remote GPUs while keeping latency close to local performance.
Why BERT?
The BERT encoder is extremely fast, running in about 1.5 ms on a local GPU (tested on an NVIDIA T4). Deploying such a model to remote machines while keeping latency low is hard: network round-trips and data serialization can easily cost more than the inference itself.
Everinfer is highly optimized for this case and lets you run the model on remote machines without giving up that speed.
How to deploy BERT on Everinfer
Install Everinfer and the HuggingFace transformers library.
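Assuming the client is published on PyPI under the package name `everinfer` (the package name is an assumption), installation could look like:

```bash
# Everinfer client (package name assumed) plus the HuggingFace and PyTorch
# tooling used for the ONNX export below.
pip install everinfer transformers torch
```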
Convert the model to ONNX format:
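The export snippet from the original page isn't reproduced here, so the following is a sketch using `torch.onnx.export` with the `bert-base-uncased` checkpoint; the output file name, opset version, and dynamic-axes choices are illustrative rather than required by Everinfer:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load a standard BERT encoder; torchscript=True makes it return plain tuples,
# which keeps ONNX tracing straightforward.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", torchscript=True)
model.eval()

# Trace the model with a dummy input and export it to ONNX.
dummy = tokenizer("Hello, world!", return_tensors="pt")
torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"]),
    "bert.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state", "pooler_output"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "last_hidden_state": {0: "batch", 1: "sequence"},
    },
    opset_version=14,
)
```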
Authenticate on Everinfer using your API key, upload the model, and create an inference engine:
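The exact client calls aren't shown here, so the sketch below only illustrates the flow; the `Client` class and the `upload_model`/`create_engine` method names are assumptions, not the documented Everinfer API:

```python
from everinfer import Client  # assumed import path for the Everinfer SDK

# Authenticate with your API key (class and method names below are
# illustrative assumptions, not the documented interface).
client = Client("YOUR_API_KEY")

# Upload the exported ONNX model and create a remote inference engine.
model = client.upload_model("bert.onnx", name="bert-base-uncased")
engine = client.create_engine(model)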
You are ready to go!
Since HuggingFace tokenizers produce exactly the input format Everinfer expects, you can feed tokenizer outputs directly to the deployed model:
After applying the tokenizer to the input text, running the model is as simple as:
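Continuing the sketch above (the `engine.predict` call is again an assumption about the client API, not a documented method), the tokenize-and-run step might look like this:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Tokenize the input text; return_tensors="np" yields NumPy arrays that match
# the ONNX model's named inputs.
inputs = tokenizer("Everinfer makes remote inference fast.", return_tensors="np")
feed = {
    "input_ids": inputs["input_ids"],
    "attention_mask": inputs["attention_mask"],
}

# Run the remote model. `engine` comes from the deployment step above;
# predict() and the output layout are assumed, not taken from the docs.
outputs = engine.predict(feed)
print(outputs["last_hidden_state"].shape)  # e.g. (1, sequence_length, 768)
```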
Performance
The overhead of remote GPU access is about 1 ms on top of the ~1.5 ms local inference time, so the model runs remotely at close to local speed.
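If you want to check the overhead yourself, a simple timing loop over repeated calls is enough; this reuses the hypothetical `engine` and `feed` objects from the sketches above:

```python
import time

# Warm up once so connection setup isn't counted against per-call latency.
engine.predict(feed)

# Time a batch of sequential requests and report the mean per-call latency.
n_runs = 100
start = time.perf_counter()
for _ in range(n_runs):
    engine.predict(feed)
elapsed = time.perf_counter() - start
print(f"mean latency: {1000 * elapsed / n_runs:.2f} ms per call")
```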
You could deploy this code to AWS Lambda to go fully serverless, or use it as part of a self-hosted web app.