# Model management

## Model chaining

Everinfer lets you "chain" multiple ONNX graphs into a single pipeline. \
Pass a list of models to the pipeline registration call...

```python
client.register_pipeline(
    'model_chaining_example',
    ['model_1.onnx', 'model_2.onnx', ..., 'model_N.onnx']
)
```

...to merge multiple models into a single graph. The outputs of each model are used as inputs for the next model.

{% hint style="warning" %}
The output names of each model must match the input names of the next one!
{% endhint %}
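You can catch such mismatches locally before registering a pipeline. The sketch below uses a hand-written `check_chain` helper (not part of the Everinfer client) and hypothetical model names; it assumes each stage is described by its input and output name lists, which you could read from the ONNX files themselves, e.g. with `onnx.load`:

```python
# Hypothetical helper, NOT part of the Everinfer client: a local sanity
# check that each model's outputs line up with the next model's inputs.
def check_chain(models):
    """models: list of (filename, input_names, output_names) tuples,
    given in pipeline order. Raises ValueError on a name mismatch."""
    for (prev_file, _, prev_out), (next_file, next_in, _) in zip(models, models[1:]):
        # Compared as sets here; whether ordering also matters is an
        # assumption you should verify against your deployment.
        if set(prev_out) != set(next_in):
            raise ValueError(
                f"outputs of {prev_file} ({prev_out}) do not match "
                f"inputs of {next_file} ({next_in})"
            )

# Example: a two-stage chain whose names line up correctly.
chain = [
    ("preprocess.onnx", ["raw_image"], ["pixel_values"]),
    ("classifier.onnx", ["pixel_values"], ["logits"]),
]
check_chain(chain)  # passes silently; a mismatch would raise ValueError
```

Running a check like this before `client.register_pipeline(...)` gives a clearer error message than debugging a failed remote pipeline.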

This can be used in a multitude of ways, for example:

* Fuse pre- and post-processing into a single graph with the main model. Check out our [FasterRCNN example](https://docs.everinfer.ai/getting-started/faster-rcnn-example), which showcases this approach.
* Run simple computations locally and offload demanding models to Everinfer. The [Stable Diffusion example](https://docs.everinfer.ai/examples/stable-diffusion-decouple-gpu-ops-from-code) does exactly that, offloading the U-Net model to remote GPUs.
* Deploy huge models, such as Large Language Models, by splitting them into multiple graphs.

{% hint style="info" %}
Got cool ideas or use cases for model chaining on Everinfer?

Please reach out at <hello@everinfer.ai>; we will be glad to include them as examples and give you credit!
{% endhint %}
