# Basics

## Getting started

Installing the client is as simple as:

```bash
pip install everinfer
```

If your machine can run Python 3, it can run the Everinfer client.

{% hint style="info" %}
You will need a personal **API key** to use Everinfer. Reach out to us to get a demo key with some free compute power attached: <hello@everinfer.ai>

We are very responsive, so don't hesitate to write regardless of your use case :)
{% endhint %}

Define your API key and an ONNX model that you'd like to use:

```python
my_api_key = 'my_key'  # your API key, get it from us!
my_onnx_path = 'my.onnx'  # path to your .onnx model
```

## Hosting models

Import the Everinfer `Client` to manage your pipelines and create inference engines, then authenticate with your API key:

```python
from everinfer import Client 
client = Client(my_api_key)
```

The next step is to upload a model, assign it a name, and, optionally, attach metadata as a JSON-serializable dictionary:

```python
pipeline = client.register_pipeline(
  "my_model",
  [my_onnx_path],
  meta={'description': 'this is my example model'}
)
```

Then, create an inference engine:

```python
engine = client.create_engine(pipeline['uuid'])
```

That is it! Now you are ready to run inference on remote GPUs.

## Running inference

`engine` accepts a list of Python dicts in the following format:

```python
tasks = [{'input_name': input_array_or_img_path}]
```

Here `'input_name'` must match an input name defined in your ONNX graph, and the value can be a `numpy` array or a path to a `.jpg` or `.png` image.

Input types have to match the types expected by the ONNX graph. For example, ONNX files exported with a `torch.FloatTensor` input expect `np.float32` inputs.
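As a concrete sketch of the cast: images decoded by most libraries arrive as `uint8` arrays, so they need an explicit conversion before being placed in a task. The input name `'input'` and the `(1, 3, 224, 224)` shape below are illustrative placeholders; use the name and shape from your own graph:

```python
import numpy as np

# A stand-in for a decoded image: uint8 values in [0, 255].
image = np.random.randint(0, 256, size=(1, 3, 224, 224), dtype=np.uint8)

# A graph exported with a torch.FloatTensor input expects np.float32,
# so cast (and, if your model expects it, rescale) before building tasks.
# 'input' is a placeholder -- use the input name from your ONNX graph.
tasks = [{'input': image.astype(np.float32) / 255.0}]

print(tasks[0]['input'].dtype)  # float32
```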

Call the model:

```python
preds = engine.predict(tasks)
```

You now have a general understanding of the Everinfer flow and how simple it is.

{% hint style="success" %}
[See a simple example in action on Google Colab](https://colab.research.google.com/drive/1tF8US1gLb-vHj5taO5ZEQTWS5v2K14yQ?usp=sharing)
{% endhint %}

Take a look at the next example, which covers everything you need to fire up your own production-ready pipeline, including chaining ONNX graphs, optimizing pre-processing, and more.
