# Monitoring overview

## How it works

To deploy an ML monitoring system, you need three components:

1. **Logging**. You must use the Evidently Python library to capture JSON `snapshots`. Each `snapshot` summarizes data or ML model quality for a specific period. You can instrument your production ML pipelines, collect data from ML services, or write logs ad hoc. The logging functionality is powered by Evidently `Reports` and `Test Suites` (which you can also use separately). You control what exactly you log.
2. **Snapshot storage**. You must store JSON `snapshots` in a directory accessible to the UI service.
3. **Monitoring dashboard**. To view metrics and test results over time, you must launch an Evidently Monitoring service. It will parse the contents of the `snapshots` and allow visualizing the logged metrics or test results in the UI. You can choose which values to visualize and how.
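Conceptually, the logging step boils down to summarizing a batch of data and writing the summary as a JSON file that a dashboard service can later parse. Here is a minimal stdlib-only sketch of that idea — the file layout and field names are illustrative, not Evidently's actual snapshot schema:

```python
import json
import statistics
from datetime import date
from pathlib import Path

def log_snapshot(values, snapshot_dir="snapshots"):
    """Summarize one period's numeric values and persist a JSON snapshot."""
    snapshot = {
        "period": date.today().isoformat(),
        "n_rows": len(values),
        "mean": statistics.fmean(values),
        "min": min(values),
        "max": max(values),
    }
    out_dir = Path(snapshot_dir)
    out_dir.mkdir(exist_ok=True)
    # One file per period; the dashboard service later reads every
    # *.json file in this directory and plots the values over time.
    (out_dir / f"{snapshot['period']}.json").write_text(json.dumps(snapshot, indent=2))
    return snapshot
```

In the real system, Evidently `Reports` and `Test Suites` produce much richer snapshots than this, but the division of labor is the same: the logging side writes self-contained summaries, and the UI side only reads them.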

## What you can monitor

You can pick from 100+ built-in metrics and tests, including:

* The number of predictions made.
* Descriptive statistics of the dataset (feature ranges, column types, etc.).
* Data quality and integrity metrics for the dataset and individual columns (nulls, range violations, new categorical values, etc.).
* Data and prediction drift checks for tabular data, text data, and embeddings (statistical tests, distance metrics).
* Model quality metrics for classification and regression (precision, recall, MAE, etc.).
* Text descriptors (length, sentiment, regular expression matches, etc.).
* Custom user-defined metrics.
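To make the drift-style checks above concrete, here is a plain-Python population stability index (PSI), one of the distance metrics commonly used for tabular drift. This is a generic sketch for illustration, not Evidently's implementation:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference and a current sample.
    Rule of thumb: < 0.1 no drift, 0.1-0.25 moderate, > 0.25 significant."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    if hi == lo:
        return 0.0  # degenerate case: every value is identical

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            # index of the equal-width bin containing x; clamp hi into the last bin
            i = min(int((x - lo) / (hi - lo) * bins), bins - 1)
            counts[i] += 1
        # clip zero fractions to avoid log(0)
        return [max(c / len(sample), 1e-4) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

An identical reference and current sample yield a PSI of 0; a shifted current sample pushes it above the common alert thresholds. In practice you would pick such a metric from the built-in library rather than hand-roll it, and reserve custom code for domain-specific checks.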

## Example scenario

Say you have a batch model that scores new data once per day. Every time you generate the predictions, you can capture a `snapshot` with the input dataset summary, data quality metrics, and prediction drift checks.

Once the labels arrive (often with a delay), you can compute the true model performance and log a snapshot with model quality metrics to update the performance dashboard retroactively.
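For the delayed-labels step, the quality snapshot only needs the predictions and ground truth joined for the scoring period. A hedged sketch of that computation in generic Python (not the Evidently API; `preds` and `labels` are assumed to be aligned row by row):

```python
def regression_quality(preds, labels):
    """Compute error metrics once delayed ground truth becomes available."""
    errors = [p - y for p, y in zip(preds, labels)]
    n = len(errors)
    mae = sum(abs(e) for e in errors) / n
    mean_error = sum(errors) / n  # signed: reveals systematic over/under-prediction
    # Log this dict as another snapshot for the same period, so the
    # dashboard can backfill model quality next to the earlier data checks.
    return {"n_labeled": n, "mae": mae, "mean_error": mean_error}
```

The key point is that the quality snapshot is logged for the *period the predictions were made*, not the period the labels arrived, so the dashboard timeline stays aligned.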

![](https://3833155839-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FQkv2FmdD03Bpw5VKJjIF%2Fuploads%2Fgit-blob-a8c62d2f586e49888a102c8adfcbfc9139620a41%2Fmonitoring_batch_workflow_min.png?alt=media)

For near real-time monitoring, you can run the **Evidently collector service** and POST data from your ML service. Evidently will compute metrics over batches of data based on the defined configuration.
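The collector pattern can be pictured as a small buffer in front of the metric computation: the ML service POSTs rows one at a time, and the collector emits a metrics snapshot whenever a configured batch size (or time window) fills up. A schematic, dependency-free version — the class and method names here are illustrative, not the Evidently collector's actual interface:

```python
class CollectorBuffer:
    """Accumulate incoming rows; emit one metrics snapshot per full batch."""

    def __init__(self, batch_size, compute_metrics):
        self.batch_size = batch_size
        self.compute_metrics = compute_metrics  # e.g. drift or quality checks
        self.rows = []
        self.snapshots = []

    def post(self, row):
        """Handle one incoming request from the ML service."""
        self.rows.append(row)
        if len(self.rows) >= self.batch_size:
            self.snapshots.append(self.compute_metrics(self.rows))
            self.rows = []  # start accumulating the next batch

# Hypothetical usage: summarize every 3 incoming predictions.
buffer = CollectorBuffer(
    batch_size=3,
    compute_metrics=lambda rows: {
        "n_rows": len(rows),
        "mean": sum(r["value"] for r in rows) / len(rows),
    },
)
for v in [1.0, 2.0, 3.0, 10.0]:
    buffer.post({"value": v})
# One full batch of 3 has been summarized; the fourth row waits in the buffer.
```

Batching this way trades a little latency for much cheaper metric computation, which is why the batch size and window are part of the collector configuration rather than hard-coded.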

![](https://3833155839-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FQkv2FmdD03Bpw5VKJjIF%2Fuploads%2Fgit-blob-a9a56351bf42412868c87d7a41e6ec74da358ecc%2Fmonitoring_collector_min.png?alt=media)

## User guide

To start, we recommend going through the [ML Monitoring Quickstart](https://docs.evidentlyai.com/get-started/tutorial-monitoring) tutorial. It shows a simple end-to-end example.

This user guide covers each component in depth:

* [Add workspace and project](https://francesco.gitbook.io/docs.evidentlyai.com/user-guide/monitoring/workspace_project)
* [Log snapshots](https://francesco.gitbook.io/docs.evidentlyai.com/user-guide/monitoring/snapshots)
* [Design dashboard](https://francesco.gitbook.io/docs.evidentlyai.com/user-guide/monitoring/design_dashboard)
* [Monitoring UI](https://francesco.gitbook.io/docs.evidentlyai.com/user-guide/monitoring/monitoring_ui)
* [Collector service](https://francesco.gitbook.io/docs.evidentlyai.com/user-guide/monitoring/collector_service)

## Cloud ML Monitoring

Don't want to self-host? Sign up for the [Evidently Cloud Waitlist](https://www.evidentlyai.com/product/cloud)!
