# All Presets

Evidently has several pre-built reports and test suites. We call them **Presets**. Each preset evaluates or tests a particular aspect of the data or model quality.

This page links to the **description** of each preset. To see the code and interactive examples, head to [example notebooks](https://francesco.gitbook.io/docs.evidentlyai.com/examples) instead.

## Metric Presets

Metric presets are **pre-built reports** that help with visual exploration, debugging, and documentation of data and model performance. You can also use them to calculate and log metrics as JSON or as a Python dictionary.

|                                                                                                                                                                                                               |                                                                                                                                                                                                    |                                                                                                                                                                                                                              |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| <p><a href="data-quality"><strong>Data Quality</strong></a><br><br>Shows the dataset statistics and feature behavior.<br><br><strong>Requirements</strong>: model inputs.</p>                                 | <p><a href="data-drift"><strong>Data Drift</strong></a><br><br>Explores the distribution shift in the model features.<br><br><strong>Requirements</strong>: model inputs, a reference dataset.</p> | <p><a href="target-drift"><strong>Target Drift</strong></a><br><br>Explores the distribution shift in the model predictions.<br><br><strong>Requirements:</strong> model predictions and/or target, a reference dataset.</p> |
| <p><a href="class-performance"><strong>Classification</strong></a><br><br>Evaluates the classification model quality and errors.<br><br><strong>Requirements</strong>: model predictions and true labels.</p> | <p><a href="reg-performance"><strong>Regression</strong></a><br><br>Evaluates the regression model quality and errors.<br><br><strong>Requirements</strong>: model predictions and actuals.</p>    | <p><a href="text-overview"><strong>Text Overview</strong></a><br><br>Evaluates text data drift and descriptive statistics.<br><br><strong>Requirements</strong>: model inputs (raw text data).</p>                            |

## Test Presets

Test presets are **pre-built test suites** that perform structured data and model checks as part of a pipeline.

|                                                                                                                                                                                                                                                                                 |                                                                                                                                                                                                                                                                            |                                                                                                                                                                                                                                                                                           |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| <p><a href="no-target-performance"><strong>NoTargetPerformance</strong></a><br><br>Tests the model performance without ground truth or actuals.<br><br><strong>Requirements</strong>: model inputs, predictions, a reference dataset.</p>                                       | <p><a href="../data-quality#data-quality-test-suite"><strong>Data Quality</strong></a><br><br>Tests if the data quality is suitable for (re)training. Checks nulls, duplicates, etc.<br><br><strong>Requirements</strong>: model inputs.</p>                               | <p><a href="../data-quality#data-stability-test-suite"><strong>Data Stability</strong></a><br><br>Tests if a data batch is similar to reference. Checks schema, data ranges, etc.<br><br><strong>Requirements</strong>: inputs, a reference dataset.</p>                                  |
| <p><a href="../data-drift#data-drift-test-suite"><strong>Data Drift</strong></a><br><br>Tests for distribution drift per column and overall dataset drift.<br><br><strong>Requirements</strong>: model inputs, a reference dataset.</p>                                         | <p><a href="../reg-performance#regression-performance-test-suite"><strong>Regression</strong></a><br><br>Tests the performance of the regression model against expectation.<br><br><strong>Requirements</strong>: model predictions and actuals.</p>                       | <p><a href="../class-performance#classification-performance-test-suite"><strong>Multi-class Classification</strong></a><br><br>Tests the performance of a multi-class classification model against expectation.<br><br><strong>Requirements</strong>: model predictions, true labels.</p> |
| <p><a href="../class-performance#classification-performance-test-suite"><strong>Binary Classification</strong></a><br><br>Tests the performance of a binary classification model against expectation.<br><br><strong>Requirements</strong>: model predictions, true labels.</p> | <p><a href="../class-performance#classification-performance-test-suite"><strong>Binary Classification top-K</strong></a><br><br>Tests the performance of a binary classification model at top-K.<br><br><strong>Requirements</strong>: model predictions, true labels.</p> |                                                                                                                                                                                                                                                                                           |

## Individual Metrics and Tests

You can also create custom test suites and reports from individual metrics and tests. You can explore 100+ [available tests](https://francesco.gitbook.io/docs.evidentlyai.com/reference/all-tests) and [metrics](https://francesco.gitbook.io/docs.evidentlyai.com/reference/all-metrics).
