Evidently Iris Demo#

In this notebook, we’ll import the hub’s Evidently demo app, which monitors data quality and drift on scikit-learn’s Iris dataset. We’ll run it using the evaluate() method, passing a slightly modified dataset as the monitored data.

The Evidently Iris module demonstrates a simple example of integrating MLRun with Evidently for data monitoring, which you can adapt to fit your own project needs or use as a reference implementation.

Set up an MLRun project and prepare the data#

import pandas as pd
from sklearn.datasets import load_iris

import mlrun
from mlrun.feature_store.api import norm_column_name

project = mlrun.get_or_create_project("evidently-demo", "./evidently-demo")

# Load the Iris dataset and normalize its column names
iris = load_iris()
columns = [norm_column_name(col) for col in iris.feature_names]
current_df = pd.DataFrame(iris.data, columns=columns)
current_df["sepal_length_cm"] += 0.3  # simulate drift by shifting one feature

Get the module from the hub and edit its defaults#

hub_mod = mlrun.get_hub_module("hub://evidently_iris", download_files=True)
src_file_path = hub_mod.get_module_file_path()
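
If you want to see what the original application class looks like, you can peek at the beginning of the downloaded source (optional):

# Print the beginning of the downloaded module to inspect the original app class
with open(src_file_path) as f:
    print(f.read(800))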

We need to modify the class defaults to include the Evidently workspace path and project ID parameters. This can be done in one of two ways: by editing the downloaded source file directly and then evaluating with the standard class, or, as we’ll do here, by appending an inheriting class to the same file and evaluating with that new class.

(Note: this is only needed when running the app with evaluate(). When setting it up as a real-time function, we can simply pass the parameters, as sketched below.)
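
For reference, here is a minimal sketch of that real-time route, assuming MLRun’s set_model_monitoring_function() forwards extra keyword arguments to the application class; the names and parameter values are illustrative, and we don’t run this in the notebook:

# Sketch only: register the app as a real-time model monitoring function.
# Assumes extra kwargs are forwarded to the application class constructor.
monitoring_app = project.set_model_monitoring_function(
    func=src_file_path,
    application_class="EvidentlyIrisMonitoringApp",
    name="evidently-iris-app",
    image="mlrun/mlrun",
    evidently_workspace_path="./evidently_workspace",
    evidently_project_id="<your-evidently-project-id>",
)
project.deploy_function(monitoring_app)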

from pathlib import Path
import uuid

ws = Path("./evidently_workspace")
ws.mkdir(parents=True, exist_ok=True)  # will create if missing
evidently_project_id = str(uuid.uuid4())

wrapper_code = f"""
class EvidentlyIrisMonitoringAppWithWorkspaceSet(EvidentlyIrisMonitoringApp):
    def __init__(self) -> None:
        super().__init__(evidently_workspace_path="{ws}", evidently_project_id="{evidently_project_id}")
        """

with open(src_file_path, "a") as f:
    f.write(wrapper_code)
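
To confirm the subclass was appended, we can print the tail of the file (optional):

# Verify the wrapper class was appended to the module source
print(Path(src_file_path).read_text()[-300:])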

Now we can import the file as a Python module using the module() method:

app_module = hub_mod.module()
evidently_app = app_module.EvidentlyIrisMonitoringAppWithWorkspaceSet
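
As a quick check (optional, for illustration only), the wrapper should be a subclass of the original hub application class:

# The wrapper class inherits from the original hub app class
assert issubclass(evidently_app, app_module.EvidentlyIrisMonitoringApp)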

Run the app#

We are ready to call evaluate(). Notice that the run is linked to the active project we created at the beginning of the notebook:

# Evaluate directly on the sample data
run_result = evidently_app.evaluate(
    func_path=src_file_path,
    sample_data=current_df,
    run_local=True,
)
> 2025-11-17 09:14:43,241 [info] Changing function name - adding `"-batch"` suffix: {"func_name":"evidentlyirismonitoringappwithworkspaceset-batch"}
> 2025-11-17 09:14:43,580 [info] Storing function: {"db":"http://mlrun-api:8080","name":"evidentlyirismonitoringappwithworkspaceset-batch--handler","uid":"9ecf72a1bd82498c92d5897809b6a438"}
> 2025-11-17 09:14:43,856 [info] downloading v3io:///projects/evidently-demo/artifacts/evidentlyirismonitoringappwithworkspaceset-batch_sample_data.parquet to local temp file
> 2025-11-17 09:14:43,890 [info] Running evidently app
> 2025-11-17 09:14:46,214 [info] Logged evidently object
| field | value |
|---|---|
| project | evidently-demo |
| uid | 9ecf72a1bd82498c92d5897809b6a438 |
| iter | 0 |
| start | Nov 17 09:14:43 |
| end | NaT |
| state | completed |
| kind | run |
| name | evidentlyirismonitoringappwithworkspaceset-batch--handler |
| labels | v3io_user=iguazio, kind=local, owner=iguazio, host=jupyter-97c64f97b-8qtcv |
| inputs | sample_data |
| parameters | write_output=False, existing_data_handling=fail_on_overlap, stream_profile=None |
| results | return={result_name: 'data_drift_test', result_value: 0.5, result_kind: 0, result_status: 1, result_extra_data: '{}'} |
| artifact_uris | evidently_report=store://artifacts/evidently-demo/evidentlyirismonitoringappwithworkspaceset-batch--handler_evidently_report#0@9ecf72a1bd82498c92d5897809b6a438^2f82c069b396f23b4daae81540ffa386b44f165c |

> to track results use the .show() or .logs() methods or click here to open in UI
> 2025-11-17 09:14:46,354 [info] Run execution finished: {"name":"evidentlyirismonitoringappwithworkspaceset-batch--handler","status":"completed"}

Examine the results#

Notice that the 0.5 value in the demo run result is not derived from Evidently’s drift metrics, but is a constant placeholder added for demonstration only.
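
You can also read the app’s logged results programmatically from the run object; per the run summary above, the placeholder result appears under the return key:

# Inspect the app's logged results on the run object
print(run_result.status.results)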

Let’s take a look at the artifact the app generated for us:

artifact_key = f"{run_result.metadata.name}_evidently_report"
artifact = project.get_artifact(artifact_key)
artifact.to_dataitem().show()
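
If you’d rather open the report outside the notebook, the artifact’s DataItem can also be downloaded to a local file; a minimal sketch using DataItem.local():

# Download the Evidently report to a local file and print its path
local_path = artifact.to_dataitem().local()
print(local_path)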