mlflow_utils package#

Submodules#

mlflow_utils.mlflow_utils module#

class mlflow_utils.mlflow_utils.MLFlowModelServer(context=None, name: str | None = None, model_path: str | None = None, model=None, protocol=None, input_path: str | None = None, result_path: str | None = None, shard_by_endpoint: bool | None = None, **kwargs)[source]#

Bases: V2ModelServer

Model serving class for models logged by the MLflow tracker. It inherits from V2ModelServer so it can be initialized automatically by the model server and run locally as part of a Nuclio serverless function, or as part of a real-time pipeline.
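
A minimal usage sketch (an assumed workflow, not part of this module): the class is typically registered as the serving class of an MLRun serving function and exercised locally with a mock server before deployment. The function name, model key, and model URI below are placeholders.

    import mlrun

    # Hypothetical serving function; name, image, and model URI are placeholders.
    serving_fn = mlrun.new_function("mlflow-serving", kind="serving", image="mlrun/mlrun")
    serving_fn.add_model(
        "mymodel",
        model_path="store://models/my-project/my-mlflow-model",
        class_name="mlflow_utils.mlflow_utils.MLFlowModelServer",
    )

    # Test the graph locally with a mock server instead of deploying to Nuclio.
    mock_server = serving_fn.to_mock_server()
    result = mock_server.test(
        "/v2/models/mymodel/infer",
        body={"inputs": [[5.1, 3.5, 1.4, 0.2]]},  # example feature vector
    )
    print(result)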

load()[source]#

Loads a model that was logged by the MLflow tracker.
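
As a rough illustration of what such a load step can look like (assuming the model was logged with MLflow and the mlflow package is installed; this is a sketch, not the class's actual implementation):

    import mlflow.pyfunc

    class SketchServer:  # stand-in for a V2ModelServer subclass
        def load(self):
            # self.model_path is assumed to point at the logged MLflow model
            # directory or URI; load it as a generic pyfunc model.
            self.model = mlflow.pyfunc.load_model(self.model_path)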

predict(request: Dict[str, Any]) list[source]#

Infer the inputs through the model. The data to infer is read from the “inputs” key of the request.

Parameters:

request – The request to the model. The input to the model is read from the “inputs” key and passed to the loaded model’s predict method.

Returns:

The model’s prediction on the given input.
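
For reference, a sketch of the documented contract (assumed, not the module's actual code): the request carries the data under the “inputs” key, and the result is returned as a plain list.

    from typing import Any, Dict

    def predict(self, request: Dict[str, Any]) -> list:
        # Read the data to infer from the "inputs" key, as documented above.
        inputs = request["inputs"]
        outputs = self.model.predict(inputs)  # self.model was set in load()
        # Return a plain Python list so the response is JSON-serializable.
        return outputs.tolist() if hasattr(outputs, "tolist") else list(outputs)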

Module contents#