onnx_utils package#

Submodules#

onnx_utils.onnx_utils module#

onnx_utils.onnx_utils.optimize(context: MLClientCtx, model_path: str, handler_init_kwargs: dict | None = None, optimizations: List[str] | None = None, fixed_point: bool = False, optimized_model_name: str | None = None)[source]#

Optimize the given ONNX model.

Parameters:
  • context – The MLRun function execution context.

  • model_path – Path to the ONNX model object.

  • handler_init_kwargs – Keyword arguments to pass to the ONNXModelHandler init method for loading the model.

  • optimizations – List of optimizations to apply. To see which optimizations are available, pass “help”. If None, all available optimizations are applied. Defaults to None.

  • fixed_point – Whether to optimize the weights using fixed-point arithmetic. Defaults to False.

  • optimized_model_name – The name to give the optimized model. If None, the original model is overwritten. Defaults to None.
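The `optimizations` argument has three documented modes: a list of pass names runs only that subset, “help” lists the available passes, and None applies them all. A minimal sketch of that selection logic is below; note that `resolve_optimizations` is an illustrative helper (not part of onnx_utils), and the pass names are example onnxoptimizer pass names, not an exhaustive list.

```python
# Example pass names in the style of onnxoptimizer; illustrative only.
AVAILABLE_PASSES = [
    "eliminate_identity",
    "eliminate_nop_transpose",
    "fuse_bn_into_conv",
]


def resolve_optimizations(optimizations):
    """Mirror the documented semantics of the `optimizations` parameter.

    Hypothetical helper: None -> all passes, "help" -> list passes,
    otherwise run only the requested subset.
    """
    if optimizations is None:
        # None means apply every available optimization.
        return list(AVAILABLE_PASSES)
    if optimizations == "help":
        # "help" prints the available optimizations instead of running any.
        print("\n".join(AVAILABLE_PASSES))
        return []
    # Otherwise, keep only the requested, known passes.
    return [p for p in optimizations if p in AVAILABLE_PASSES]


print(resolve_optimizations(["fuse_bn_into_conv"]))  # a subset
print(resolve_optimizations(None))                   # all passes
```

Passing an explicit subset is useful when a pass (for example, an aggressive fusion) changes numerics you want to preserve.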

onnx_utils.onnx_utils.to_onnx(context: MLClientCtx, model_path: str, load_model_kwargs: dict | None = None, onnx_model_name: str | None = None, optimize_model: bool = True, framework_kwargs: Dict[str, Any] | None = None)[source]#

Convert the given model to an ONNX model.

Parameters:
  • context – The MLRun function execution context.

  • model_path – Path to the model’s store object.

  • load_model_kwargs – Keyword arguments to pass to the AutoMLRun.load_model method.

  • onnx_model_name – The name to use when logging the converted ONNX model. If None, the original model name is used with an additional _onnx suffix. Defaults to None.

  • optimize_model – Whether to optimize the ONNX model using ‘onnxoptimizer’ before saving it. Defaults to True.

  • framework_kwargs – Additional arguments a specific framework may require for conversion to ONNX. To get the docstring of the desired framework’s ONNX conversion function, pass “help”.
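The onnx_model_name default described above can be sketched as a small helper; `default_onnx_name` is an illustrative function, not part of the onnx_utils API, showing only the documented naming rule.

```python
def default_onnx_name(model_name, onnx_model_name=None):
    """Illustrative sketch of the documented naming default:
    if no ONNX model name is given, append an `_onnx` suffix
    to the original model name.
    """
    if onnx_model_name is not None:
        return onnx_model_name
    return f"{model_name}_onnx"


print(default_onnx_name("resnet50"))              # falls back to the suffix
print(default_onnx_name("resnet50", "my_model"))  # explicit name wins
```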

Module contents#