Eval
from mixtrain import Eval

Constructor

Eval(name: str)

Creates a reference to an existing evaluation. This is a lazy operation: no API call is made until you access a property or call a method.

| Parameter | Type | Description |
| --- | --- | --- |
| name | str | Evaluation name |
eval = Eval("image-quality-eval")
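
Because construction is lazy, the call above returns immediately; nothing hits the API until the object is used. A minimal sketch of that behavior, based on the caching described under Properties:

eval = Eval("image-quality-eval")  # no API call yet, just a local reference

print(eval.status)  # first access fetches metadata from the API and caches it
print(eval.status)  # served from the cache (see refresh() to clear it)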

Properties

| Property | Type | Description |
| --- | --- | --- |
| name | str | Evaluation name |
| description | str | Evaluation description |
| config | dict | Evaluation configuration |
| status | str | One of "pending", "running", "completed", "failed" |
| metadata | dict | Full metadata dictionary (cached) |
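
The properties compose naturally with the rest of the API. A hedged sketch that reads per-model results back out of config, assuming the results layout used in the update() example below:

eval = Eval("image-quality-eval")

if eval.status == "completed":
    # "results" is the config key used in the update() example below;
    # adjust to match however your configs are structured.
    for model, metrics in eval.config.get("results", {}).items():
        print(f"{model}: {metrics}")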

Methods

update()

Update evaluation metadata, config, or status.

eval.update(
    status: str = None,
    config: dict = None,
    description: str = None,
    **kwargs
) -> None

| Parameter | Type | Description |
| --- | --- | --- |
| status | str | New status |
| config | dict | Updated configuration (merged with existing) |
| description | str | New description |

eval.update(
    status="completed",
    config={
        "results": {
            "flux-pro": {"quality": 0.92},
            "stable-diffusion-xl": {"quality": 0.88}
        }
    }
)
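
The config argument is merged with the existing configuration; the merge depth is not specified here, so the sketch below assumes a shallow, top-level merge:

# Existing config (from the Eval.create() example below):
#   {"models": [...], "dataset": "test-prompts", "metrics": [...]}
eval.update(config={"results": {"flux-pro": {"quality": 0.92}}})

# Assuming a shallow merge, prior top-level keys are preserved
# and "results" is added alongside them.
eval.refresh()
print(eval.config["dataset"])  # "test-prompts"
print(eval.config["results"])  # {"flux-pro": {"quality": 0.92}}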

delete()

Delete the evaluation.

eval.delete() -> None
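
A guarded deletion using Eval.exists() (documented under Class Methods); whether delete() raises on a missing evaluation is not specified here, so the guard is a defensive assumption:

if Eval.exists("image-quality-eval"):
    Eval("image-quality-eval").delete()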

refresh()

Clear cached data.

eval.refresh() -> None
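
Because metadata is cached after the first access (see Properties), refresh() is how you pick up changes made elsewhere. A sketch:

eval = Eval("image-quality-eval")
print(eval.status)  # fetches and caches metadata

# ... the evaluation is updated by another process ...

eval.refresh()      # drop the cached metadata
print(eval.status)  # next access re-fetches from the API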

Class Methods

Eval.exists()

Check if an evaluation exists.

Eval.exists(name: str) -> bool

| Parameter | Type | Description |
| --- | --- | --- |
| name | str | Evaluation name to check |

Returns: bool - True if the evaluation exists, False otherwise

if not Eval.exists("my-eval"):
    Eval.create("my-eval", config={"dataset": "results"})

Eval.create()

Create a new evaluation.

Eval.create(
    name: str,
    config: dict = None,
    description: str = None
) -> Eval

| Parameter | Type | Description |
| --- | --- | --- |
| name | str | Evaluation name |
| config | dict | Initial configuration |
| description | str | Optional description |

Returns: Eval

eval = Eval.create(
    name="image-quality-eval",
    config={
        "models": ["flux-pro", "stable-diffusion-xl"],
        "dataset": "test-prompts",
        "metrics": ["quality", "latency"]
    },
    description="Compare image generation models"
)

Module Functions

list_evals()

List all evaluations in the workspace.

from mixtrain import list_evals

evals = list_evals()
for e in evals:
    print(f"{e.name}: {e.status}")
