```python
from mixtrain import Eval
```
Constructor
```python
Eval(name: str)
```
Creates a reference to an existing evaluation. This is a lazy operation - no API call is made until you access properties or call methods.
| Parameter | Type | Description |
|---|---|---|
| name | str | Evaluation name |
```python
eval = Eval("image-quality-eval")
```
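Because construction is lazy, the line above makes no network request; the first property access triggers the metadata fetch. A short sketch of the implied caching behavior (per the Properties table below, metadata is cached after that first fetch):

```python
eval = Eval("image-quality-eval")  # no API call yet - just a local reference

# First property access fetches the metadata from the API and caches it
print(eval.status)

# Subsequent reads use the cached metadata until refresh() is called
print(eval.description)
```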
Properties
| Property | Type | Description |
|---|---|---|
| name | str | Evaluation name |
| description | str | Evaluation description |
| config | dict | Evaluation configuration |
| status | str | Status: "pending", "running", "completed", "failed" |
| metadata | dict | Full metadata dictionary (cached) |
Methods
update()
Update evaluation metadata, config, or status.
```python
eval.update(
    status: str = None,
    config: dict = None,
    description: str = None,
    **kwargs
) -> None
```
| Parameter | Type | Description |
|---|---|---|
| status | str | New status |
| config | dict | Updated configuration (merged with existing) |
| description | str | New description |
```python
eval.update(
    status="completed",
    config={
        "results": {
            "flux-pro": {"quality": 0.92},
            "stable-diffusion-xl": {"quality": 0.88}
        }
    }
)
```
delete()
Delete the evaluation.
```python
eval.delete() -> None
```
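For example, to clean up an evaluation by name - a sketch that guards with Eval.exists() (documented below), since the behavior of delete() on a missing evaluation is not specified here:

```python
# Remove the evaluation only if it exists, to avoid errors on missing names
if Eval.exists("image-quality-eval"):
    Eval("image-quality-eval").delete()
```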
refresh()
Clear cached data.
```python
eval.refresh() -> None
```
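Since metadata is cached (see the Properties table), refresh() is how you pick up server-side changes. A minimal polling sketch - the sleep interval and loop structure are illustrative, not part of the API:

```python
import time

eval = Eval("image-quality-eval")

# Poll until the evaluation finishes; refresh() clears the cache so
# each status read re-fetches current metadata
while eval.status in ("pending", "running"):
    time.sleep(10)
    eval.refresh()

print(f"Finished with status: {eval.status}")
```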
Class Methods
Eval.exists()
Check if an evaluation exists.
```python
Eval.exists(name: str) -> bool
```
| Parameter | Type | Description |
|---|---|---|
| name | str | Evaluation name to check |
Returns: bool - True if the evaluation exists, False otherwise
```python
if not Eval.exists("my-eval"):
    Eval.create("my-eval", config={"dataset": "results"})
```
Eval.create()
Create a new evaluation.
```python
Eval.create(
    name: str,
    config: dict = None,
    description: str = None
) -> Eval
```
| Parameter | Type | Description |
|---|---|---|
| name | str | Evaluation name |
| config | dict | Initial configuration |
| description | str | Optional description |
Returns: Eval - a reference to the newly created evaluation
```python
eval = Eval.create(
    name="image-quality-eval",
    config={
        "models": ["flux-pro", "stable-diffusion-xl"],
        "dataset": "test-prompts",
        "metrics": ["quality", "latency"]
    },
    description="Compare image generation models"
)
```
list_evals()
List all evaluations in the workspace.
```python
from mixtrain import list_evals

evals = list_evals()
for e in evals:
    print(f"{e.name}: {e.status}")
```
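The returned objects are Eval instances, so the properties documented above are available on each. A sketch that filters on status and reads results back out of config, assuming results are stored under config["results"] as in the update() example above:

```python
from mixtrain import list_evals

# Keep only finished evaluations; status values are documented above
completed = [e for e in list_evals() if e.status == "completed"]
for e in completed:
    print(f"{e.name}: {e.config.get('results', {})}")
```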