Validation
Validation
pydantic-config
```python
Validation(
    dataset: Dataset,
    lr_schedule: LrSchedule | None = None,
    early_stop: EarlyStop | EarlyStopMode | str | None = None,
    save_best: Path | str | None = None,
    dp: DpValid | None = None,
    tensorboard: Path | str | None = None,
    each: NonNegativeInt = 200,
    trigger: str | HookEvent = STEP,
)
```
Configuration to perform validation during model training.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `dataset` | `Dataset` | The validation data, as a `Dataset`. | *required* |
| `lr_schedule` | `LrSchedule \| None` | A `LrSchedule` configuration. If `None`, no learning rate scheduling is performed. | `None` |
| `early_stop` | `EarlyStop \| EarlyStopMode \| str \| None` | An `EarlyStop` configuration, an `EarlyStopMode`, or the name of a mode. If `None`, no early stopping is performed. | `None` |
| `save_best` | `Path \| str \| None` | A path to save the best model checkpoint. If `None`, the best model will not be saved. | `None` |
| `dp` | `DpValid \| None` | A `DpValid` configuration. If `None`, validation is performed without Differential Privacy guarantees. | `None` |
| `tensorboard` | `Path \| str \| None` | A path to a TensorBoard log directory. If `None`, TensorBoard data will not be recorded. | `None` |
| `each` | `NonNegativeInt` | The validation frequency, expressed in number of `trigger` events. | `200` |
| `trigger` | `str \| HookEvent` | The event triggering validation. Can be a `HookEvent` or its string name. | `STEP` |
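As an illustration, a `Validation` configuration might be built as follows. The import path is hypothetical (adjust it to the actual package layout), and `valid_dataset` stands for a previously built `Dataset`; class and parameter names come from the signature above.

```python
# Hypothetical import path -- replace with the actual package name.
from some_package import Validation, LrSchedule, EarlyStopMode

valid = Validation(
    dataset=valid_dataset,            # a previously built Dataset
    lr_schedule=LrSchedule(factor=0.8, patience=3),
    early_stop=EarlyStopMode.NORMAL,  # or an EarlyStop(...) instance
    save_best="checkpoints/best",
    each=200,                         # validate every 200 trigger events
)
```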
EarlyStop
pydantic-config
```python
EarlyStop(delta: float, patience: NonNegativeInt = 3)
```
Early stop configuration.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `delta` | `float` | The smallest variation considered an improvement: if the relative improvement of the monitored quantity is less than this value, the early stopping criterion is triggered. | *required* |
| `patience` | `NonNegativeInt` | The number of consecutive times the early stopping criterion must be triggered before training actually stops. | `3` |
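The criterion above can be sketched in plain Python. This is an illustrative reading of the `delta`/`patience` semantics, not the library's implementation; the class name and method are hypothetical.

```python
class EarlyStopSketch:
    """Stop when the relative improvement stays below `delta`
    for `patience` consecutive validations (illustrative sketch)."""

    def __init__(self, delta: float, patience: int = 3):
        self.delta = delta
        self.patience = patience
        self.best: float | None = None
        self.strikes = 0  # consecutive triggers of the criterion

    def should_stop(self, value: float) -> bool:
        if self.best is None:
            self.best = value  # first observation is the baseline
            return False
        improvement = (self.best - value) / abs(self.best)
        if improvement < self.delta:
            self.strikes += 1  # not a significant improvement
        else:
            self.strikes = 0   # significant improvement resets the count
        if value < self.best:
            self.best = value
        return self.strikes >= self.patience
```

For example, with `delta=0.01` and `patience=2`, two consecutive validations improving the loss by less than 1% would stop the training.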
EarlyStopMode
Enumeration class representing different early stop modes.
Attributes:
| Name | Type | Description |
|---|---|---|
| `PRECISE` | | The early stop will set in later, leaving time for a longer training and a potentially more precise output. |
| `NORMAL` | | A balanced mode. |
| `QUICK` | | The early stop will set in sooner, for shorter trainings. The training may be interrupted before the optimal number of steps. |
LrSchedule
pydantic-config
```python
LrSchedule(
    factor: Annotated[float, Field(gt=0, lt=1)] = 0.8,
    patience: NonNegativeInt = 3,
    threshold: float = 0.001,
    cooldown: NonNegativeInt = 4,
    lr_min_ratio: Annotated[float, Field(ge=0, le=1)] = 0.25,
)
```
Learning rate scheduler that reduces the learning rate by a given factor when the monitored quantity reaches a plateau.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `factor` | `Annotated[float, Field(gt=0, lt=1)]` | The factor by which the learning rate will be reduced; must be between 0 and 1. | `0.8` |
| `patience` | `NonNegativeInt` | The number of steps for which the condition must be triggered before actually reducing the learning rate. | `3` |
| `threshold` | `float` | The threshold for measuring the new optimum, to only focus on significant changes. | `0.001` |
| `cooldown` | `NonNegativeInt` | The number of steps to wait before resuming normal operation after the learning rate has been reduced. | `4` |
| `lr_min_ratio` | `Annotated[float, Field(ge=0, le=1)]` | The ratio used to compute the minimal learning rate, i.e. `lr * lr_min_ratio`. The learning rate will never go below this value. | `0.25` |
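The interplay of these parameters can be sketched as follows. This is a plausible reading of the table above (in the spirit of plateau-based schedulers), not the library's implementation; the class name and bookkeeping are hypothetical.

```python
class PlateauScheduleSketch:
    """Reduce the learning rate by `factor` after `patience` steps
    without significant improvement (illustrative sketch)."""

    def __init__(self, lr: float, factor: float = 0.8, patience: int = 3,
                 threshold: float = 0.001, cooldown: int = 4,
                 lr_min_ratio: float = 0.25):
        self.lr = lr
        self.lr_min = lr * lr_min_ratio  # floor: lr never goes below this
        self.factor = factor
        self.patience = patience
        self.threshold = threshold
        self.cooldown = cooldown
        self.best = float("inf")
        self.bad_steps = 0
        self.cooldown_left = 0

    def step(self, value: float) -> float:
        if value < self.best - self.threshold:
            self.best = value          # significant improvement
            self.bad_steps = 0
        elif self.cooldown_left > 0:
            self.cooldown_left -= 1    # ignore plateaus during cooldown
        else:
            self.bad_steps += 1
            if self.bad_steps >= self.patience:
                self.lr = max(self.lr * self.factor, self.lr_min)
                self.bad_steps = 0
                self.cooldown_left = self.cooldown
        return self.lr
```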
DpValid
pydantic-config
```python
DpValid(
    eps: PositiveFloat,
    loss_clipping: PositiveFloat | tuple[float, float] = 1.0,
)
```
Configuration to perform validation during model training with Differential Privacy (DP) guarantees on the validation set. Importantly, only the global loss has DP guarantees.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `eps` | `PositiveFloat` | The epsilon value of the DP guarantee on the validation data. | *required* |
| `loss_clipping` | `PositiveFloat \| tuple[float, float]` | The clipping value for the loss. If a tuple of two floats, it is interpreted as `(min, max)`. If a single float, it is interpreted as `(0, max)`. While `max` must be positive, `min` must be non-positive. | `1.0` |
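The `loss_clipping` normalisation described above can be sketched as follows. The helper names are hypothetical; only the `(min, max)` interpretation and the sign constraints come from the table.

```python
def normalize_loss_clipping(loss_clipping):
    """Return (min, max) clipping bounds, validating the constraints
    described above (illustrative sketch)."""
    if isinstance(loss_clipping, tuple):
        lo, hi = loss_clipping            # tuple read as (min, max)
    else:
        lo, hi = 0.0, float(loss_clipping)  # single float read as (0, max)
    if hi <= 0:
        raise ValueError("max must be positive")
    if lo > 0:
        raise ValueError("min must be non-positive")
    return lo, hi

def clip_loss(loss, loss_clipping=1.0):
    """Clip a loss value to the normalised (min, max) bounds."""
    lo, hi = normalize_loss_clipping(loss_clipping)
    return min(max(loss, lo), hi)
```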