Differential privacy
DpBudget
dataclass
DpBudget(
    eps: PositiveFloat,
    delta: Annotated[float, Field(ge=0, lt=1)],
)
Differential privacy (DP) budget.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| eps | PositiveFloat | Epsilon component of the DP budget. | required |
| delta | Annotated[float, Field(ge=0, lt=1)] | Delta component of the DP budget. | required |
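The sketch below shows one way such a budget might be constructed and validated. It is an illustrative, minimal re-creation based only on the signature above, using pydantic's PositiveFloat and Field constraints; the real class's import path is not shown on this page, and the concrete (eps, delta) values are examples, not recommendations.

```python
# Illustrative sketch only: a pydantic dataclass equivalent to the signature
# above. In practice, import the library's own DpBudget instead.
from typing import Annotated

from pydantic import Field, PositiveFloat
from pydantic.dataclasses import dataclass


@dataclass
class DpBudget:
    """Differential privacy (DP) budget."""

    eps: PositiveFloat
    delta: Annotated[float, Field(ge=0, lt=1)]


# Example budget: eps is often chosen in the single digits, and delta is
# commonly set well below 1/N for a dataset of N records (values illustrative).
budget = DpBudget(eps=8.0, delta=1e-5)
print(budget)  # DpBudget(eps=8.0, delta=1e-05)

# Constraint violations are rejected at construction time: eps=0 or delta=1.0
# raises a pydantic ValidationError.
```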
DpStep
pydantic-config
DpStep(
    noise_multiplier: PositiveFloat = 1.0,
    max_grad_norm: PositiveFloat = 1.0,
    max_batch_size: NonNegativeInt = 0,
)
Data for a differentially private step.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| noise_multiplier | PositiveFloat | The ratio of the standard deviation of the Gaussian noise to the L2-sensitivity of the gradients to which the noise is added, i.e., how much noise to add. | 1.0 |
| max_grad_norm | PositiveFloat | The maximum norm of the per-sample gradients. Any gradient with a norm higher than this is clipped to this value, thus limiting the L2-sensitivity. | 1.0 |
| max_batch_size | NonNegativeInt | Maximum size of the physical batch processed during computations. It does not change the size of the logical batch. If 0, no cap is imposed on the physical batch. Note that due to Poisson sampling, the logical batch size during differentially private training follows a binomial distribution. | 0 |
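The following minimal sketch illustrates how these three fields fit together in DP-SGD-style training. It uses an equivalent pydantic model rather than the library's actual DpStep class (whose base class and import path are not shown here); the numeric values are arbitrary examples.

```python
# Illustrative sketch only: a pydantic model matching the fields documented
# above. In practice, import the library's own DpStep config instead.
from pydantic import BaseModel, NonNegativeInt, PositiveFloat


class DpStep(BaseModel):
    """Data for a differentially private step."""

    noise_multiplier: PositiveFloat = 1.0
    max_grad_norm: PositiveFloat = 1.0
    max_batch_size: NonNegativeInt = 0


# Defaults: unit clipping norm, noise std equal to the clipping norm,
# and no cap on the physical batch size.
default_step = DpStep()

# More noise and tighter clipping, with physical batches capped at 64 samples
# to bound peak memory; the logical batch size is unaffected by the cap.
step = DpStep(noise_multiplier=1.3, max_grad_norm=0.5, max_batch_size=64)

# Because max_grad_norm bounds the L2-sensitivity of each clipped per-sample
# gradient, the standard deviation of the added Gaussian noise is
# noise_multiplier * max_grad_norm.
print(step.noise_multiplier * step.max_grad_norm)  # 0.65
```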