Optim
toydl.core.optim.Momentum
Momentum(
    parameters: Sequence[Parameter],
    lr: float = 0.01,
    momentum: float = 0.9,
)
Bases: Optimizer
Stochastic Gradient Descent Optimizer with momentum
Initialize the Momentum optimizer
Parameters:

Name | Type | Description | Default |
---|---|---|---|
parameters | Sequence[Parameter] | the parameters that will be optimized | required |
lr | float | learning rate | 0.01 |
momentum | float | momentum coefficient | 0.9 |
Source code in toydl/core/optim.py, lines 69–82
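For orientation, here is a minimal sketch of how an SGD-with-momentum optimizer of this shape is typically implemented. The `Parameter` attributes used below (`value`, `derivative`) are assumptions about toydl's API made for illustration, not details confirmed by this page:

```python
from typing import Sequence

class MomentumSketch:
    """Illustrative heavy-ball momentum optimizer; not toydl's actual code."""

    def __init__(self, parameters: Sequence, lr: float = 0.01, momentum: float = 0.9):
        self.parameters = list(parameters)
        self.lr = lr
        self.momentum = momentum
        # One velocity buffer per parameter, initialized to zero.
        self.velocities = [0.0] * len(self.parameters)

    def step(self) -> None:
        for i, p in enumerate(self.parameters):
            if p.derivative is None:
                continue  # parameter did not take part in the backward pass
            # Blend the previous velocity with the fresh gradient.
            self.velocities[i] = self.momentum * self.velocities[i] + p.derivative
            # Descend along the accumulated velocity.
            p.value -= self.lr * self.velocities[i]
```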
step
step() -> None
Run an SGD-with-momentum step to update the parameter values
Source code in toydl/core/optim.py, lines 96–110
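Assuming the classic heavy-ball (non-Nesterov) formulation, the step keeps one velocity per parameter and descends along it:

$$
v \leftarrow \mu\, v + \frac{\partial L}{\partial \theta},
\qquad
\theta \leftarrow \theta - \eta\, v,
$$

with $\mu = \texttt{momentum}$ and $\eta = \texttt{lr}$.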
zero_grad
zero_grad() -> None
Clear the grad/derivative values of the parameters
Source code in toydl/core/optim.py, lines 84–94
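A plausible body for this method, assuming each parameter exposes a mutable `derivative` attribute (an assumption about toydl, not confirmed here):

```python
def zero_grad(parameters) -> None:
    # Reset each parameter's stored gradient so the next backward pass
    # accumulates from scratch. `derivative` is an assumed Parameter field.
    for p in parameters:
        p.derivative = None
```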
toydl.core.optim.Optimizer
Optimizer(parameters: Sequence[Parameter])
The Optimizer base class
Source code in toydl/core/optim.py, lines 13–14
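Since the source spans only two lines, the base class is presumably just a constructor that stores the parameter sequence for subclasses to update; a sketch under that assumption:

```python
from typing import Sequence

class Optimizer:
    """Base class: stores the parameters that concrete optimizers update."""

    def __init__(self, parameters: Sequence["Parameter"]):
        self.parameters = parameters
```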
toydl.core.optim.SGD
SGD(parameters: Sequence[Parameter], lr: float = 1.0)
Bases: Optimizer
Stochastic Gradient Descent Optimizer
Initialize the SGD optimizer
Parameters:

Name | Type | Description | Default |
---|---|---|---|
parameters | Sequence[Parameter] | the parameters that will be optimized | required |
lr | float | learning rate | 1.0 |
Source code in toydl/core/optim.py, lines 31–39
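To show how `zero_grad` and `step` are meant to be called together, here is a hedged training-loop sketch; `net`, `loss_fn`, and `dataset` are hypothetical user-side placeholders standing in for a model, a loss, and a data iterator, not toydl APIs:

```python
from toydl.core.optim import SGD

# Hypothetical setup: `net`, `loss_fn`, and `dataset` are placeholders
# for illustration only.
optimizer = SGD(net.parameters(), lr=1.0)

for x, y in dataset:
    optimizer.zero_grad()      # 1. clear derivatives left over from the last step
    loss = loss_fn(net(x), y)  # 2. forward pass
    loss.backward()            # 3. backward pass fills in parameter derivatives
    optimizer.step()           # 4. parameter update: value -= lr * derivative
```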
step
step() -> None
Run an SGD step to update the parameter values
Source code in toydl/core/optim.py, lines 51–60
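In the plain variant, the step presumably applies the textbook gradient-descent rule (an assumption from the class name; the source itself is not shown on this page):

$$
\theta \leftarrow \theta - \eta\, \frac{\partial L}{\partial \theta},
\qquad \eta = \texttt{lr}.
$$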
zero_grad
zero_grad() -> None
Clear the grad/derivative values of the parameters
Source code in toydl/core/optim.py, lines 41–49