layered.optimization module

class GradientDecent[source]

Bases: object

Adjust the weights in the direction opposite to the gradient to reduce the error.

__call__(weights, gradient, learning_rate=0.1)[source]
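A minimal sketch of what this update rule computes, assuming `weights` and `gradient` are NumPy arrays of the same shape (the example values and names are illustrative, not from the library):

```python
import numpy as np

class GradientDecent:
    """Plain gradient descent: step each weight against its gradient."""

    def __call__(self, weights, gradient, learning_rate=0.1):
        # Move opposite to the gradient, scaled by the learning rate.
        return weights - learning_rate * gradient

# Hypothetical usage with small example arrays.
optimizer = GradientDecent()
weights = np.array([1.0, -2.0])
gradient = np.array([0.5, -0.5])
weights = optimizer(weights, gradient, learning_rate=0.1)
# weights is now [0.95, -1.95]
```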
class Momentum[source]

Bases: object

Smooth out changes of direction in the gradient by aggregating previous gradient values and folding them, scaled by a rate, into the current gradient.

__call__(gradient, rate=0.9)[source]
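One common formulation of this is a stateful accumulator that adds a decayed sum of past gradients to the current one; the library's exact accumulation rule may differ, so treat this as a sketch:

```python
import numpy as np

class Momentum:
    """Keep a running aggregate of past gradients (illustrative sketch)."""

    def __init__(self):
        self.previous = None  # accumulated gradient from earlier calls

    def __call__(self, gradient, rate=0.9):
        if self.previous is None:
            self.previous = np.zeros_like(gradient)
        # Blend the decayed aggregate into the current gradient.
        gradient = gradient + rate * self.previous
        self.previous = gradient
        return gradient

# Hypothetical usage: two consecutive updates with the same raw gradient.
momentum = Momentum()
step1 = momentum(np.array([1.0]), rate=0.9)  # no history yet
step2 = momentum(np.array([1.0]), rate=0.9)  # history amplifies the step
```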
class WeightDecay[source]

Bases: object

Slowly move each weight closer to zero as a form of regularization. This can help the model find simpler solutions.

__call__(weights, rate=0.0001)[source]
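Moving each weight toward zero by a small fraction amounts to scaling the weights by `1 - rate` each step. A sketch under that assumption (the library may instead return a decay term to subtract):

```python
import numpy as np

class WeightDecay:
    """Shrink every weight slightly toward zero (illustrative sketch)."""

    def __call__(self, weights, rate=0.0001):
        # Multiplicative shrinkage: each step removes a fraction `rate`.
        return (1 - rate) * weights

# Hypothetical usage with an exaggerated rate to make the effect visible.
decay = WeightDecay()
shrunk = decay(np.array([100.0, -50.0]), rate=0.01)
# shrunk is [99.0, -49.5]
```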
class WeightTying(*groups)[source]

Bases: object

Constrain groups of slices of the gradient to share the same value by averaging them. Should be applied to both the initial weights and each gradient.
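A sketch of the averaging constraint, assuming each group is a sequence of index objects (here plain `slice`s) selecting equally shaped regions of the array; how the library actually specifies slices is an assumption:

```python
import numpy as np

class WeightTying:
    """Force tied slices of an array to hold the same averaged value."""

    def __init__(self, *groups):
        # Each group is a sequence of slices that must stay identical.
        self.groups = groups

    def __call__(self, array):
        for group in self.groups:
            # Average the tied slices and write the mean back to each.
            mean = np.mean([array[index] for index in group], axis=0)
            for index in group:
                array[index] = mean
        return array

# Hypothetical usage: tie the first two elements together.
tying = WeightTying([slice(0, 1), slice(1, 2)])
tied = tying(np.array([1.0, 3.0, 5.0]))
# tied is [2.0, 2.0, 5.0]; applying it to both weights and
# gradients keeps the tied regions synchronized during training.
```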