Apache Singa
A General Distributed Deep Learning Library
singa::Regularizer Class Reference

Apply regularization for parameters (gradient), e.g., L1 norm and L2 norm.
#include <optimizer.h>
Public Member Functions

    Regularizer (const RegularizerConf &conf)
    Regularizer (const string &type, float coefficient)
    void Setup (const RegularizerConf &conf)
    void Setup (const string &conf_str)
    void Apply (int epoch, const Tensor &value, Tensor &grad, int step=-1)
        Apply the regularizer to a single parameter object, e.g., W or b; for example, clip a gradient if it is too large w.r.t. the threshold (see https://www.reddit.com/r/MachineLearning/comments/31b6x8/gradient_clipping_rnns/; a self-contained clipping sketch follows this list).
    void Apply (int epoch, const vector< Tensor > &values, const vector< Tensor > &grads, int step=-1)
        Apply the regularizer for multiple parameter objects together.
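The single-parameter Apply above mentions clipping a gradient against a threshold. The following minimal sketch (plain C++, not Singa's actual implementation) shows the element-wise variant, clamping every gradient entry into [-threshold, threshold]:

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Element-wise gradient clipping: clamp each entry into [-threshold, threshold].
    void ClipByValue(std::vector<float>* grad, float threshold) {
      for (float& g : *grad)
        g = std::max(-threshold, std::min(threshold, g));
    }

    int main() {
      std::vector<float> grad = {0.3f, -7.5f, 12.0f, -0.1f};
      ClipByValue(&grad, 5.0f);
      for (float g : grad) std::printf("%g ", g);  // prints: 0.3 -5 5 -0.1
      std::printf("\n");
      return 0;
    }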
Detailed Description

Apply regularization for parameters (gradient), e.g., L1 norm and L2 norm.

TODO(wangwei): implement a sub-class for each type of regularizer.
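A minimal usage sketch, assuming a Singa build that provides the header shown above; the type string "l2" and the coefficient are assumptions, since the strings accepted by RegularizerConf are not listed on this page:

    #include <optimizer.h>  // install paths may differ, e.g. singa/model/optimizer.h

    using singa::Regularizer;
    using singa::Tensor;

    void RegularizeGradient(int epoch, const Tensor& weight, Tensor& weight_grad) {
      // Hypothetical type string and coefficient.
      Regularizer reg("l2", 1e-4f);
      // Folds the regularization term into the gradient in place;
      // step keeps its default value of -1.
      reg.Apply(epoch, weight, weight_grad);
    }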
Member Function Documentation

void singa::Regularizer::Apply (int epoch, const vector< Tensor > &values, const vector< Tensor > &grads, int step = -1)
Apply the regularizer for multiple parameter objects together.
See https://github.com/Lasagne/Lasagne/blob/master/lasagne/updates.py
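For intuition, an L2 regularizer adds coefficient * value to each parameter's gradient. A self-contained sketch (plain C++, independent of Singa's Tensor type) of the batched behaviour this overload suggests, applying the same rule to each (value, grad) pair:

    #include <cstdio>
    #include <vector>

    // L2 regularization over several parameter objects at once:
    // grads[i] += coefficient * values[i], element-wise.
    void ApplyL2(float coefficient,
                 const std::vector<std::vector<float>>& values,
                 std::vector<std::vector<float>>& grads) {
      for (size_t i = 0; i < values.size(); ++i)
        for (size_t j = 0; j < values[i].size(); ++j)
          grads[i][j] += coefficient * values[i][j];
    }

    int main() {
      std::vector<std::vector<float>> values = {{1.0f, -2.0f}, {0.5f}};  // e.g., W and b
      std::vector<std::vector<float>> grads  = {{0.1f,  0.1f}, {0.0f}};
      ApplyL2(0.01f, values, grads);
      std::printf("%g %g %g\n", grads[0][0], grads[0][1], grads[1][0]);  // 0.11 0.08 0.005
      return 0;
    }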