Deep Neural Networks: Algorithms
Layers
Forward:

Contents:

  Convolutions
    Convolutions
    Pooling
  Regions
    Selective Search
    Region Proposal Network
  Fully Connected
    Inner Product
  Activations
    Identity
    Step
    Piecewise Linear
    Sigmoid
    Complementary Log Log
    Bipolar
    Bipolar Sigmoid
    TanH
    LeCun’s TanH
    Hard TanH
    Absolute
    Rectifier
    Modifications of ReLU
    Smooth Rectifier
    Logit
    Probit
    Cosine
    Softmax
    Maxout
    (RBF) Gaussian
    (RBF) Multiquadratic
    (RBF) Inverse Multiquadratic
    References
  Normalizations
  Regularizations
    L1 Regularization
    L2 Regularization
    Dropout
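
As a minimal illustration of the forward pass, the sketch below chains two of the layers listed above, an inner product (fully connected) layer and a sigmoid activation, in NumPy. The function names, shapes, and values are illustrative assumptions, not part of this project's API:

    import numpy as np

    def inner_product_forward(x, W, b):
        # Fully connected (inner product) layer: y = x W^T + b
        return x @ W.T + b

    def sigmoid_forward(z):
        # Logistic sigmoid activation: sigma(z) = 1 / (1 + exp(-z))
        return 1.0 / (1.0 + np.exp(-z))

    # Hypothetical batch of 4 samples with 3 features, mapped to 2 units.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 3))
    W = rng.normal(size=(2, 3))
    b = np.zeros(2)

    h = inner_product_forward(x, W, b)  # pre-activations, shape (4, 2)
    y = sigmoid_forward(h)              # activations in (0, 1), shape (4, 2)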
Backward:

Contents:

  Losses
    Contrastive Loss
    Hinge Loss
    Euclidean Loss
    Infogain Loss
    Sigmoid Cross Entropy Loss
    Softmax Loss
    Multinomial Logistic Loss
    Smooth L1 Loss
  Gradients
    Stochastic Gradient Descent
    Ada Delta
    Adaptive Gradient
    Adam
    Nesterov’s Accelerated Gradient
    RMS Prop
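
To illustrate the backward pass, the sketch below pairs one of the losses above (Euclidean loss) with the simplest of the gradient methods (stochastic gradient descent). Again, the function names, the learning rate, and the toy model are illustrative assumptions:

    import numpy as np

    def euclidean_loss(pred, target):
        # Euclidean (L2) loss: (1 / 2N) * sum ||pred_i - target_i||^2
        n = pred.shape[0]
        return np.sum((pred - target) ** 2) / (2.0 * n)

    def euclidean_loss_grad(pred, target):
        # Gradient of the Euclidean loss with respect to the predictions
        return (pred - target) / pred.shape[0]

    def sgd_step(param, grad, lr=0.01):
        # Vanilla stochastic gradient descent: param <- param - lr * grad
        return param - lr * grad

    # Hypothetical one-parameter linear model fit by SGD.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(8, 1))
    target = 3.0 * x
    w = np.zeros((1, 1))
    for _ in range(100):
        pred = x @ w
        # Chain rule: dL/dw = x^T * dL/dpred
        w = sgd_step(w, x.T @ euclidean_loss_grad(pred, target), lr=0.1)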