Neural Networks and Deep Kernel Shaping

Duration: 0:55:17 | Added: 05 Apr 2022
Rapid training of deep neural networks without skip connections or normalization layers using Deep Kernel Shaping.

Using an extended and formalized version of the Q/C map analysis of Poole et al. (2016), along with Neural Tangent Kernel theory, we identify the main pathologies present in deep networks that prevent them from training fast and generalizing to unseen data, and show how these can be avoided by carefully controlling the "shape" of the network's initialization-time kernel function. We then develop a method called Deep Kernel Shaping (DKS), which accomplishes this using a combination of precise parameter initialization, activation function transformations, and small architectural tweaks, all of which preserve the model class. In our experiments we show that DKS enables SGD training of residual networks without normalization layers on ImageNet and CIFAR-10 classification tasks at speeds comparable to standard ResNetV2 and Wide-ResNet models, with only a small decrease in generalization performance. When using K-FAC as the optimizer, we achieve similar results for networks without skip connections. Our results apply to a wide variety of activation functions, including those which traditionally perform very badly, such as the logistic sigmoid. In addition to DKS, we contribute a detailed analysis of skip connections, normalization layers, special activation functions like ReLU and SELU, and various initialization schemes, explaining their effectiveness as alternative (and ultimately incomplete) ways of "shaping" the network's initialization-time kernel.
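The Q/C map analysis referenced in the abstract can be sketched numerically. The snippet below is an illustrative sketch, not code from the talk or the DKS paper: it estimates the one-layer variance (Q) and correlation (C) maps by Monte Carlo for a ReLU network with He-style initialization (sigma_w^2 = 2, sigma_b = 0), and demonstrates the kind of kernel degeneracy DKS targets, namely that the correlation between distinct inputs drifts toward 1 with depth. All function names and parameter choices here are assumptions made for illustration.

```python
# Illustrative sketch of the Q/C map analysis (Poole et al., 2016),
# estimated by Monte Carlo. Not code from the talk or the DKS paper.
import numpy as np

rng = np.random.default_rng(0)

def q_map(phi, q, sigma_w2=2.0, sigma_b2=0.0, n=1_000_000):
    """One layer of the Q (variance) map: q' = sigma_w^2 E[phi(sqrt(q) z)^2] + sigma_b^2."""
    z = rng.standard_normal(n)
    return sigma_w2 * np.mean(phi(np.sqrt(q) * z) ** 2) + sigma_b2

def c_map(phi, c, q_star, sigma_w2=2.0, sigma_b2=0.0, n=1_000_000):
    """One layer of the C (correlation) map, evaluated at fixed-point variance q_star."""
    z1 = rng.standard_normal(n)
    z2 = c * z1 + np.sqrt(max(0.0, 1.0 - c * c)) * rng.standard_normal(n)
    u1, u2 = np.sqrt(q_star) * z1, np.sqrt(q_star) * z2
    return (sigma_w2 * np.mean(phi(u1) * phi(u2)) + sigma_b2) / q_star

def relu(x):
    return np.maximum(x, 0.0)

# For ReLU with He-style scaling (sigma_w^2 = 2, sigma_b = 0) the Q map is the
# identity, so unit-variance inputs give the fixed point q* = 1 exactly.
q_star = 1.0
print(f"q after one layer: {q_map(relu, q_star):.4f}")  # ~1.0

# Iterating the C map models the kernel of a deep network at initialization:
# the correlation between two distinct inputs creeps toward 1 with depth,
# so the network maps increasingly different inputs to nearly the same point.
c = 0.5
for layer in range(1, 51):
    c = float(np.clip(c_map(relu, c, q_star), -1.0, 1.0))  # clip Monte Carlo noise
    if layer in (1, 10, 50):
        print(f"layer {layer:2d}: c = {c:.4f}")
```

Iterating the C map stands in for propagating the initialization-time kernel through depth. In these terms, "shaping" the kernel, as the abstract describes, amounts to choosing the parameter initialization and activation function transformations so that these maps stay well-behaved (close to the identity) at any depth.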
