What’s going on inside ConvNets?
1. First Layer: Visualize Filters
The first-layer filters of a ConvNet can be visualized directly as images, and they clearly show the low-level features (oriented edges and opposing colors) learned from the raw input:
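Visualizing first-layer filters amounts to rescaling each filter's weight tensor into a displayable image. A minimal numpy sketch; the random weights here are a stand-in for trained conv1 filters:

```python
import numpy as np

def filters_to_images(weights):
    """Rescale each (H, W, 3) filter to [0, 1] so it can be shown as an RGB image."""
    images = []
    for w in weights:  # weights: (num_filters, H, W, 3)
        lo, hi = w.min(), w.max()
        images.append((w - lo) / (hi - lo + 1e-8))
    return np.stack(images)

# Stand-in for trained conv1 weights, e.g. 64 filters of shape 11x11x3 (AlexNet-like).
conv1 = np.random.randn(64, 11, 11, 3)
imgs = filters_to_images(conv1)
print(imgs.shape)  # (64, 11, 11, 3), all values in [0, 1]
```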
The outputs of later layers, however, cannot be interpreted this directly:
2. Last Layer
Last layer (the fully connected layer before the classifier): a nearest-neighbor search in feature space retrieves images of the same object class:
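The nearest-neighbor step is just an L2 search over feature vectors. A sketch, with random vectors standing in for last-layer (e.g. 4096-dim) features:

```python
import numpy as np

def nearest_neighbors(query, features, k=5):
    """Return indices of the k feature vectors with smallest L2 distance to the query."""
    dists = np.linalg.norm(features - query, axis=1)
    return np.argsort(dists)[:k]

# Stand-in for 4096-dim last-layer feature vectors of a database and a test image.
rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 4096))
q = db[42] + 0.01 * rng.normal(size=4096)  # slightly perturbed copy of entry 42
print(nearest_neighbors(q, db, k=3))       # entry 42 comes back first
```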
Visualizing the same features with a dimensionality-reduction algorithm:
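The lecture uses t-SNE for this; a simpler PCA projection (via SVD) shows the same idea of mapping high-dimensional features to 2-D points for plotting:

```python
import numpy as np

def pca_2d(features):
    """Project high-dimensional feature vectors to 2-D along the top-2 principal axes."""
    centered = features - features.mean(axis=0)
    # Right singular vectors of the centered data are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

feats = np.random.default_rng(1).normal(size=(200, 4096))  # stand-in features
points = pca_2d(feats)
print(points.shape)  # (200, 2) — one 2-D point per image
```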
Example: activation values of an intermediate feature that responds to faces:
Occlusion experiments measure how much each part of the image affects the recognition result. In the heat map on the right, redder pixels indicate less influence and whiter pixels more influence:
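The experiment slides a gray patch over the image and records the class score at each position. A sketch, where a toy scoring function stands in for a trained classifier:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=8, stride=8):
    """Slide a gray patch over the image and record the class score at each position."""
    H, W = image.shape[:2]
    heat = np.zeros(((H - patch) // stride + 1, (W - patch) // stride + 1))
    for i, y in enumerate(range(0, H - patch + 1, stride)):
        for j, x in enumerate(range(0, W - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.5  # gray square
            heat[i, j] = score_fn(occluded)
    return heat

# Toy score function standing in for a trained classifier: it only "cares"
# about the brightness of the top-left corner of the image.
score = lambda img: img[:16, :16].mean()
heat = occlusion_map(np.ones((64, 64)), score)
print(heat.shape)  # (8, 8); scores drop where the occluder hits the top-left
```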
Saliency Maps
Saliency maps are computed from the gradient of the (last-layer) class score with respect to the input pixels; they also reveal how much each pixel influences the result:
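Given the backpropagated gradient of the class score with respect to the pixels, the saliency value of a pixel is the maximum absolute gradient over the three color channels. A sketch, with a random array standing in for the backpropagated gradient:

```python
import numpy as np

def saliency_map(grad):
    """grad: (H, W, 3) gradient of the class score w.r.t. input pixels.
    Per-pixel saliency is the max absolute gradient over channels."""
    return np.abs(grad).max(axis=2)

grad = np.random.default_rng(2).normal(size=(224, 224, 3))  # stand-in gradient
sal = saliency_map(grad)
print(sal.shape)  # (224, 224), non-negative
```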
Going further, this map can be used for image segmentation.
3. Intermediate Layers
3.1 Visualizing CNN features: Gradient Ascent
To find what an individual neuron in the network represents:
Steps to synthesize an image that maximally activates a neuron (gradient ascent):
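The steps above can be sketched as: start from a zero (or noise) image, repeatedly compute the gradient of the neuron's score plus an L2 regularizer, and step uphill. A minimal sketch where a toy linear "neuron" stands in for the network:

```python
import numpy as np

def gradient_ascent(score_grad, shape, steps=100, lr=0.1, l2=0.01):
    """Maximize score(I) - l2 * ||I||^2 by gradient ascent, starting from zeros."""
    img = np.zeros(shape)
    for _ in range(steps):
        g = score_grad(img) - 2 * l2 * img  # gradient of the regularized objective
        img += lr * g
    return img

# Toy "neuron": score(I) = sum(target * I), so its gradient is the target pattern.
target = np.random.default_rng(3).normal(size=(8, 8))
img = gradient_ascent(lambda I: target, shape=(8, 8))
# The synthesized image aligns with the pattern the "neuron" responds to.
print(np.corrcoef(img.ravel(), target.ravel())[0, 1])  # ~1.0
```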
Improving the algorithm (better regularization) yields clearer visualizations:
Maximally activating images for intermediate-layer neurons:
Multiple targets: adding “multi-faceted” visualization gives even nicer results (plus more careful regularization, center-bias):
4. DeepDream: Amplify existing features
Rather than synthesizing an image to maximize a specific neuron, instead try to amplify the neuron activations at some layer in the network:
code:
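The trick in DeepDream is to set the gradient of the chosen layer's activations equal to the activations themselves; this is gradient ascent on 0.5·‖a‖², so whatever the layer already detects gets amplified. A toy sketch where a linear map stands in for the network up to the chosen layer:

```python
import numpy as np

rng = np.random.default_rng(4)
W = rng.normal(size=(16, 64))      # stand-in for the network up to the chosen layer
img = rng.normal(size=64) * 0.1    # "image", started from small noise
start_norm = np.linalg.norm(W @ img)

for _ in range(50):
    act = W @ img                  # forward pass to the chosen layer
    grad = W.T @ act               # backward pass with d(objective)/d(act) = act,
    img += 0.01 * grad             # i.e. gradient ascent on 0.5 * ||act||^2
    img = np.clip(img, -1, 1)      # keep "pixel" values in a valid range

print(np.linalg.norm(W @ img) > start_norm)  # existing responses were amplified
```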
Result images:
5. Feature Inversion
Use features from different layers to rebuild the image: Reconstructing from different layers of VGG-16
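Feature inversion searches for an image x whose features Φ(x) match a target feature vector Φ₀, by minimizing ‖Φ(x) − Φ₀‖² (in practice plus a regularizer such as total variation, omitted here). A toy sketch with a linear feature map standing in for a VGG layer:

```python
import numpy as np

rng = np.random.default_rng(5)
Phi = rng.normal(size=(32, 64))        # stand-in for the feature map of some layer
x_true = rng.normal(size=64)
phi0 = Phi @ x_true                    # target features of the original image

x = np.zeros(64)                       # reconstruct by gradient descent
for _ in range(1000):
    residual = Phi @ x - phi0
    x -= 0.005 * (Phi.T @ residual)    # gradient of 0.5 * ||Phi x - phi0||^2

print(np.linalg.norm(Phi @ x - phi0))  # feature mismatch is now small
```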
6. Neural Texture Synthesis
Algorithm steps (I didn't fully follow these):
Reconstructing texture from higher layers recovers larger features from the input texture:
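The core object in this algorithm is the Gram matrix: reshape a layer's C×H×W feature map into C spatial vectors and take all pairwise dot products. This discards spatial layout but keeps which features co-occur, which is what characterizes a texture; synthesis then matches the generated image's Gram matrices to the input texture's. Computing the Gram matrix itself:

```python
import numpy as np

def gram_matrix(features):
    """features: (C, H, W) activation map -> (C, C) Gram matrix G[i, j] = <F_i, F_j>."""
    C = features.shape[0]
    F = features.reshape(C, -1)        # each row: one channel flattened over space
    return F @ F.T

fmap = np.random.default_rng(6).normal(size=(64, 14, 14))  # stand-in activations
G = gram_matrix(fmap)
print(G.shape)  # (64, 64), symmetric
```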
7. Neural Style Transfer
Pipeline for the synthesis:
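The pipeline minimizes a weighted sum of a content loss (match the content image's features) and a style loss (match the style image's Gram matrices). A sketch of the combined objective on a single layer's feature maps; the weights `alpha`/`beta` are illustrative stand-ins for tuned values:

```python
import numpy as np

def gram(F):
    C = F.shape[0]
    return F.reshape(C, -1) @ F.reshape(C, -1).T

def style_transfer_loss(gen, content, style, alpha=1.0, beta=1e-3):
    """gen/content/style: (C, H, W) feature maps from the same layer of the CNN.
    Total loss = alpha * content loss + beta * style (Gram) loss."""
    content_loss = np.sum((gen - content) ** 2)
    style_loss = np.sum((gram(gen) - gram(style)) ** 2)
    return alpha * content_loss + beta * style_loss

rng = np.random.default_rng(7)
c, s = rng.normal(size=(32, 8, 8)), rng.normal(size=(32, 8, 8))
print(style_transfer_loss(c, c, s) < style_transfer_loss(s, c, s))
```

In the real algorithm this loss is minimized over the generated image's pixels by backpropagating through the (frozen) CNN.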
Example results:
Resizing style image before running style transfer algorithm can transfer different types of features:
Problem: Style transfer requires many forward / backward passes through VGG; very slow!
Solution: Train another neural network to perform style transfer for us!
Example results:
One Network, Many Styles:
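One way this is done is conditional instance normalization: all styles share one network, and each style owns only its scale (gamma) and shift (beta) parameters applied after normalizing each channel over space. A sketch; the per-style parameters here are random stand-ins for learned ones:

```python
import numpy as np

def conditional_instance_norm(x, gammas, betas, style, eps=1e-5):
    """x: (C, H, W). Normalize each channel over space, then apply the
    scale/shift belonging to the selected style."""
    mu = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)
    g = gammas[style][:, None, None]
    b = betas[style][:, None, None]
    return g * x_hat + b

rng = np.random.default_rng(8)
x = rng.normal(size=(16, 8, 8))
gammas = rng.normal(size=(3, 16))  # 3 styles share one network,
betas = rng.normal(size=(3, 16))   # differing only in these parameters
out0 = conditional_instance_norm(x, gammas, betas, style=0)
out1 = conditional_instance_norm(x, gammas, betas, style=1)
print(out0.shape, np.allclose(out0, out1))  # (16, 8, 8) False
```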