
What is the correct BatchNorm layer for digits? #1433

Closed

RSly opened this issue Feb 3, 2017 · 2 comments

RSly commented Feb 3, 2017

Hi,

I would like to know the correct way of defining BatchNorm with the latest DIGITS 5.1-dev and NVcaffe 0.15.14.

I have been trying the suggestions in #629 and #3347 with little success.

lukeyeager (Member) commented

If it helps, here are the parameters you can set for BatchNorm in NVcaffe 0.15.14:
https://github.com/NVIDIA/caffe/blob/v0.15.14/src/caffe/proto/caffe.proto#L587-L605

As far as I know, this example should have the correct BatchNorm settings for that version of NVcaffe:
https://github.com/NVIDIA/caffe/blob/v0.15.14/examples/cifar10/cifar10_full_sigmoid_train_test_bn.prototxt#L68-L86
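
For reference, here is a minimal sketch of what such a layer looks like in prototxt. The layer/blob names (`bn1`, `conv1`) are placeholders, and the filler fields are my reading of the `BatchNormParameter` message linked above, so double-check against the cifar10 example:

```
layer {
  name: "bn1"            # placeholder layer name
  type: "BatchNorm"
  bottom: "conv1"        # placeholder input blob
  top: "bn1"
  batch_norm_param {
    # Decay of the running mean/variance estimates.
    moving_average_fraction: 0.999
    # Added to the variance to avoid dividing by zero.
    eps: 1e-5
    # Fillers for the learned scale/bias terms, as listed in
    # NVcaffe 0.15's BatchNormParameter (see the proto link above).
    scale_filler {
      type: "constant"
      value: 1
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
```

If NVcaffe keeps the upstream BVLC semantics here, `use_global_stats` defaults to false during TRAIN and true during TEST, so the accumulated running statistics are used automatically at test time and you normally don't need to set that field by hand.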

RSly (Author) commented Feb 6, 2017

Thanks for your answer, @lukeyeager!

Unfortunately, batch normalization with deep networks in DIGITS/NVcaffe is very confusing.

Also, no one has commented on the related NVcaffe issues about bugs and questions for weeks now; see
NVIDIA/caffe#279

and

NVIDIA/caffe#276

I just wonder if anyone has a working template.

RSly closed this as completed Jun 8, 2017