
Why is my experimental cifar10 result far worse than your paper shows? #5

Open
jianyuheng opened this issue Oct 29, 2018 · 1 comment


jianyuheng commented Oct 29, 2018

Hi, sir!
Thanks for your excellent idea! I downloaded your code and planned to reproduce your experimental results, but when I ran
sh scripts/cifar10_svdd.sh gpu cifar 0 adam 0.0001 150 0.1 1 1 0 exp 3 1 -1
I got a bad result:
Train objective: 1.69638
Train accuracy: 90.02%
Val objective: 1.75854
Val accuracy: 88.90%
Test objective: 3.78288
Test accuracy: 31.02%
Test AUC: 60.71%

I think something went wrong, but I don't know how to improve the result. I am looking forward to your reply!

@lukasruff (Owner)

Hi jyhengcoder,

thanks for your interest and the kind words.

My guess is that this is the result without autoencoder pretraining, i.e. without initializing the Deep SVDD network weights from the encoder weights of a corresponding autoencoder.

You can use the in_name argument of the cifar10_svdd.sh script to load the encoder weights from a pretrained autoencoder.

For example, first run
sh scripts/cifar10_cae.sh gpu cifar_cae 0 adam 0.0001 350 3 1 -1
to train the convolutional autoencoder we used for pretraining in our paper, and then run
sh scripts/cifar10_svdd.sh gpu cifar 0 adam 0.0001 150 0.1 1 1 0 cifar_cae/weights_final 3 1 -1
to train the OC Deep SVDD model.
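For convenience, the two steps above can be combined into a single shell snippet. The argument values are taken verbatim from the commands above; the experiment name cifar_cae is just the output folder used to pass the pretrained weights between the two scripts, and can be renamed as long as both occurrences match:

```shell
# Step 1: pretrain the convolutional autoencoder
# (the trained weights are saved under the cifar_cae experiment folder)
sh scripts/cifar10_cae.sh gpu cifar_cae 0 adam 0.0001 350 3 1 -1

# Step 2: train Deep SVDD, initializing the network from the
# pretrained encoder weights via the in_name argument
sh scripts/cifar10_svdd.sh gpu cifar 0 adam 0.0001 150 0.1 1 1 0 cifar_cae/weights_final 3 1 -1
```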

Let me know if you were able to replicate our results.
