Can the weight file be used directly #1

@OeslleLucena Hi, are the two weight files REPLAY-ftweights18.h5 and 3DMAD-ftweights18.h5 already trained? Can they be used directly, or do they need to be combined with the VGG-16 weights and trained further? Also, were the two files trained with the Theano (th) backend or the TensorFlow (tf) backend?

Comments
@okmmsky888 Yep, they are already trained and can be used directly; there is no need to combine them with the VGG-16 weights. There are two files because the network was fine-tuned on each dataset separately, REPLAY-ATTACK and 3DMAD. It was all done in Theano.
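For reference, a minimal sketch of loading one of these fine-tuned weight files in Keras. The topology below is an assumption pieced together from this thread (VGG-16 convolutional base, 96x96 inputs, a small dense head); loading only succeeds if it matches the architecture the weights were actually fine-tuned on:

```python
from keras.applications.vgg16 import VGG16
from keras.layers import Flatten, Dense, Dropout
from keras.models import Model

# Assumed topology: VGG-16 convolutional base plus a small dense head.
# With 96x96 RGB inputs the base ends in a 3x3x512 feature map.
base = VGG16(weights=None, include_top=False, input_shape=(96, 96, 3))
x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)        # head sizes are assumptions
x = Dropout(0.5)(x)
out = Dense(1, activation='sigmoid')(x)     # binary real/attack output (assumption)
model = Model(base.input, out)

# Fine-tuned weights from this repository; no extra VGG-16 weights are needed.
model.load_weights('REPLAY-ftweights18.h5')  # or '3DMAD-ftweights18.h5'
```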
@OeslleLucena Hi, I loaded REPLAY-ftweights18.h5 and got the following error. Do you know where the problem is?

ValueError: ('shapes (1,25088) and (4608,256) not aligned: 25088 (dim 1) != 4608 (dim 0)', (1, 25088), (4608, 256)) HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
@okmmsky888 It's a mismatch of shapes in the network: ValueError: ('shapes (1,25088) and (4608,256) not aligned: 25088 (dim 1) != 4608 (dim 0)', (1, 25088), (4608, 256)). Does the error appear when you call predict, or as soon as you load the weights? What commands exactly are you using? Could you please post them here?
@OeslleLucena Yes, the exception is thrown when I make a prediction; loading the weights raises no exception. I did not run it from the command line, I ran it directly in Eclipse. This is the program:

```python
def load_model(weightsPath, img_width, img_height):
    ...  # builds the network and loads the .h5 weights (body omitted in the post)

def read_preprocess_image(imgPath, img_width, img_height):
    ...  # reads and preprocesses an image for the network (body omitted in the post)

if __name__ == '__main__':
    ...  # loads the model, preprocesses an image, and calls predict (body omitted in the post)
```
@okmmsky888 Now I see the problem. I forgot to mention it on GitHub, but it is explained in the paper that the trained models expect inputs of size 96x96, since I did pre-processing with a face detector. So an image of size 224x224 won't work, because the weights of the dense layers are shaped according to the input size of the training images. I therefore recommend resizing your image to 96x96 before testing it on the CNN. You could use the skimage library for that task. Here is a useful link: http://scikit-image.org/docs/dev/api/skimage.transform.html#skimage.transform.resize
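The numbers in the error above are consistent with this: with VGG-16's five pooling stages, a 224x224 input flattens to 7x7x512 = 25088 features, while a 96x96 input flattens to 3x3x512 = 4608, which is what the dense weights expect. A minimal resize sketch using skimage (the file path is a placeholder; any further preprocessing still has to match what was used for training):

```python
from skimage import io
from skimage.transform import resize

img = io.imread('face.jpg')        # placeholder path to a cropped face image
img96 = resize(img, (96, 96))      # float image in [0, 1], shape (96, 96, 3)
# Channel ordering and any mean subtraction must still match the training setup.
```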
@OeslleLucena I resized the image to (96, 96) and the program now runs normally. What are the labels for the two classes: 0 = attack and 1 = real? I tested a few photos. With a real photo, both models predict 1. But with an attack photo (a shot of another phone's screen taken with my phone), the 3DMAD model predicts 0 while the REPLAY model predicts 1. Could this result be related to the scene my photos were taken in?
@okmmsky888 I set 0 = real, 1 = attack in both models, REPLAY and 3DMAD. 3DMAD is a database containing impostors wearing fake masks, while REPLAY-ATTACK covers more types of attacks. The models I provided were trained on part of each database, not all of it, since I needed images left over for testing. I would trust REPLAY more in this case, but remember that both models make mistakes too. Another thing worth mentioning is that you can also train your own model! Hope that answers your question.
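A small sketch of how a prediction could be read with this label convention, assuming `model` and `img96` from the sketches above, a single sigmoid output, and a 0.5 threshold (none of which are confirmed in this thread):

```python
import numpy as np

x = np.expand_dims(img96, axis=0)            # add batch dimension -> (1, 96, 96, 3)
score = float(model.predict(x)[0])           # with Theano ordering this may need (1, 3, 96, 96)
print('attack' if score >= 0.5 else 'real')  # 0 = real, 1 = attack per this comment
```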
@OeslleLucena Yes, I am going to train my own model based on the CASIA Face Anti-Spoofing Database and the REPLAY-ATTACK database. I don't plan to crop only the face region, because the CASIA database also contains cues such as the attacker's hands, the phone held in front of the camera, hand-held printed photos, and other features.
@OeslleLucena I used Dlib for face alignment and cropping, then tested on your trained model, but the results do not seem ideal. Is there a problem with my approach?
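A minimal sketch of the kind of Dlib crop described here, using the standard frontal face detector (landmark-based alignment is omitted and the image path is a placeholder):

```python
import dlib
from skimage import io
from skimage.transform import resize

detector = dlib.get_frontal_face_detector()

img = io.imread('frame.jpg')                 # placeholder path
dets = detector(img, 1)                      # upsample once to catch smaller faces
if dets:
    d = dets[0]
    face = img[max(d.top(), 0):d.bottom(), max(d.left(), 0):d.right()]
    face96 = resize(face, (96, 96))          # match the network's 96x96 input
```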
@okmmsky888 What do you mean by "does not seem to be ideal"? Is the CNN giving a wrong classification? Also, which database did this image come from? Remember that testing across different databases is a bit harder than what I did.
@OeslleLucena Yes, the CNN classification is wrong. The test image is an ordinary face photo taken with my phone.
@okmmsky888 As I mentioned before, testing across different databases is not easy, and what you are doing is not a correct evaluation. Therefore, that result is expected.
@OeslleLucena Is it only straightforward to test data from the test sets of REPLAY and 3DMAD? Is the procedure different for other data? What should I do to properly test other data?
You have to include a training step on the different databases to properly classify images from a different dataset and validate your system. I would recommend reading up on inter-database and intra-database tests.
@okmmsky888 Take a look at this paper: http://biometrics.cse.msu.edu/Publications/Face/PatelHanJain_FaceAntispoofing_CCBR2016.pdf
@OeslleLucena Okay, thank you!
@okmmsky888 You're welcome!
@okmmsky888 I have recently been studying this paper and want to reproduce its results, but I have run into some problems. After reading your exchange with the author, I see that you got it working, so I want to ask some questions about the experiment. Can the weights the author provides be used directly as a trained model, so that I just feed in the preprocessed images for testing? Thank you very much!
@myselfsuperhero I have not touched this project for some time. If I remember correctly, the weights the author provides are the result of training on part of the data.
@okmmsky888 The author said in the first message that REPLAY-ftweights18.h5 and 3DMAD-ftweights18.h5 are already trained and can be used directly. If that is not the case, what should I do next? Could you give me some advice about the process? I am a little confused about this. Thanks for your generous help.
I want to train my own model. I put the REPLAY-ftweights18.h5 path into
@AblAzhidin Same problem with this weight-loading code:

```python
# f is an h5py.File opened on the weights file; weights are copied layer by layer.
for k in range(f.attrs['nb_layers']):
    if k >= len(model.layers):
        # we don't look at the last (fully-connected) layers in the savefile
        break
    g = f['layer_{}'.format(k)]
    weights = [g['param_{}'.format(p)] for p in range(g.attrs['nb_params'])]
    model.layers[k].set_weights(weights)
```

Maybe this needs an update...

@okmmsky888 May I ask which versions of Keras and TensorFlow you used in this project? Much appreciated.
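As an aside, a quick way to check which Keras and TensorFlow versions a given environment is actually running (this does not answer which versions the project was written for; it only inspects the local install):

```python
import keras
import tensorflow as tf

print('Keras:', keras.__version__)
print('TensorFlow:', tf.__version__)
```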
@myselfsuperhero Sorry, I haven't touched this project for a long time. It's too difficult for me to remember now.
@AblAzhidin Sorry, I haven't touched this project for a long time. It's too difficult for me to remember now.
@yyyreal Sorry, I haven't touched this project for a long time. It's too difficult for me to remember now.
@okmmsky888 Thanks for your reply.
@yyyreal Did you get it to work?
I just replaced the following piece of code:

```python
# f is an h5py.File opened on the weights file (Keras 1 / Python 2 style code).
for k in range(f.attrs['nb_layers']):
    if k >= len(model.layers):
        # we don't look at the last (fully-connected) layers in the savefile
        break
    g = f['layer_{}'.format(k)]
    weights = [g['param_{}'.format(p)] for p in range(g.attrs['nb_params'])]
    model.layers[k].set_weights(weights)
f.close()
print 'Model loaded.'
```

with an updated version. I am not sure whether I am right, but I've followed the Keras documentation.
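For reference, one way the manual `layer_{}` loop tends to be replaced in Keras 2 is to let `keras.applications` attach the ImageNet VGG-16 weights directly; this is only a guess at the kind of replacement meant above, not the code that was actually posted:

```python
from keras.applications.vgg16 import VGG16

# Downloads and attaches the ImageNet VGG-16 convolutional weights itself,
# so there is no need to iterate over 'layer_{}' groups in the HDF5 file.
base = VGG16(weights='imagenet', include_top=False, input_shape=(96, 96, 3))
print('Model loaded.')
```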
@yyyreal Bro, please teach me how to use this. It's urgent, I can't get it to run.
@CacheTechLtd How do you want to use it??? I gave it a try; this CNN-based algorithm is quite demanding on the camera. The training accuracy is very high, but in real use the camera differs from the one that recorded REPLAY-ATTACK, so the feature distribution of the data is different and the results are poor. You only get decent accuracy if you collect your own data and then test and run with the same camera.
@yyyreal I just want to get it running to see the results, but I don't know how to train and run it; it keeps throwing errors. In another issue someone shared his code, so I'll give it a try: https://github.com/GAVANXU/anti-spoofing-keras
@CacheTechLtd I had a quick look; it's not much different from my code (I don't even remember what I changed). The main thing is upgrading the Keras API from version 1 to 2. But I have to say, this algorithm does not generalize well.
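For context, a few of the typical Keras 1 to Keras 2 API renames that this kind of upgrade involves (illustrative only; which of these the code above actually needed is not stated in the thread):

```python
from keras.layers import Conv2D, Dense

# Keras 1: Convolution2D(64, 3, 3, border_mode='same')
conv = Conv2D(64, (3, 3), padding='same')    # Keras 2 name and signature

# Keras 1: Dense(output_dim=256)
dense = Dense(units=256, activation='relu')

# Keras 1: model.fit(X, y, nb_epoch=10)  ->  Keras 2: model.fit(X, y, epochs=10)
```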
@yyyreal I have a trial product from another company, a stereo (dual-lens) camera that can do liveness detection, but so far they haven't sent over the SDK, so I wanted to find a project online to look at. On YouTube I saw something that apparently uses a color-based algorithm, and the results looked quite good too.
@CacheTechLtd Traditional methods are heavily affected by lighting, camera, and motion, and CNN frameworks can't run on ARM. I have tried many liveness-detection attendance devices, both single-lens and dual-lens, and broke all of them.
@yyyreal I just called them again to push for the SDK, and the contact person mentioned that their product can already resist mask attacks. I don't have a mask to test with right now, so I only tested photos and videos, and it works quite well. It is fairly expensive though; does anyone have a free alternative to share? Something that can handle simple photo and video attacks would be enough.
@CacheTechLtd Sorry, I don't have such resources at the moment. Just search for papers and reproduce one yourself; they're all free.
Hey, good work! Could you please send me the prediction code, and also the trained model if you have trained your own model based on the CASIA Face Anti-Spoofing Database and the REPLAY-ATTACK database?