
can't use weight directly #20

Closed
txthanh1178793 opened this issue Mar 29, 2019 · 6 comments

@txthanh1178793

I get an error when I run test.py: ValueError: Dimension 0 in both shapes must be equal, but are 25088 and 4608. Shapes are [25088,256] and [4608,256]. for 'Assign_84' (op: 'Assign') with input shapes: [25088,256], [4608,256]

@Rasoul20sh

Same issue...

@AayushShah25

I get an error like this:
Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_4/MaxPool' (op: 'MaxPool') with input shapes: [?,1,81,128].

Please help, @OeslleLucena, as we are using your great work!

@nalinmittal-eclipse

Which Keras backend are you using? I got the negative dimension error when running the code with the TensorFlow backend.

@SE2AI

SE2AI commented Jun 30, 2020

It seems something is wrong with the input shape; the authors did not specify the expected input shape, which causes the error.
The model was probably exported with the Theano backend, so adding the code below should solve the negative dimension problem:

```python
from keras import backend as K
K.set_image_dim_ordering('th')
```

@lawo123

lawo123 commented Dec 1, 2020

You can set the input shape to 112x112.

@pratikadarsh

For anyone still facing the 'Negative Dimension' issue: it is caused by a likely typo in the Convolution2D() calls in the load_model() method. For instance, take the line below:

model.add(Convolution2D(64, 3, 3, activation='relu', name='conv1_1'))

It is interpreted as kernel_size=3 and strides=3. When strides != 1, even padding='same' will not produce a feature map of the same size as the input. Because of this, the feature map keeps shrinking across consecutive Convolution2D calls (whereas in the VGG16 architecture, consecutive conv layers keep the same feature-map size). By line 38, when MaxPooling2D is applied, the feature map has shrunk to 1x1, and max pooling with a 2x2 kernel on a 1x1 feature map is obviously going to raise an error.
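The shrinkage described above is easy to check by hand. A minimal sketch (the `out_size` helper is hypothetical, not from the repo), using the 96x96 input size this model expects:

```python
# For 'valid' padding, a conv/pool layer's output size is:
#   out = floor((n - kernel) / stride) + 1
def out_size(n, kernel, stride):
    return (n - kernel) // stride + 1

n = 96  # input size this model expects
# Keras 2 reads Convolution2D(64, 3, 3) as kernel_size=3, strides=3,
# so each conv divides the spatial size by roughly 3:
for _ in range(4):
    n = out_size(n, kernel=3, stride=3)  # 96 -> 32 -> 10 -> 3 -> 1
print(n)  # 1: a 2x2 MaxPool on a 1x1 map then fails with Negative dimension
# With the intended strides=1, the same four convs give 96 -> 94 -> 92 -> 90 -> 88.
```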

In order to resolve this, I made a minor change:

model.add(Convolution2D(64, (3, 3), activation='relu', name='conv1_1'))

Here kernel_size=(3, 3) and strides=(1, 1) by default.

This solved the issue of 'Negative Dimension Error' for me.

Also, please note that the input image size has to be 96x96; this was mentioned by the author (@OeslleLucena) here. In short, because the model was trained with a 96x96 input, we are restricted to the same size at inference time. Otherwise, for plain VGG16, you can use sizes below 224x224.
